8 Bot-Time Stories

New Model, Original Parts

Jean-Luc Godard’s first choice of title for his 1965 French New Wave film was not Alphaville, the title he eventually adopted, but Tarzan versus IBM.1 All films are exercises in the practice of semiotics, but Godard’s Alphaville was to be a movie consciously steeped in the philosophy of semiotics. Signs and signifiers abound in bright neon and in large print, with significations that challenge and tease the viewer. Godard imagined a future dystopia—the eponymous city of Alphaville—whose populace is regulated and controlled by signs and in which the dictionary (found in every hotel room in place of a Bible) is continuously revised to excise proscribed linguistic signs and thereby limit people’s access to proscribed ideas. We can see why Godard would want his film’s protagonist and antagonist to be the most potently familiar signifiers of all. In a film that sets out to contrast the natural with the artificial and pit the human against the inhuman, Tarzan could serve as a signifier for all that is natural, virile, raw, and uncooked in the human condition. Against the passionate nobility of this savage, who bows to no system of control but instead obeys his own code of honor, Godard would pit IBM, a loaded signifier (in 1965 at least) for all that is automated, overregulated, passionless, and neatly buttoned down. This superficial clash of genres conceals a plotline that is commonplace to each. In the first, Tarzan is “the One” who must lead a rebellion against the machines that scheme to rob us all of our humanity, while in the second, IBM assumes the role of the colonialists in those old movies who scheme to turn the jungle into a factory and its natives into soulless slaves. Naturally, it is the neo-Tarzan who must disrupt and dismantle the machinery of power and dramatically escape with a liberated Jane by his side. 
This is a turn of events that is as satisfying as it is predictable, which is why we see it so often, in Tarzan, Alphaville, Blade Runner, The Matrix, and many other films.

Godard’s desire to repurpose Tarzan and IBM raised some legalistic eyebrows, flirting as it did with slander, copyright theft, and trademark infringement. Rooting around for alternate signifiers of comparable potency, Godard replaced Tarzan with the grizzled detective, Lemmy Caution, whom he borrowed wholesale from a series of popular crime novels, and swapped out IBM for Wernher von Braun, the creator of the Nazi missiles that rained down on London in World War II. Lemmy Caution would be a worldly sci-fi hero in the mold of the hard-boiled but soulful Philip Marlowe (whom Raymond Chandler had previously fashioned as an errant knight of the Arthurian tradition, transplanted to modern crime fiction), while von Braun would represent the darkest potentialities of modern science, from nuclear weapons to artificial intelligence. In the course of the film, we learn that von Braun hides another identity from an earlier life, Nosferatu, and thus is the fusion achieved of Wernher von Braun and Robert Oppenheimer, who famously intoned, “Now I am become Death,” after the successful test of the first atom bomb. It seems that familiarity is no hindrance to creativity, and Godard’s film is a glorious mishmash of signs and ideas whose originality is not in the least bit diminished by our familiarity with the many archetypal elements that it cleverly juxtaposes.

The Irish writer Brian O’Nolan, who wrote darkly comic fiction under the pseudonym Flann O’Brien, described his narratives as self-evident shams about which readers were responsible for regulating their own levels of credulity. So a story can be as bizarre or as labyrinthine as you please just so long as you give your readers sufficient cause to invest in your characters and keep on reading. In his quest for compelling characters, O’Brien was not averse to reusing what had worked so well in the past, and he presents in his first and best novel, At Swim-Two-Birds (1939), the view that every story should be a tissue of clever character reuse. Indeed, O’Brien, who seems to have preinvented Ted Nelson’s idea for hypertext,2 years before its time, has a top-level character in his nested narrative offer the following views on the creation of compelling characters:

Characters should be interchangeable as between one book and another. The entire corpus of existing literature should be regarded as a limbo from which discerning authors could draw their characters as required, creating only when they failed to find a suitable existing puppet. The modern novel should be largely a work of reference. Most authors spend their time saying what has been said before—usually said much better. A wealth of references to existing works would acquaint the reader instantaneously with the nature of each character [and] obviate tiresome explanations.3

O’Brien might well have been predicting Godard’s Alphaville back in 1939 with its “wealth of references” to preexisting characters to obviate the need for long back stories and dry exposition. Though his tongue was firmly in cheek, O’Brien’s modernist ideas were eagerly adopted and put to good use by writer Alan Moore when assembling his rosters of characters for the graphic novels Watchmen and The League of Extraordinary Gentlemen.4 In the former, Moore sought to impose adult ideas on a trove of second-tier superheroes that DC had bought from its rival Charlton, but he was thwarted by DC’s commercial plans for those assets. So for the latter, Moore trawled Victorian novels for unencumbered characters to repurpose as he pleased, and from these he recruited his team of “gentlemen,” including Mina Harker, Dr. Jekyll, Mr. Hyde, Captain Nemo, Allan Quatermain, and the Invisible Man. There is a flavor of William Burroughs’s and Brion Gysin’s cut-up method in O’Brien’s and Moore’s willingness to slice‘n’dice the literary canon to satisfy their creative needs, and something of the bot design philosophy too in their harvesting of low-hanging fruits to bake into novel confections of their own.

The “limbo” from which O’Brien imagines the crafty author plucking his or her gently preowned characters is of course a search space dense with possibilities. Our NOC list comprises a subset of this limbo, our own league of extraordinary ladies and gentlemen from whence our bots can draw “suitable existing puppets” for their tiny theaters of the absurd. In this chapter, we explore how we might extend the generative reach of our bots to produce coherent long-form narratives that inject apt pairings of these characters into longer sequences of connected tweets with a logical three-act structure, that is, with a clear beginning, middle, and end. If it seems that modernist writers have commoditized, or at least democratized, the function of character in narrative, it may come as a relief to learn that they have done much the same to plot structures, too. There really is nothing truly new under the sun, least of all a satisfying story.

Into the “Woulds”

Godard did more than reuse characters from popular culture and fiction; his plot was preowned, too, insofar as it borrowed liberally from the legend of Orpheus and Eurydice. In that Greek myth, Orpheus journeys into the dark depths of hell to rescue Eurydice from the lord of the Underworld’s grasp. Caution fills the role of Orpheus in Alphaville, and Natasha—the daughter of von Braun and the target of Caution’s quest—is Eurydice. As they flee the disintegrating Alphaville, Caution instructs Natasha not to look back in an obvious reference to the end of the Greek myth (and an allusion to the biblical story of Lot’s wife). The plot is also not unlike that of Raymond Chandler’s The Big Sleep, insofar as it involves a search (by detective Philip Marlowe) for a general’s daughter, whom Marlowe must pry from the grasp of evil gangsters.5 The Big Lebowski was loosely based on the same Chandler novel, as was the shooting script for Blade Runner, which borrows extensively from Alphaville. Though Blade Runner was based on Philip K. Dick’s 1968 novel, Do Androids Dream of Electric Sheep?,6 it also bears many strong similarities to a largely forgotten 1962 B-movie, Creation of the Humanoids,7 which, though stuffed with good ideas, has acting and visuals that would embarrass Ed Wood. It is difficult not to see shades of one story in another when writers are so often and so easily influenced by one another.

All human stories show structural similarities to others because each has been shaped to obey unspoken expectations about what constitutes a good narrative. From the immense space of all possible story structures, we humans have carved ourselves a sweet spot that conforms to our human-shaped view of the world. Stories force a causal structure onto events, to show how purposeful actions can advance one goal while thwarting others, and a journey or quest is one of the most purposeful activities we humans can perform. We leave home in search of a foreign land and along the way encounter allies and enemies, as well as boons and obstacles. Some obstacles require a side trip to sidestep, and so one grand journey spawns a series of smaller nested journeys. It does not matter whether a journey is as epic as Odysseus’s ten-year voyage home after the Trojan War, in which we literally follow him to hell and back, or as parochial as Leopold Bloom’s walk around Dublin in James Joyce’s Ulysses, or as history-spanning as Orlando’s journey of becoming in Virginia Woolf’s novel of the same name. When a story is structured as a journey or a quest, readers gladly stow away for the ride.

It is no accident that many of our most enduring fairy tales and fables involve a journey into the dark woods, a foreboding forest of discovery in which seekers encounter love, villainy, and their true selves. Stephen Sondheim’s musical Into the Woods weaves many of those tales into a single coherent narrative, showing how no seeker is unchanged by a journey into the dark forest.8 This forest is a metaphor for all our known knowns and for all our unknown unknowns too, and while it is sometimes an actual wood, forest, or jungle, as in Conrad’s Heart of Darkness or Coppola’s Apocalypse Now, or Yoda’s swamp planet Dagobah in George Lucas’s The Empire Strikes Back, where Luke must confront his darkest fears to become a Jedi, it is most often realized in a less obvious and nonliteral form in our stories. It may be the undiscovered country of Hamlet, the realm of the afterlife whose ghosts send Hamlet on his spiral of revenge and death, or it may be a war zone, an important rite of passage, parenthood, or the world of adult responsibilities more generally. Sondheim’s two-act structure for Into the Woods is constructed so as to give its characters a collective moment of wish fulfillment in a happy-ever-after scene at the end of act 1, before making them suffer the unintended consequences of these naive wishes in act 2. When our heroes venture into the woods, literally or figuratively, they are always changed by the experience, for better or worse.

The structure of stories has been ruthlessly dissected by a long succession of humanities scholars who are to narrative what medieval grave robbers were to human anatomy. Each charts the most rutted paths into and out of the woods, though each shows a fondness for different kinds of story or different degrees of granularity. An excellent survey of the competing analyses, which are all more similar than not, is offered by John Yorke’s book Into the Woods.9 Yorke is not an academic scholar but a respected writer for television, and he approaches his subject with a keen eye for practical specifics and a deaf ear to overgenerality.

The most famous of the narrative anatomists is Joseph Campbell, in part because his structural analysis of heroic myths inspired George Lucas to write Star Wars and in part because the financial success of Star Wars then persuaded Hollywood to take the schematic analysis of cinematic stories seriously. Campbell argued that most mythic hero stories instantiate a generic journey schema that can be encoded as a single abstract monomyth, against which new and unseen stories can be analyzed or from which they can indeed be generated. This unifying view of heroic narratives, which Campbell published in his seminal work The Hero with a Thousand Faces,10 was later organized into a twelve-step plan, much like an alcoholic’s path to recovery, in Christopher Vogler’s “A Practical Guide to Joseph Campbell’s The Hero with a Thousand Faces.”11 We illustrate Vogler’s twelve steps here with examples from The Matrix,12 a movie that is as much in thrall to Campbell’s ideas as it is to Star Wars:

  1. The Ordinary World: Neo is a nameless engineer in a big corporation.
  2. The Call to Adventure: Neo is bored and follows the white rabbit to Trinity.
  3. Refusal of the Call: Neo lacks the self-belief to follow Morpheus’s lead.
  4. Meeting with the Mentor: Morpheus beckons again: Blue pill or red pill?
  5. Crossing the Threshold: Neo swallows the red pill and sees reality as it is.
  6. Tests, Allies, and Enemies: Neo meets the crew, learns kung fu.
  7. Approaching the Innermost Cave: Neo visits the Oracle for enlightenment.
  8. Enduring an Ordeal: Morpheus is captured; a daring rescue plan is needed.
  9. A Reward for Endurance: Neo saves Morpheus and masters “bullet-time.”
  10. The Road Home: Neo & Co. head back to the safety of the Nebuchadnezzar.
  11. Resurrection from Death: Neo is defeated by Smith but rises triumphantly.
  12. Return with an Elixir: Neo becomes “The One” and offers inspiration to all.
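The twelve steps above can be treated as a reusable schema rather than a mere checklist. The following minimal Python sketch (our own illustration, not code from any bot discussed here) encodes the steps as ordered slots; a storytelling system fills whichever slots it can and skips the rest, yielding a numbered story skeleton:

```python
# Vogler's twelve steps as an ordered schema. A story is a dict that fills
# some or all of the slots; unfilled steps are simply omitted from the output.
VOGLER_STEPS = [
    "The Ordinary World", "The Call to Adventure", "Refusal of the Call",
    "Meeting with the Mentor", "Crossing the Threshold",
    "Tests, Allies, and Enemies", "Approaching the Innermost Cave",
    "Enduring an Ordeal", "A Reward for Endurance", "The Road Home",
    "Resurrection from Death", "Return with an Elixir",
]

def story_skeleton(fills):
    """Render the monomyth as numbered story beats, skipping unfilled steps."""
    return [f"{i}. {step}: {fills[step]}"
            for i, step in enumerate(VOGLER_STEPS, 1)
            if step in fills]

beats = story_skeleton({
    "The Ordinary World": "Neo is a nameless engineer in a big corporation.",
    "The Call to Adventure": "Neo follows the white rabbit to Trinity.",
})
```

Because the schema imposes the ordering, a generator need only supply content for each step; the mythic shape comes for free, and omitted steps are left for the reader to infer.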

These are not rules as such, not in the sense of “the rules” for scary movies that are ironically mocked in the film Scream, but merely recommended ingredients for a satisfying tale.13 Though Vogler’s steps fit Neo’s path in The Matrix like a tailored black suit and imbue that movie with a deeply satisfying sense of the mythic, we should not expect to see every step so clearly signposted in every story. Yet even when some steps are omitted, we may still infer their presence. So at the start of Alphaville, we can assume that someone or something has called Lemmy Caution to action and given him his assignment, just as M summons 007 to MI6 headquarters to give him an official briefing for his latest mission. M and his narrative equivalents fill the role of Mentor, spurring Bond/Caution to cross the threshold into a world of danger and intrigue. Helpers and false friends abound at this stage, and soon after Bond is prepped for adventure with gadgets from his ally Q and further information from his CIA ally Felix Leiter, thuggish henchmen in the employ of the great villain will no doubt cross paths and fists with Bond. Ian Fleming was especially fond of the eighth step on the path, Ordeal, and found new and ingeniously sadistic ways to inflict pain and suffering on his hero in each new book. In Casino Royale, the villain Le Chiffre does ghastly things to Bond with a rug beater, and Auric Goldfinger literally tries to slice Bond in two. After the ordeal, once Bond has found the desired MacGuffin—Alfred Hitchcock’s name for the arbitrary reward that motivates every heroic quest—he must return it to MI6, but he finds his homeward path strewn with lethal pitfalls.14 A final face-off with the villain is the point when all seems lost for our hero, when he must overcome impossible odds to triumph over evil.
Although Bond’s elixir of choice in the earlier movies is champagne and sex, recent incarnations show a preference for increased self-knowledge and the pride of a hard job well done.

Vogler’s twelve steps blend aspects of characterization and plotting, showing that each has a place in the periodic table of major storytelling elements. For Vogler, functions of character such as Helper and Mentor are just as important as functions of plot such as Threshold and Ordeal. Yet if Vogler’s twelve steps seem too linear or too coarse, an alternate system proposed by the Russian folklorist Vladimir Propp in his 1928 work, Morphology of the Folktale, offers a freer and more granular picture of the relationship between character and plot.15 Folklorists are empiricists at heart, and Propp built his system of recurring story elements, or story functions, from a painstaking analysis of a corpus of Russian tales. In contrast to Vogler’s twelve steps, Propp identified thirty-one recurring elements in his analysis, which he arrayed into an idealized sequence that is far from rigid. The earliest functions in this sequence loosely align with the earliest of Vogler’s steps, and we can see how Propp’s functions Absentation (a key member of the community suddenly leaves, perhaps unwillingly), Interdiction (an edict or prohibition is placed upon the community, curtailing its freedoms), and Violation (an edict is violated, incurring the wrath of its issuer) might motivate a hero to heed Vogler’s call to action. But Propp also allows the villain to enter the fray during this opening act, via a range of character functions that hint at future wickedness; these include Reconnaissance (the villain seeks out a MacGuffin and forms a plan that will affect the hero and/or the community), Trickery (the villain obtains important leverage by deceiving a dupe), and Delivery (the villain obtains that all-important MacGuffin that will drive the plot forward).
The plot thickens when the hero crosses the threshold into adventure via the Departure function or the villain crosses his own threshold of wickedness to impose a Lacking condition on the hero’s world, by, for example, abducting a loved one, stealing an object of value, or foisting famine or discord or slavery on the community. If The Matrix seems to be shaped with Vogler’s twelve-step cookie cutter, it is Propp’s thirty-one functions that give the film its specific fillings, as each of its major characters—Neo, Morpheus, Trinity, Agent Smith, the Oracle, and the traitorous Cypher—fulfills a different functional need as identified by Propp in his 1928 study of folktales.
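Because Propp’s functions implicate character roles as well as plot beats, a bot can bind each sampled function to a named member of its cast. The sketch below is our own hedged illustration: the function names follow Propp, the one-line glosses paraphrase the descriptions above, and the `opening_act` helper is an invented stand-in for a real story planner.

```python
import random

# A handful of Propp's opening functions, each tagged with the character
# role it implicates and a short gloss (paraphrased from the text above).
PROPP_OPENING = [
    ("Absentation",    "hero",      "a key member of the community departs"),
    ("Interdiction",   "community", "an edict curtails the community's freedoms"),
    ("Violation",      "hero",      "the edict is violated, angering its issuer"),
    ("Reconnaissance", "villain",   "the villain scouts for a MacGuffin"),
    ("Trickery",       "villain",   "the villain deceives a dupe for leverage"),
    ("Delivery",       "villain",   "the villain obtains the MacGuffin"),
]

def opening_act(cast, n=3):
    """Sample n functions, keep Propp's loose canonical ordering, and bind
    each to a named character, e.g. cast = {"hero": "Neo", ...}."""
    chosen = sorted(random.sample(range(len(PROPP_OPENING)), n))
    return [f"{PROPP_OPENING[i][0]}: {cast[PROPP_OPENING[i][1]]} "
            f"({PROPP_OPENING[i][2]})" for i in chosen]

act = opening_act({"hero": "Neo", "villain": "Agent Smith",
                   "community": "Zion"})
```

Sampling indices and then sorting them preserves Propp’s idealized sequence while still allowing functions to be skipped, mirroring his observation that the sequence is far from rigid.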

Other folklorists have doubled down on Propp’s approach, to bring ever more zeal to bear on the deconstruction of myths and folktales from diverse cultures. The work of folklorist Stith Thompson in the 1950s at Indiana University is especially notable for the scale of its analysis.16 Thompson and his colleagues set out to build a comprehensive catalog of the motifs that recur throughout the world of myth and fable. Their catalog is hierarchical and organizes its motifs into families of generic schemas and specific instances, assigning a Dewey Decimal–like code to each. You can browse the fruits of their labors at the Multilingual Folk Tale Database (MFTD) at mftd.org. The catalog’s contents make for an engrossing read of the “you couldn’t make this stuff up” variety, for when shorn of their narrative contexts, the motifs at the heart of so many fables can seem so alien that—dare we say it—they might even be machine generated. Consider a motif that the MFTD labels B548.2.2.2: Duck recovers lost key from sea. This is cataloged as a special case of B548.2, aquatic animal recovers object from sea, which is, in turn, an instance of B548, animal recovers lost object, and of B54x, animal performs helpful action. A resource as comprehensive as the MFTD allows folklorists to precisely codify the points of overlap between the tales of different cultures, but it can also be used to stimulate the generation of new stories or perhaps suggest motifs and writing exercises for the sufferers of writer’s block. In much the same spirit as Darius Kazemi’s @museumbot, which tweets random samplings from the Met’s art catalog, a bot named @MythologyBot (courtesy of @BooDooPerson) tweets a random pick from Thompson’s index of folk motifs at three-hourly intervals. Leveraging the weirdness of the MFTD, the bot dares its readers to dismiss its tweets as machine-crafted cut-ups of more sensible texts.
It offers frequent and vivid demonstrations of a counterintuitive truth: we can use precooked schematic forms to tell stories that are traditional and oddly original.
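Because Thompson’s codes nest by prefix, a motif’s more generic ancestors can be recovered mechanically. The toy catalog below holds only the four entries quoted above; the two helper functions are our own illustrative additions, and the random pick mimics what a bot in the vein of @MythologyBot does on each run.

```python
import random

# A sliver of the Dewey Decimal-like motif hierarchy described above.
MOTIFS = {
    "B54x":       "animal performs helpful action",
    "B548":       "animal recovers lost object",
    "B548.2":     "aquatic animal recovers object from sea",
    "B548.2.2.2": "Duck recovers lost key from sea",
}

def ancestors(code):
    """Walk up the hierarchy by stripping trailing '.n' components.
    (The coarsest family codes, like B54x, use a wildcard letter and so
    cannot be reached by dot-stripping alone.)"""
    while "." in code:
        code = code.rsplit(".", 1)[0]
        if code in MOTIFS:
            yield code, MOTIFS[code]

def random_motif_tweet():
    """Pick a motif at random, as a @MythologyBot-style bot might."""
    code = random.choice(list(MOTIFS))
    return f"{code}: {MOTIFS[code]}"
```

Running `list(ancestors("B548.2.2.2"))` yields the two dot-separated generalizations of the duck motif, from aquatic animals down to lost objects in general.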

It’s no accident of language that reporters speak of newsworthy events as “stories.” Reporters should not invent the facts, but we do ask them to interpret what facts there are and spin these into a coherent and compelling narrative. Reporters adhere to their own storytelling principles, such as “don’t bury the lead,” yet they also share many of the same concerns as a writer of fiction. Fact shapes fiction, but the inverse is also true: the revolving door between art and reality ensures that one is always a constant source of inspiration for the other. Newsmen learn from novelists, and storytellers take inspiration from the news. Dick Wolf, the creator of so many TV shows with plots ripped from the headlines (such as the various long-running Law and Order franchises), has spent decades mining the news for gripping drama, but he is not the first to do so, nor is he the first to construct a pipeline between news and drama on so commercial a scale.17 During the 1920s, while Vladimir Propp was conducting his scholarly analysis of Russian folktales to see what made them work as stories, a Canadian writer of pulp fiction named William Wallace Cook was developing a way of synthesizing new plots for his books, which he wrote at speed in a triumph of quantity over quality. Cook’s goal was to systematize the process of pure plot creation so that writer’s block would never prevent him from meeting a deadline again. He called his system Plotto and championed it as a means of plot suggestion with which writers could quickly generate high-level plot skeletons for their stories.18

Every story needs a conflict—note how Propp and Campbell/Vogler are as one on this issue in their analyses—and so the Plotto system sees stories emerge from the combination of themes (or what Cook called master plots) and conflicts. As Cook put it in his 1928 book for budding “Plottoists” (his term), “Each master plot consists of three clauses: An initial clause defining the protagonist in general terms, a middle clause initiating and carrying on the action, and a final clause carrying on and terminating the action.” We might see that first clause as cueing up Vogler’s first four steps (the call to adventure), the middle clause priming the middle stretch of Vogler’s steps (crossing into a world of adventure and ordeal), and the final clause as encapsulating Vogler’s final four steps (hero’s reward and the journey home). But unlike Vogler, Campbell, and Propp, Cook saw Plotto as a practical resource for budding writers, a trove of master plots that he himself had assiduously scribbled in notebooks, clipped from newspapers, cribbed from history books, or distilled from the work of others. His book enumerates more than one thousand master plots, some stale and stodgy and some that look as alien as Thompson’s folk motifs when formulated in Cook’s concise yet florid prose. And it does not end there: Cook corrals his master plots into a comprehensive system of cross-indexing that allows plot elements to be colored by different conflicts and clicked together like LEGO blocks. Consider the master plot numbered 1399:

A seeks wealth, his by right, which has been concealed * A seeks wealth which his father, F-A, has left him, but concealed in a place whose location has been lost

Cook uses placeholder variables A and B to denote, respectively, male and female protagonists, while placeholders like F-A above denote character functions such as “A’s father.” The plot has both a generic and a more specific rendering, separated with the * token. Cook indexes his master plots by conflict type, and he places the plot above in group 57, “Seeking to unravel a puzzling complication.” He cross-indexes each plot to others so that writers can connect their plots like the track segments of a train set. Cook links master plot 1399 to this potential follow-on segment:

A asks that B allow herself to be hypnotized in order that he may learn where buried treasure has been concealed * A hypnotizes B, and B dies of psychic shock

This is in turn linked to the following master plot, suggesting a dark closing act:

A helps A-2 secure treasure in a secret place * A, helping A-2 secure treasure in a secret place, is abandoned to die in a pit by A-2 who makes off with the treasure
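Cook’s cross-indexing can be pictured as a graph in which each master plot links to its possible follow-on segments. The sketch below is a hedged illustration using the three plots just quoted: plot 1399 keeps Cook’s own number, but the identifiers 9001 and 9002 are invented stand-ins, since the text does not give Cook’s numbers for the two follow-ons. A plot skeleton is then just a walk through the graph, like joining track segments of a train set.

```python
import random

# Master plots keyed by id; each holds a rendering (abridged from the
# quoted examples) plus links to candidate follow-on segments.
PLOTS = {
    1399: ("A seeks wealth, his by right, which has been concealed",
           [9001]),   # 9001 and 9002 are invented ids, not Cook's own
    9001: ("A asks that B allow herself to be hypnotized in order that he "
           "may learn where buried treasure has been concealed",
           [9002]),
    9002: ("A, helping A-2 secure treasure in a secret place, is abandoned "
           "to die in a pit by A-2, who makes off with the treasure",
           []),
}

def plot_skeleton(start):
    """Follow cross-index links from a starting master plot until a plot
    with no follow-ons terminates the chain."""
    chain, pid = [], start
    while True:
        text, links = PLOTS[pid]
        chain.append(text)
        if not links:
            return chain
        pid = random.choice(links)

skeleton = plot_skeleton(1399)   # the three-segment chain quoted above
```

With more than a thousand plots and multiple links per plot, the same random walk yields a combinatorial space of skeletons, which is precisely what made Plotto valuable to a writer working at Cook’s speed.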

The wonder of Plotto is not its plots per se, which can read to the modern eye like the stuff of Victorian bodice rippers, but Cook’s system of plot organization. Just as Flann O’Brien invented a proto-hypertext with At Swim-Two-Birds, Cook’s Plotto is very much a steampunk imagining of the symbolic AI of the 1960s and 1970s, and it is, in its way, an application of what Ada Lovelace called “poetical science.” Like O’Brien’s tongue-in-cheek views on the construction of intertextual collages with precooked characters, Cook’s proto-AI also offers an early vision of the cut-up method of text generation that Brion Gysin and William S. Burroughs would later make famous, though Cook’s version is much more tightly constrained and bureaucratic in spirit.19 Yet there is also something of the Twitterbot spirit in Cook’s Plottoist approach to the synthesis of novel human experiences via mechanical methods. Cook’s own stories may not have stood the test of time, but with access to tools like Tracery and Cheap Bots Done Quick, he might have built some remarkable Twitterbots.

The Hero and Villain with 800 Faces … and Counting

For Joseph Campbell, the mythic hero figure is a recurring archetype that pops up in countless guises in just as many tales of popular mythology. Whether we pick Gilgamesh, Rama, Beowulf or Conan, or Samson, Moses, Joan of Arc, or Jesus, or Allan Quatermain, Indiana Jones, or Lara Croft, or Sam Spade, Philip Marlowe, Jane Marple, Lemmy Caution, Rick Deckard, or Jeff Lebowski, these characters all have the right stuff to undertake a heroic journey for us and with us. These, and many more besides, all reside in Flann O’Brien’s archetypal limbo (population: untold thousands) “from which discerning authors [can] draw their characters as required.” One digital realization of O’Brien’s limbo is Wikipedia/dbpedia.org,20 or even TVTropes.org,21 but another more amenable version is the NOC list, which gives our bots access to as many heroes or villains or sidekicks or mentors or false friends or love interests as a story-generation system could hope for.

Recall that the NOC list offers up positive and negative talking points for each of its more than eight hundred residents, so that each has the background to play a flawed hero or a redeemable villain. The qualities that establish a character’s heroic standing, as well as those that establish a character’s lacking, to use a term from Propp, are all there waiting to be corralled into a brand-new story. The NOC list is also a well-stocked props department that provides all the necessary costumes and other accoutrements to our automated raconteurs so they can establish a vivid mise-en-scène for a story. Let us assume, for simplicity, that each story will be woven around a pair of NOC characters A and B (unlike in Plotto, the letters do not imply a gender). The first of these can be plucked at random from the NOC list, but the second should be chosen so as to exhibit intentionality and create the conditions for an interesting story. However A and B are chosen, they comprise the flip sides of a narrative coin that will spin continually as the tale is told, with each turn highlighting the qualities and actions of an alternating face. Our stories can reimagine the past or the present by choosing A and B to be characters that are already linked in the NOC list. If linked via the Marital Status dimension, A and B will be characters that are known to have married, divorced, or just dated, and if via the Opponent dimension, A and B will be characters that are known to be rivals. These are first-order connections, insofar as the link between A and B is asserted explicitly within the knowledge base. Thus, a bot might weave a story about Cleopatra and Julius Caesar, or Angelina Jolie and Brad Pitt, or Lois Lane and Superman, yet do so in a way that completely reimagines their relationships (because, to be frank, the bot will not know enough to faithfully reimagine them as we all know them).
Like the first row of a cellular automaton, the relationship between A and B establishes the foundation on which the ensuing story will rest, so this pairing should be selected with care. Consider the following setup from a storytelling bot @BestOfBotWorlds, which sets out to reimagine an old rivalry:

[Figure: a tweet by @BestOfBotWorlds that reimagines the Edison/Tesla rivalry, with emoji animals standing in for the two rivals]

Let’s put to one side for now the question of why emoji animals are used here for the famous rivals Edison and Tesla, noting only that emoji are useful single-character icons for complex As or Bs and that when the relationship at the heart of a story is a factual one, as it is here, framing it as a “what-if” scenario via emoji ensures the counterfactuality of the narrative. This is a signal to readers that the story is a playful reimagining of history with only a fabulist’s regard for the truth.
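First-order pairing amounts to a simple lookup: pick A at random and read B off one of A’s explicit links. The sketch below runs against a three-entry toy knowledge base of our own invention, standing in for the real 800-plus-character NOC list; the dimension names mirror those described above.

```python
import random

# Toy NOC-style entries: explicit first-order links only. A value of None
# means the character has no link along that dimension in this miniature KB.
NOC = {
    "Cleopatra":      {"Marital Status": "Julius Caesar", "Opponent": None},
    "Lois Lane":      {"Marital Status": "Superman",      "Opponent": None},
    "Nikola Tesla":   {"Marital Status": None, "Opponent": "Thomas Edison"},
}

def first_order_pair():
    """Return (A, B, dimension), where B is explicitly linked to A in the
    knowledge base, so the pairing is asserted rather than inferred."""
    a = random.choice(list(NOC))
    dim, b = random.choice([(d, x) for d, x in NOC[a].items() if x])
    return a, b, dim

a, b, dim = first_order_pair()
```

Because every pairing here is handed to the bot on a platter by its designer, the space is small; the second-order searches described next are where a bot can stake a stronger claim to originality.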

Just as Godard picked Tarzan to oppose IBM, a bot may choose its A and B to serve as vivid incarnations of two opposing qualities, so that some quality of A highlights the opposing quality in B. With this strategy, a bot might pair the dirt-poor Bob Cratchit with any of the fabulously wealthy Lex Luthor, Warren Buffett, Bruce Wayne, or Donald Trump. Or a bot might match the well-mannered Emily Dickinson with the more vulgar Eminem, or pit the savage Conan the Barbarian (or Tarzan, for that matter) against the urbane and sophisticated Gore Vidal. The postmodern humor of Godard’s pitting of the fictional Tarzan against the very real IBM can also be facilitated using the Fictive Status dimension of the NOC list, so that, for example, the spiritual and kindly Mahatma Gandhi is pitted against the (darkly) spiritual and malevolent Darth Vader. A pairing based on inferred opposition is a second-order connection between characters, insofar as the link is not directly provided by the knowledge base and must be discovered by the bot itself, in a search of the NOC list’s unstated possibilities. Naturally, given the combinatorial possibilities that a search can consider, the space of second-order connections is far larger than that of first-order connections, and a bot can make greater claims to originality by exploring a large second-order space of its own construction than a small first-order space that is given to it on a platter by its designer. A host of second-order spaces can be mined to obtain resonant character pairings; some spaces are simple and based on an obvious premise, and others can be far more complex. Consider the space of character pairs that just share NOC talking points. These pairings are essentially metaphors, but metaphors that suggest apt and imaginative story possibilities. So consider another pairing for Nikola Tesla:

[Figure: a tweet pairing Nikola Tesla with Doc Brown of the Back to the Future movies]

The nutty Doc Brown of the Back to the Future movies seems an ideal fictional counterpart for the real-life Tesla, for Tesla was something of a nutty professor himself.22 It is often said that heroes are only as great as their opponents allow them to be, and in the hero’s journey, we require our protagonist to meet with an antagonist of equal stature who can present a challenge worthy of our interest. Metaphor offers a useful means of ensuring that our As and Bs, whether heroes or villains, are well matched, and so the seed for this story is a NOC metaphor taken very seriously indeed. The framing of the metaphor above as literal fact also makes use of a specific talking point in Tesla’s NOC entry: later in life, he was a recluse and spent his last days developing so-called death ray technology in New Jersey. The bot finds the quality reclusive in Tesla’s NOC entry and uses it to frame Tesla’s relationship to his antagonist Doc Brown at the beginning of the tale, using the motif that “A hides from B because A is a recluse.” A storytelling bot must do more than choose an apt pair of characters to strap together for the ride: rather, a bot must decide how best to use what it knows about its characters, and what it knows its audience knows about them too, to shape the tenor of A’s and B’s interactions in the story, beginning with the very first action that anchors them all. So when a metaphor pairing the pioneer Steve Jobs with the pioneer Leonardo da Vinci is framed by a storytelling bot as literal reality—or literal surreality—an apt choice of pioneer-on-pioneer motif might be “A funds B (to pioneer for A)”:

10859_008_fig_003.jpg

Second-order connections can be inferred on the basis of overlapping attributes or associations in the NOC list. We can see two kinds of overlap in the examples we’ve shown: an overlap in positive or negative talking points, which is indicative of a metaphorical link between characters, and an overlap with opposition between talking points, so that one character is known for a quality that is not just lacking in the other but whose opposite is actually a noteworthy aspect of the other (e.g., strong versus weak, humble versus arrogant, or good-hearted versus wicked). But characters can overlap in other dimensions too, to suggest other kinds of second-order spaces. For instance, two characters may share the same creator (e.g., Indiana Jones and Luke Skywalker share George Lucas), or share the same actor on TV or in film (e.g., Sherlock Holmes and Tony Stark share Robert Downey Jr., while Han Solo and Indiana Jones share Harrison Ford), or share the same group affiliation (for instance, Abraham Lincoln, George W. Bush, and Donald Trump all share an affiliation with the Republican Party), or share the same romantic partner (e.g., Billy Bob Thornton and Brad Pitt share a marriage to, and a divorce from, Angelina Jolie), or share a screen portrayal as actors (e.g., Christian Bale, George Clooney, Adam West, and Ben Affleck all share the role of Batman). Each type of inferred association between characters is a metaphorical club into which each can be placed, and joint membership in an ad hoc club, such as the club of actors who have all played Batman or Sherlock Holmes, suggests that two characters are of sufficiently equal stature to make for a good A + B story pairing. Second-order spaces often result in story pairings that seem to break the fourth wall and flash an ironic smile at the reader.
A story in which Dracula is investigated by Commissioner James Gordon (both were portrayed by Gary Oldman) or Michael Corleone develops a bitter rivalry with Tony Montana (both were portrayed by Al Pacino) or Lois Lane delves into the mystery of Amelia Earhart (both were played by Amy Adams) need not explain why it has paired these characters. For even when readers do not consciously detect a frisson of irony generated by a strangely apt but impossible pairing, they may nonetheless feel the juxtaposition to be resonant, albeit for reasons they cannot pin down.
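To make this concrete, here is a minimal Python sketch of how such a second-order space might be mined: an attribute mapping is inverted into ad hoc clubs, and every pairing within a club is emitted. The tiny portrayed-by table below is an illustrative stand-in for the NOC’s own data, not the real resource:

```python
from collections import defaultdict

# A miniature, hypothetical slice of a portrayed-by dimension.
PORTRAYED_BY = {
    "Dracula": "Gary Oldman",
    "Commissioner James Gordon": "Gary Oldman",
    "Michael Corleone": "Al Pacino",
    "Tony Montana": "Al Pacino",
    "Lois Lane": "Amy Adams",
    "Amelia Earhart": "Amy Adams",
    "Sherlock Holmes": "Robert Downey Jr.",
    "Tony Stark": "Robert Downey Jr.",
}

def second_order_pairs(attribute_map):
    """Group characters into ad hoc 'clubs' by a shared attribute value,
    then yield every pairing within each club."""
    clubs = defaultdict(list)
    for character, value in attribute_map.items():
        clubs[value].append(character)
    for value, members in clubs.items():
        for i, a in enumerate(members):
            for b in members[i + 1:]:
                yield (a, b, value)

for a, b, actor in second_order_pairs(PORTRAYED_BY):
    print(f"{a} + {b}  (both played by {actor})")
```

Each club of n members yields n(n − 1)/2 pairings, so even a modest attribute table opens up a sizable second-order space.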

The space of second-order possibilities can be made larger still by exploiting the inherent connectivity of the NOC list to chain other first- and second-order spaces together, in a postmodern version of Chinese whispers. In a first-order space Angelina Jolie might dump Brad Pitt, or Angelina Jolie might face off against the fictional Maleficent or the fictional Lara Croft, or any of her other screen roles. But in a second-order space Brad Pitt might cheat on Angelina Jolie with Maleficent. A storyteller should not have to explain why it makes choices like these, and bots rarely explain their workings anyway, for the same reasons comedians seldom explain their jokes: either a reader gets the implicit joke or does not. A knowledge-based bot can disguise its method with madness and invite readers to perceive intentionality where others might see only randomness. Knowledge allows a bot to thumb the scales in favor of creative intent while not being heavy-handed about its actions, allowing its followers to see what they want to see.

Imagine a story in which Alan Turing seeks medical advice from Dr. Strange, since, after all, each was played by the actor Benedict Cumberbatch. This pairing arises from an obvious second-order space that links famous people portrayed by the same actor, but the actions used to link these characters and drive the plot forward must come from a specific understanding of the characters themselves. Clearly, Dr. Strange is a doctor in the NOC list and diagnosing others is just what doctors do, so this pairing would aptly fit the motif “doctor diagnoses patient” if only we had a stock of motifs like this for our bot to exploit.

But just as Stith Thompson and his colleagues had to knuckle down and build their database of motifs from scratch, we too shall have to build this inventory for ourselves. Our job is a good deal easier, though, because we are inventing rather than analyzing and we do not have to trawl through the world’s collected folklore. As motifs are schematic structures that concern themselves with character types rather than character specifics, our first order of business is to create an inventory of the pairings of character types that will be linked by these motifs; then we can set about the task of providing specific linking verbs for the types paired in each motif. To construct this inventory, we consider the pairings across all of the first- and second-order spaces we plan to use for our stories and generalize each to the type level using the NOC Category dimension. For instance, we find Forrest Gump + Robert Langdon in the space of people linked by a shared actor, and one way that this generalizes at the type level is Fool + Professor. Once this inventory of paired types is created, we can sort it in descending order of coverage, so that the motifs that cover the most character pairs are pushed to the top. We then work our way down the list, providing linking verbs for the character types in each generic motif; for Fool + Professor (if we make it that far down the list), we can provide the verbs “study under” or “look up to” or “disappoint.” Readers can find a version of our motif inventory with linking verbs for more than two thousand type pairings on our GitHub, in the spreadsheet named Inter-Category Relationships. Take this as you find it, or adapt it to reflect your own intuitions about narrative.
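The generalize-and-sort step might be sketched as follows. The category assignments and pairings here are illustrative stand-ins for the NOC Category dimension and our pairing spaces, not the real inventory:

```python
from collections import Counter

# Hypothetical NOC Category assignments.
CATEGORY = {
    "Forrest Gump": "Fool",
    "Robert Langdon": "Professor",
    "Homer Simpson": "Fool",
    "Indiana Jones": "Professor",
    "Nikola Tesla": "Inventor",
    "Doc Brown": "Inventor",
}

# Hypothetical character pairings drawn from first- and second-order spaces.
PAIRINGS = [
    ("Forrest Gump", "Robert Langdon"),
    ("Homer Simpson", "Indiana Jones"),
    ("Nikola Tesla", "Doc Brown"),
]

def type_level_coverage(pairings, category):
    """Generalize each character pairing to a pairing of NOC categories and
    count how many concrete pairs each type pairing covers, most first."""
    counts = Counter()
    for a, b in pairings:
        counts[(category[a], category[b])] += 1
    return counts.most_common()

for (type_a, type_b), n in type_level_coverage(PAIRINGS, CATEGORY):
    print(f"{type_a} + {type_b}: covers {n} pairing(s)")
# Fool + Professor: covers 2 pairing(s)
# Inventor + Inventor: covers 1 pairing(s)
```

Working down this sorted list, linking verbs are then supplied for each type pairing in turn, so that effort goes first to the motifs with the widest coverage.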

The Road to Know-Where

Our characters are paired on the assumption that similarity is most interesting when there are so many reasons not to take it seriously. Doc Brown is a comedic fiction, but Nikola Tesla was a very real and tragic figure, and though Leonardo da Vinci and Steve Jobs were each the real deal, they lived in different historical eras. Emma Bovary and Alice in Wonderland are both wholly fictional beings, yet the fact that each was portrayed by the same actor undermines the credibility of the conceit as a serious story idea.23 Each pairing is as much a conceptual pun as a conceptual metaphor, but that’s also the source of its appeal: these star-crossed pairings are designed to tickle the fancy of a bot’s followers, not to pitch woo at a movie studio that might turn them into expensive cinematic products. Yet this is not to say that our bots shouldn’t take their own story ideas seriously. After all, a story idea is only as good as the stories that can be woven from it. An inventory of schematic motifs gave our story bot its initial pitch for a story in a single tweet, by providing—in an apropos plot verb—a vivid sense of how its characters might interact. But now our bot must build on these premises to generate full stories that can stretch across many threaded tweets. Let’s begin with the question of where a bot will find plots to sustain its stories. The answer is an oldie but a goodie: we’re going to treat every story as a journey.

It is not just heroic quests and road movies that build stories around journeys. The language of narrative encourages us to speak of all stories as journeys. So we talk of fast linear stories and slow, meandering ones; stories that take us on an emotional roller-coaster ride; stories that go nowhere, or stories that lose the plot and get stuck in the weeds; stories that race along at a breakneck pace, or stories that just seem to crawl by; stories filled with sudden twists and unexpected turns, as well as stories that lose all momentum before limping across the finish line. As variants of the journey metaphor go, Stories Are Races is especially productive. Actors speak of their most promising projects as “vehicles,” and successful well-crafted vehicles do seem to run on fast tracks and turn on greased rails. To lend this race metaphor a literal reality, think of the electric slot car sets that kids have played with for decades. Even if you haven’t played with a set yourself, you will almost certainly know of kids who have. Dinky little toy cars are slotted into current-carrying grooves in a track made of prefabricated segments, allowing the electricity-powered cars to zip around the track in a thrilling simulation of a real high-speed car race. The goal is to beat your opponent in the parallel groove, and the trick is to modulate your speed so that your car doesn’t fly off the track on a tight bend or at a chicane. The more complicated the shape of the track, the more dramatic the miniature narratives that a child can concoct. But complex track configurations need a great many prefabricated shapes to click together to form a circuit, and to support certain kinds of dramatic possibility, a child will also need specific kinds of track segment. For instance, two cars will always run in parallel grooves, no matter how many twists or turns in the track, if the track lacks a piece in which its two grooves cross over. 
With this piece, two cars might actually crash into each other as they switch lanes. But without this, no crashes!

The most popular brand of slot car sets in Europe is Scalextric, while the Gaelic word for story is scéal (imagine Sean Connery saying “scale”), so our bot-friendly implementation of the Stories Are Journeys metaphor and its variant, Stories Are Races, has been christened Scéalextric.24 Beneath the cute name lies a surprisingly systematic analogy between storytelling and racing simulations. Take the two characters A and B: let’s keep the simplifying assumption that each story is built around a well-matched pairing of a protagonist A to an antagonist B, and so we can take A and B to be the story equivalent of two cars racing along parallel grooves on the same track. This track is the plot, a sequence of actions that each character must pass through in the right order, as each action frames an event in which both characters participate together. If A is selling, then B is buying, and if B performs surgery, it is because A is unwell. Because the same event can be viewed from the perspective of different characters (for example, a lend event for A is a borrow event for B), a well-crafted plot arranges its actions so as to draw the reader’s attention back and forth between characters, as though the reader were watching a fluid game of tennis. When rendered into English and threaded into tweets, the sequence of plot actions will describe how A and B proceed neck-and-neck from the starting position of the first plot action to the story’s finishing line.

Though our plot will be built from individual actions, like the click-and-extend segments of a Scalextric track, we will group these actions into standardized triples with a uniform three actions apiece. We can think of the action triples that schematize an unsurprising plot development (X happens and then Y happens, to no one’s surprise) as linear track segments, and those that suggest a surprising turn of events (X happens but then Y happens, defying expectations) as curved segments. An interesting plot, like an interesting racetrack, balances the straight with the curved in a satisfying whole that is neither too predictable nor too zany. To build a plot as one builds a model racetrack, a storyteller must choose compatible action triples to click together like so many Scalextric track pieces. The criteria for well-formed triple combination are twofold and simple: the third and final action of the first triple must be identical to the first action of the triple that will succeed it in the plot; except for this point of overlap, there can be no other action that is shared by both triples (in this way plot loops are avoided). Consider this standard action triple, which might link Drs. Strange and Turing:

10859_008_Tp253

For convenience we will assume the presence of A and B in our triples, so this is:

10859_008_Tp254_1

Event fillers are always assumed to be A (subject) and B (object) unless the verb is marked *, indicating a reversal of roles (making B the subject and A the object). Consider another pair of triples that reflect recurring plot structures in stories:

10859_008_Tp254_2

We can combine triples 1 and 3 in that order because the last action of 1 is the first action of 3 and the triples share no other overlaps. We can likewise combine 2 and 1 in that order, but we cannot combine 2 and 3. When two triples with an overlapping action are combined in this way, what results is a sequence of five successive actions (as we count the shared actions only once):

10859_008_Tp254_3
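In code, the two well-formedness criteria for clicking triples together reduce to a pair of set tests. The sketch below uses invented verb labels rather than Scéalextric’s own triples, numbered to mirror the combinations discussed above:

```python
def can_combine(t1, t2):
    """Two triples click together iff the last action of the first is the
    first action of the second, and the triples share no other action
    (a further shared action would invite a plot loop)."""
    return t1[-1] == t2[0] and set(t1) & set(t2) == {t1[-1]}

def combine(t1, t2):
    """Join two compatible triples into a five-action sequence, counting
    the shared action only once."""
    assert can_combine(t1, t2)
    return t1 + t2[1:]

# Hypothetical triples with invented verb labels.
triple_1 = ("trust", "deceive", "attack")
triple_2 = ("admire", "flatter", "trust")
triple_3 = ("attack", "defeat", "imprison")

print(can_combine(triple_1, triple_3))  # True
print(can_combine(triple_2, triple_1))  # True
print(can_combine(triple_2, triple_3))  # False
print(combine(triple_1, triple_3))
```

The last call yields the five-action sequence ("trust", "deceive", "attack", "defeat", "imprison"), with the shared "attack" counted just once.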

Triples can be connected into ever-larger chains of story actions, as in:

10859_008_Tp254_4

Two connected triples yield a sequence of five actions. If we join three triples together, a sequence of seven actions is obtained, and if we connect four, a story of nine actions emerges. So our storyteller need only add more triples to the plot until its desired story length is achieved. But notice how actions are not allowed to reoccur anywhere in the resulting chain. Though actions do repeat in real life, context can make them mean different things. Because context is an issue that is hard for a bot of little brain to grasp, it is best to prohibit recurrence altogether, to avoid the formation of troubling Groundhog Day loops. Note also how our triples have been crafted so that alternating actions tend to switch a reader’s focus from A to B and back again. Because triples are linked by a shared action and framed by a shared viewpoint at the point of connection, it follows that when viewpoint alternation is obeyed within action triples, with each triple passing the baton of its focus to the next, alternation will also be enforced at the overall plot level. The Scéalextric triple-store can be found in the GitHub resource Script Mid-Points.xlsx in a simple three-column structure. Put simply, each triple is assumed to have a midpoint action, a lead-in action before this point, and a follow-on action after this point. As an illustration, here is a peek at the first few rows of the resource:

10859_008_fig_004.jpg

Each row stores one or more triples, with disjunctive choices for the before, midpoint, and after actions separated by commas within cells. The third row in the table thus defines twelve unique triples that chart A’s movement from carer to skeptic to enemy. Why A cares in the first place, or how B will respond to A’s betrayal, are parts of the story that we must look to other triples to flesh out.
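Expanding such a row into its individual triples is a simple cartesian product of the disjunctive choices in each cell. The verb choices in this sketch are hypothetical, arranged so that a 3 × 2 × 2 row yields twelve triples:

```python
from itertools import product

def expand_row(before, midpoint, after):
    """A spreadsheet row packs disjunctive choices into each of its three
    cells, separated by commas; the cartesian product of those choices
    yields every unique triple the row defines."""
    cells = [[v.strip() for v in cell.split(",")]
             for cell in (before, midpoint, after)]
    return list(product(*cells))

# A hypothetical row charting A's movement from carer to skeptic to enemy.
triples = expand_row(
    "care_for, nurse, protect",
    "doubt, distrust",
    "attack, betray",
)
print(len(triples))  # 12
```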

All of these plot triples can be collectively viewed as a graph, a dense forest of branching pathways in which any triple α:β:χ is tacitly linked to every other triple starting at χ or ending at α. So a random walk into this woods can give us our plot α, … , Ω: starting at a triple containing the initiating action α (as dictated by our choice of characters), a system picks its way from triple to triple to finish at some as-yet-undecided triple whose third and final action is Ω. The chain of actions that leads the teller from α to Ω then provides the plot skeleton on which a story can be fleshed out. A random walk from any point α may take a wanderer in this forest to many different Ω’s by many different routes, provided it takes care to avoid loops and obeys the basic rules of nonrecurrence. But how dense with pathways should this forest of branching possibilities be so that every walk in the woods is assured of charting a different story? Our forest must allow for many thousands of possible pathways between diverse points of ingress and egress. However, the number of possible triples in the forest is limited by the size of our inventory of plot verbs from which each triple’s tripartite structure is filled. Moreover, only a small fraction of possible triples are actually meaningful in any causal sense or show any promise as a story fragment. We shall thus need a relatively large stock of action verbs to compensate for the selectivity with which they are combined into triples. The Scéalextric core distribution (which can be found on our GitHub; see BestOfBotWorlds.com for a link and more detail) comes ready-stocked with about eight hundred plot verbs and approximately three thousand plot triples that pack three of these verbs apiece. 
With more than eight hundred verbs to choose from, the action inventory does not lack in nuance, and many entries are near, but not true, synonyms of others; for example, triples can employ “kill,” “murder,” “execute,” or “assassinate” to suit the context of a story (e.g., who is doing the killing and who is being killed, or why?). A good many verbs are also present in passive forms, allowing a plot to focus on either the agent or the patient of an action.
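The random walk through the triple graph might be sketched as follows, with toy triples standing in for the real triple-store. Each extension adds two new actions, so plot lengths are odd (3, 5, 7, 9, …), matching the arithmetic of joined triples:

```python
import random

def random_plot(triples, alpha, length, rng=random):
    """Random-walk the triple graph from the initiating action `alpha`,
    clicking on compatible triples until `length` actions are reached,
    while forbidding any action from recurring."""
    starts = [t for t in triples if t[0] == alpha]
    if not starts:
        return None
    plot = list(rng.choice(starts))
    while len(plot) < length:
        seen = set(plot)
        # A triple extends the plot if it opens with the plot's last
        # action and introduces only as-yet-unseen actions thereafter.
        options = [t for t in triples
                   if t[0] == plot[-1] and not (set(t[1:]) & seen)]
        if not options:
            return None  # dead end: a caller may simply retry the walk
        plot.extend(rng.choice(options)[1:])
    return plot

# Toy triples with invented verb labels; each step here is forced,
# so the walk is deterministic.
TRIPLES = [
    ("trust", "deceive", "attack"),
    ("attack", "defeat", "imprison"),
    ("imprison", "escape", "flee"),
]
print(random_plot(TRIPLES, "trust", 7))
# ['trust', 'deceive', 'attack', 'defeat', 'imprison', 'escape', 'flee']
```

A real run over thousands of triples would offer many options at each step, and a retry loop around dead ends is all that is needed to guarantee a complete walk from α to some Ω.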

This nuance in the verb inventory proves to be of some importance when we consider the rendering of stories at the narrative level. A computer-generated story is more than a list of verbs placed into causal sequence. We set aside for now the tricky question of whether any arbitrary pathway α, … , Ω can be considered a “story,” and whether any α at all is a viable starting point, or whether any Ω is a viable finishing point. Rather, let’s assume that any path α, … , Ω can be rendered at the narrative level to become a story. Rendering is the process whereby stories go from logical skeletons to fully fleshed out narratives, where terse possibilities such as A kill B take on an expanded idiomatic form. Each action is trivially its own rendering, as we continue the long tradition in AI research of choosing our logical symbols from the stock of English words. Thus, A kill B might be rendered directly as <A> killed <B>, where <A> and <B> are placeholders where the eventual characters such as Tony Stark and Elon Musk will be inserted to yield “Tony killed Elon.” But this staccato does not remain charmingly Hemingwayesque for very long. It’s better to choose from a range of idiomatic forms when rendering any given action, since part of the joy of storytelling (and story hearing) is the use of words to convey attitude in a teller and stir feeling in an audience. Consider again the plot verb “kill.” This might be rendered idiomatically in any of the following ways:

A kill B → A stabbed B, A mauled B, A put poison in B’s cup, A put poison in B’s food, A savaged B, A put B in the hospital, A gave B a terrible beating, A punched and kicked B, A gave B an almighty wallop, A kicked B into next Tuesday, A stomped all over B, A gave B a good kicking, A viciously assaulted B, A launched an assassination attempt on B, A wanted to kill B, A choked the air out of B, A flayed B alive, A knocked the stuffing out of B

In a story of just two people, the action “kill” has especially severe consequences, as it is most likely going to rob our narrative of one of its principals. This is not the kind of action we expect to see at the start or even in the middle of a narrative, as our story must still go on with both characters even if one of them is now dead. The renderings for “kill” above alleviate this burden by treating the plot verb as either an expression of homicidal intent (A wanted to kill B) or as a hyperbolic turn of phrase for an act of grievous rage. If B remains alive in the next action, the reader will know that A’s deadly intent has not been realized, but if B is obviously deceased in the next action (say because B now haunts A), readers will infer that A’s actions were fatal to B. Rendering buffs and varnishes a plot structure to give it both nuance and dramatic effect, but as we can see for “kill,” it may also use understatement, euphemism, and deliberate ambiguity to diminish the brittle certainty of a dry logical form. Our bots work best when, like good storytellers, they suggest more than they actually say and allow the reader’s imagination to do most of the heavy lifting. The rendering of individual actions in isolation, one action at a time, is a simple context-free approach to a problem that is inherently context sensitive, so it is important that our renderings are stretchy enough to fill any gaps that are left exposed between plot actions.

For each plot verb in our inventory we must provide a mapping from its logical form (e.g., A kill B) to the kind of idiomatic phrasings provided for “kill” above. As with many other things in language, the frequency with which different plot verbs are used in our triples follows a power law distribution, with a small number of popular verbs (such as “trust”) appearing in a great many triples and a longer tail of many more (such as “ensure”) appearing in very few. This is in part a function of the verbs themselves and their portability across domains, and it is in part a reflection of the mind-set of the triple-store’s designers, for it surely says something about us as creators of Scéalextric that its most frequently occurring verbs are “trust,” “disrespect,” “condescend to,” “deceive,” “disappoint,” “fall in love with,” “fear,” “impress,” and “push too far.” Yet whatever verbs turn out to be most useful, we must aim to provide the most renderings for the most popular verbs. Those will recur time and again in our stories, but diverse rendering can introduce variety at the narrative level and stave off any sense of tedious repetition in the reader. The GitHub resource Idiomatic Renderings.xlsx contains all of Scéalextric’s mappings from all of its eight hundred plot verbs to colorful colloquialisms, providing more mappings for the most popular verbs (such as “trust,” “deceive,” and “disappoint”) to allow greater variability in rendering across stories with common actions. If the storyteller chooses randomly from the action renderings available to it, it can ensure that readers are not bombarded with the same clunky boilerplate in story after story.
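A minimal rendering function, then, is little more than a table lookup and a random choice, falling back on the bare logical form when no idiomatic mapping exists. The table fragment below is abridged and illustrative, not the full Idiomatic Renderings resource:

```python
import random

# An abridged, illustrative fragment of the idiomatic-rendering table.
RENDERINGS = {
    "kill": [
        "<A> stabbed <B>",
        "<A> put poison in <B>'s cup",
        "<A> wanted to kill <B>",
    ],
    "disappoint": [
        "<A> thoroughly disappointed <B>",
        "<B> wrote <A> off as a loser",
    ],
}

def render(verb, a, b, rng=random):
    """Pick a random idiomatic template for a plot verb and fill in the
    character placeholders; unmapped verbs fall back on the stilted
    logical form itself."""
    templates = RENDERINGS.get(verb, [f"<A> {verb} <B>"])
    template = rng.choice(templates)
    return template.replace("<A>", a).replace("<B>", b)

print(render("kill", "Tony Stark", "Elon Musk"))
```

Because the choice is random, the same plot verb can read differently each time it recurs across a bot’s stories.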

Rendering shapes how readers will perceive, process, and appreciate a plot, and even the simplest one-to-many mapping from plot verbs to idiomatic templates can introduce vividness, personality, and drama into a narrative. Consider the use of dialogue: the old storytelling maxim is, “show, don’t tell,” so why tell readers that A insulted B or that B complimented A when we can show A actually saying something offensive to B, or show B liberally applying butter to A’s ego? Later we’ll explore how a bot might invent its own generous compliments and scornful insults as they are needed, to leverage what it and its readers already know about their target. For now, we can simply build the dialogue into the mapping of verbs to idiomatic templates. Consider Scéalextric’s mappings for “disappoint”:

A disappoint B → A thoroughly disappointed B, B was very disappointed in A, B considered A to be a big disappointment, B thought “What a loser” when looking at A, “Could you be a bigger disappointment?” asked B sarcastically, “I'm very disappointed” said B to A, “I've let you down” apologized A to B, “You've let me down” said B plaintively, B considered A a loser, B treated A as a failure, A’s flaws became all too apparent to B, B wrote A off as a loser, “What a LOSER!” said B to A dismissively

This writing is a very long way from Jane Austen, but even occasional snatches of canned dialogue can help to draw readers into a story and make the plot feel that it is unfolding in real time. Of course, the most vivid way of showing and not telling is to use pictures instead of words. A storytelling bot might attach images to its tweets to illustrate the corresponding plot actions, but which images? A convenient source of storybook illustration can be leveraged from emoji, as those simple images have a suitably cartoonish aesthetic for our Twitter stories and can be inserted directly into a tweet, or, for that matter, into the idiomatic mapping of a plot verb. In fact, it is possible to construct an idiomatic mapping entirely from emoji, to produce stories that can themselves be rendered entirely in emoji and without words.25 Consider an example mapping of verb to idiomatic rendering that makes full use of a crude emojified metaphor:

10859_008_fig_005.jpg

There are even emoji for our A and B placeholders, but these will be replaced at rendering time with the specific animal emoji assigned to our main characters. And there are certainly enough emoji to allow us to map all eight hundred plot verbs into visual sign sequences, yet because the emoji standard lacks a widely accepted grammar and semantics for the composition of complex ideas, these mappings are not always as transparent to readers as we should hope. If we want our bots to use emoji as a rendering “language” (we use the term lightly, as emoji is very far from a language), we must teach readers how to understand the sign clusters generated by our bots. A bot might scaffold its emoji mappings with the pseudo-English logical forms that they signify, so that readers can come to appreciate over time the mapping from one to the other. Consider this scaffolded example from a story rendered from an earlier metaphor, Steve Jobs as Leonardo da Vinci:

10859_008_fig_006.jpg

This example also highlights the value of framing stories in a faux Aesop manner: by imagining the story’s chief protagonist and antagonist to be animals who imagine themselves to be famous people, the bot can insert the single-character emoji associated with those animals into its emoji translations. The idiomatic mappings for these multimodal renderings combine English text and emoji with the necessary scaffolding (in parentheses) to make the marriage work, but in principle, the language of the textual component could be anything at all, from French to Esperanto to Klingon. The underlying logical forms may use the stilted English of good-old-fashioned AI, but their use of bespoke idiomatic mappings means our bots are easily localized to any language or culture we could want.

Each plot action is rendered in isolation, without regard for the actions that happened before it or the actions that will happen after it, because to assume otherwise would complicate the process immensely, and for little obvious gain. However, there is an important linkage between actions that must be explicitly rendered if the plot is to appear causally coherent. For if, in rendering a plot, the teller makes no distinction between actions that follow naturally from the previous event and those that go against our expectations, readers will get little sense that the teller is in control of its own tale. This sense of control can be imposed with the smallest of words, the logical connectives “so,” “but,” “yet,” and “then.” Let’s see two at work in a pair of tweets from our tale of Steve versus Leonardo.

10859_008_fig_007.jpg

10859_008_fig_008.jpg

Notice the difference that a simple “so” and a “but” can make to the fluency of a rendering: the plot may indeed be just a long plodding sequence of one action after another, but a storyteller must never give this impression to its readers. Points of emphasis in a story are points of empathy too, and a storyteller who cannot identify the former cannot hope to achieve the latter. This may all be closer to Punch and Judy than to Austen and Aesop, but a storyteller still has to convince readers that they are all on the same journey into the woods and that, moreover, the teller holds a map to guide them through to the other side. Yet those innocuous little words “so,” “but,” “yet,” and “then” hide a great deal of pragmatic complexity: deciding when to use which is an easy task for a human speaker, but it requires a model of expectation and surprise if a machine is to do it too. With enough annotated data, we could use supervised machine learning, but we would still need to annotate a lot of cases by hand. So we consider every transition between successive actions allowed by our triples and manually annotate the transition with an apt logical connective. If you impress your boss and get a promotion, that’s a well-earned “so.” If you work hard for a boss who fails to appreciate your efforts, that’s an unfortunate “but” right there. Readers can find the appropriate logical connectives for thousands of action transitions on our GitHub, in a resource named Action Pairs.xlsx. Intrepid readers who want to go the machine learning route might consider training a statistical model from these annotated data, so that unseen action pairs in the future can also be labeled.
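The lookup itself is trivial once the annotations exist. This sketch assumes a small hand-annotated table in the spirit of Action Pairs.xlsx, with a neutral “then” as the fallback for unannotated transitions; the action labels are invented for illustration:

```python
# Hypothetical annotations: (previous action, next action) -> connective.
CONNECTIVES = {
    ("impress", "promote"): "so",        # a well-earned "so"
    ("work_hard_for", "disrespect"): "but",  # an unfortunate "but"
}

def connect(prev_action, next_action, default="then"):
    """Choose the logical connective that glues two successive plot
    actions together; unseen transitions fall back on a neutral default."""
    return CONNECTIVES.get((prev_action, next_action), default)

print(connect("impress", "promote"))           # so
print(connect("work_hard_for", "disrespect"))  # but
print(connect("meet", "greet"))                # then
```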

We have given our bots a means of charting a path through the forest of plots and of rendering every step they take in the language of tweets, but we have yet to address the question of whether any well-formed path through the forest is actually a story, and if it is, whether it is a story worthy of the telling. It certainly seems odd to suggest that any pathway at all between two arbitrary end points α and Ω can be considered a well-formed story, even if our triples ensure that the connections between its actions are causally coherent. For instance, does it make any sense to start a story with the death of a central protagonist? Probably not, but that didn’t stop Billy Wilder from showing his leading man dead, floating facedown in a pool in the opening scene of Sunset Boulevard.26 Or does it make sense to end a story with the capture of the hero, with no hope of rescue in sight? If you think not, cast your minds back to the end of The Empire Strikes Back, in which George Lucas does precisely that to his roguish hero, Han Solo. Though we might be tempted to try and circumscribe the space of possible storylines in advance by decreeing that certain actions cannot be used to open a story or that certain actions cannot ever close a story, such a move would run counter to the exploratory break-it-to-understand-it spirit of the best bots. If there is a line to be drawn between good stories and bad, or between natural and artificial stories, this is a line we want our Twitterbots to jump all over, for that is what they do best. By playing hopscotch at category boundaries, they reveal to us just how rickety those boundaries can be, and they do it by showing, not telling.

Rather than rejecting storylines that do not start or end at the right places, we can instead frame a story so that any starting action α seems a natural place to start a narrative, and any final action Ω seems as good a place as any to stop and take stock. Fairy tales bookend their plots with an opening “Once upon a time …” and a parting “Happily ever after” to convey precisely this sense of self-containment. So the crawling text at the beginning of Star Wars serves as the top slice of bread for George Lucas’s sci-fi sandwich, and the medal sequence after the destruction of the Death Star is its bottom slice. The two together signal to an audience hungry for closure that “Wow! That was some sandwich!” We need a means of turning any opening event at all into a crawler-worthy introduction and any closing event into one that offers a sense of closure, if perhaps not for the story as a whole then for a distinct chapter within the larger narrative. We can achieve this by defining a set of potential opening bookends for every action and a complementary set of closing bookends for the same actions. So when a story opens with action α, a teller can simply insert an appropriate opening bookend for α at the start of the story, just like Lucas’s crawler, and when a story ends with Ω, the teller can insert a random selection from Ω’s closing bookends as its very last utterance. Readers can find both opening and closing bookend inventories on our GitHub. So what might be an apt opening bookend for our tale of Leonardo versus Steve?

10859_008_fig_009.jpg

The bookend “A had money and B needed money; it was a match made in heaven” is defined for the plot action fund. While this is a far cry from Jane Austen’s “It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife,” it does the job it is designed to do.27 Recall that the “what if” premise for this story, set up in the first tweet, involves one pioneer (Jobs, or a dolphin who thinks it is Jobs) funding another (Leonardo, or a goblin who thinks it is Leonardo). Let’s look at the next action/tweet in the story:

[Figure 8.10]

With the opening bookend in place—it’s all a marriage of convenience based on money—our teller has laid the foundation for the first act proper of its narrative. But notice how the rendering above seems so oddly apt for the role of Steve Jobs. Rather than rely on the stock idiomatic renderings of its lookup table, the teller has used specific information available to it (from the NOC list) about the character of Jobs, namely, that one of his Typical Activities is “pioneering new technologies.” In this way the storyteller contributes to the mise-en-scène of the piece, much as the rainy nights, neon signage, and seedy locations establish the mise-en-scène of Alphaville and Blade Runner. Our lookup table of idiomatic phrases is used not as the foreground of the rendering process but as a backstop when action-specific rendering fails to produce a text that integrates specific details from the NOC. So the next tweet in the story has Leonardo lie to Jobs, but this is not the idiomatic rendering of the action “deceive.” Rather, it alludes directly to Leonardo’s goal:
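To make this division of labor concrete, here is a minimal Python sketch. The NOC excerpt, the templates, and the SPECIFIC mapping are all invented for illustration; only the fallback logic—try an action-specific rendering first, and fall back to the idiomatic table when the needed NOC detail is missing—reflects the strategy described above.

```python
# Hypothetical NOC excerpt; the real NOC list has hundreds of characters
# and many more fields per character.
NOC = {
    "Steve Jobs": {"Typical Activities": "pioneering new technologies"},
    "Leonardo da Vinci": {},
}

# Stock idiomatic renderings: the backstop.
IDIOMATIC = {"fund": "{A} bankrolled {B}."}

# Action-specific templates that require a particular NOC field of the agent.
SPECIFIC = {"fund": ("Typical Activities",
                     "Ever one for {field}, {A} agreed to fund {B}.")}

def render(action, A, B):
    """Prefer a NOC-specific rendering; fall back to the idiomatic table."""
    if action in SPECIFIC:
        field_name, template = SPECIFIC[action]
        field = NOC.get(A, {}).get(field_name)
        if field:
            return template.format(A=A, B=B, field=field)
    return IDIOMATIC[action].format(A=A, B=B)

print(render("fund", "Steve Jobs", "Leonardo da Vinci"))
# -> Ever one for pioneering new technologies, Steve Jobs agreed to fund Leonardo da Vinci.
print(render("fund", "Leonardo da Vinci", "Steve Jobs"))
# -> Leonardo da Vinci bankrolled Steve Jobs.  (backstop: no field available)
```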

[Figure 8.11]

Notice also that Leonardo is qualified here as “brilliant” in the context of his role as a deceiver. The resource Quality Inventory.xlsx provides a mapping from plot actions to the specific qualities of the agents and patients that facilitate them. The action “deceive” is facilitated by the qualities “two-faced,” “insincere,” and “dishonest” in the deceiver, while a patient is more easily deceived if the agent is “brilliant.” No one likes to be deceived, and the bot’s triple-store suggests Jobs’s natural reaction (notice that an emoji translation is included only if there is space in the tweet):
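A sketch of this lookup might look as follows in Python. The inventory slice here is reconstructed from the qualities just mentioned in the text; the real Quality Inventory.xlsx covers every plot action.

```python
# Hypothetical slice of Quality Inventory.xlsx: for each plot action, the
# agent qualities that facilitate it (as described above, "brilliant" in the
# agent also makes the patient easier to deceive).
QUALITY_INVENTORY = {
    "deceive": {"agent": ["two-faced", "insincere", "dishonest", "brilliant"]},
}

def qualify(action, role, character_qualities):
    """Pick a quality the character actually has that facilitates the action."""
    facilitating = QUALITY_INVENTORY.get(action, {}).get(role, [])
    for quality in character_qualities:
        if quality in facilitating:
            return quality
    return None

# Leonardo's (hypothetical) qualities, as might be drawn from the NOC list:
print(qualify("deceive", "agent", ["brilliant", "curious"]))  # -> brilliant
```

The returned quality is what lets the teller qualify Leonardo as a “brilliant” deceiver rather than reaching for a generic epithet.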

[Figure 8.12]

This is a stock rendering, straight from the bot’s lookup table of plot verbs to linguistic templates. But the action that follows this is very specific to Leonardo:

[Figure 8.13]

Any plot verb that might incorporate elements of an agent or patient’s entries in the NOC list is amenable to this kind of specialized rendering. Violent actions are the obvious go-to here, as these can directly avail of the Weapon of Choice field. But other kinds of action can exploit other fields too: actions involving travel and avoidance can avail of the Vehicle of Choice field and the various Address fields, while creative use can also be made of the Creator, Typical Activities, Group Affiliation, Seen Wearing and Marital Status fields in the right action contexts. Here the address field is used to add local color to the plot action hide_from:
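In sketch form, the pairing of action families with exploitable NOC fields can be expressed as a simple mapping. The field names below follow the NOC list as described in the text, but the mapping itself and the sample NOC row are illustrative inventions:

```python
# Hypothetical mapping from action families to the NOC fields they can exploit.
FIELDS_BY_ACTION = {
    "attack":    ["Weapon of Choice"],
    "flee_from": ["Vehicle of Choice", "Address 1", "Address 2", "Address 3"],
    "hide_from": ["Address 1", "Address 2", "Address 3"],
}

NOC = {  # a hypothetical NOC row
    "Leonardo da Vinci": {"Weapon of Choice": "war machines",
                          "Address 1": "Florence"},
}

def local_color(action, character):
    """Return the NOC details that can lend local color to this action."""
    fields = FIELDS_BY_ACTION.get(action, [])
    row = NOC.get(character, {})
    return {f: row[f] for f in fields if f in row}

print(local_color("hide_from", "Leonardo da Vinci"))  # {'Address 1': 'Florence'}
```

Any detail returned can then be woven into the action’s rendering, exactly as the address field colors the hide_from tweet below.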

[Figure 8.14]

The teller then returns to its first element of mise-en-scène in its follow-on tweet:

[Figure 8.15]

We have already seen Leonardo’s secret weapon; now it’s time to see Steve’s:

[Figure 8.16]

This is, of course, a baked-in joke from the NOC list, yet the storyteller uses it at the right time and to good effect, leaving the reader to wonder how Steve might have persuaded an angry Leonardo to eat this dubious offering in the first place. What should one do when laid low with a poisoned peace offering? Gurn and splutter madly, of course, all the while hurling soul-rending curses:

[Figure 8.17]

We are now at the final action of the final plot triple, which has its protagonist earnestly offer sacrifices in the hope of mercy. This may seem a strange action on which to end the tale, but we can hope that the closing bookend saves the day:

[Figure 8.18]

So how does the storyteller tie a knot in its tale after a final action like this? It could dangle the prospect of a sequel of the “Will he or won’t he?” variety, or it might offer a sop of a resolution that attempts to end the tale on a balanced note (after all, the bot does not really appreciate who is ultimately in the right here):

[Figure 8.19]

So ends the acrimonious tale of two pioneering geniuses from very different eras, whose forays into new-fangled technologies descend into old-fashioned jealousy, violence, and ritual sacrifice.

The “Show, Don’t Tell” Must Go On

Anyone who has ever ad-libbed a story to a child will know that children can be a tough audience. Kids have no truck with convenient abstractions and will spy any chink in the boilerplate of a story built from prefabricated parts. Even when telling the same story for the umpteenth time, a teller must be prepared to create anew each time, not the plot as a whole but the minor details that lend vividness to a tale. So when a parent says to a child, “Then the snake insulted the monkey,” this child is going to want to know exactly what that snake said to that monkey. We humans care about the details of our interactions with other people, or with the anthropomorphized animals we agree to treat as people, because this is the basis of empathy that allows us to project ourselves into another’s shoes. Just as we cannot actually insult someone by merely uttering the words, “I insult you, I insult you,” a storyteller shouldn’t be allowed to merely assert that A insulted B. Like a kid putting a parent on the spot, we will want to know exactly what was said, and we won’t be placated with generic boilerplate like, “You no-good varmint, you!” A storyteller is going to have to invent an incisive put-down on the fly, one that meets the demands of what happens next and shows insight into the peculiar characteristics of the insult’s intended target.

Fortunately, this kind of tweet-sized speech-act is all in a day’s work for a bot designed to generate human-scale metaphors on demand. Because our bot’s stories are already anchored in metaphors, it is not a big ask for it to generate additional embedded metaphors on the fly, to flesh out one character’s figurative view of another’s virtues or flaws. A bot that can articulate the similarities of Steve Jobs to Leonardo da Vinci can do the same for similarities linking Jobs to Tony Stark (if a compliment is needed), Jobs to Kim Jong-un (for an insult), or Leonardo to Doc Emmett Brown (this one could go either way, depending on the speaker’s goal). But let’s consider a new bot-time story in which Frank Underwood, the scenery-munching political Icarus portrayed by Kevin Spacey in Netflix’s drama House of Cards, is metaphorically equated with a politician from the real world who is just as ambitious and Machiavellian, Richard Nixon:28

[Figure 8.20]

Aptness is a quality that resides as much in the mind of a reader as in a text itself. So while the choice of the snake emoji for Nixon is entirely random, it is tempting to see it as an apt and deliberate choice by the bot, for when so much of what a bot packages is carefully chosen, the bits that it plucks at random from its grab bag of possibilities can seem every bit as deliberate as those it judiciously crafts. The choice of monkey for Underwood is just as random as Nixon’s snake, if much less obvious as a metaphor, yet readers are free to see this as an apt symbol of political agility if they so wish. In any case, the bot deliberately picks its opening bookend to pit one politician against the other in a campaign_against event:

[Figure 8.21]

Having sown the seeds of political disagreement, the bot opens with its first act:

[Figure 8.22]

The political rivals quickly fall to rancor and the mud begins to fly. The plot calls for Nixon to humiliate Underwood, but how should this humiliation be realized? The idiomatic renderings database provides some boilerplate for our bot to use:

A are_humiliated_by B: B delivered a humiliating lecture to A, B read A the riot act, B publicly humiliated A, B did not spare A’s feelings in a scathing rant, B launched a humiliating tirade at A, B reduced A’s reputation to rubble, B verbally dismantled A brick by brick, B publicly chastised A as you would chastise a child, B excoriated A with a humiliating lecture, B made A feel very small indeed, B gave A a public dressing down, B subjected A to public ridicule

But no matter how colloquial the idiom, these mappings can do no more than say that an act of humiliation has occurred, without actually telling us what was said. But a bot that writes its own lines can focus on the negative talking points of the target, to craft an apt metaphor that is also a humiliatingly accurate put-down:
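One simple way to realize such a put-down—sketched here with invented talking points rather than the bot’s real knowledge base—is to compare the target to whichever other character shares the most of its flaws:

```python
# Hypothetical Negative Talking Points; the real values come from the NOC list.
NEGATIVE_TALKING_POINTS = {
    "Frank Underwood": ["ruthless", "scheming", "manipulative"],
    "Keyser Soze": ["ruthless", "scheming", "elusive"],
    "Richard Nixon": ["secretive", "deceptive", "jowly"],
}

def put_down(target):
    """Insult the target by comparison to the character sharing most of its flaws."""
    flaws = set(NEGATIVE_TALKING_POINTS[target])
    best, shared = None, set()
    for other, points in NEGATIVE_TALKING_POINTS.items():
        if other == target:
            continue
        common = flaws & set(points)
        if len(common) > len(shared):
            best, shared = other, common
    traits = " and ".join(sorted(shared))
    return f"You are as {traits} as {best}!"

print(put_down("Frank Underwood"))
# -> You are as ruthless and scheming as Keyser Soze!
```

The shared flaws both pick the vehicle of the metaphor and supply the wording of the insult, so the put-down lands as “humiliatingly accurate” rather than generic.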

[Figure 8.23]

The dramatic irony of comparing Frank Underwood to Keyser Söze of The Usual Suspects (spoiler alert: Söze was also portrayed by Kevin Spacey—or was he?) is not beyond the reach of the metaphor generation process, as the NOC list allows just this kind of metaknowledge to be used when forming similarity judgments.29 However, the choice of comparison above must rank as another happy accident in the mold of Nixon’s snake. The bot, via its characterization of Nixon, does not intend to break the fourth wall, but that is the result nonetheless. When so many of a bot’s choices are informed by knowledge, it becomes hard to tell when it is knowingly winking at its audience, though the larger point here is that any storyteller who pursues a knowledge-based approach to character formation is freed from a dependence on baked-in gag lines for its speech-acts. And just as one speech-act often begets another in human interaction, we might expect the butt of one put-down to be the originator of the next. The plot dictates that Underwood now hates Nixon for his temerity, but his insult is internalized:

[Figure 8.24]

So the bot reaches into its Negative Talking Points for Nixon to pull out “secretive,” “deceptive,” and ... “jowly”? This may not seem like the most rational response, but in the bot’s defense, an emotion as extreme as hatred is rarely rational. We humans also reach for the first pejoratives that come to mind when we lash out at others, and our bots—in their simplicity—have a tendency to mirror our least flattering features. But let’s skip ahead to the end of this tale, passing over Underwood and Nixon’s temporary rapprochement and subsequent falling out (again). The final action in the story has Underwood cheat Nixon, and this is rendered as a financial swindle:

[Figure 8.25]

The closing bookend is perhaps more interesting, if only because it resonates so well—in what is not so much a happy ending as another happy accident—with our understanding of Underwood’s character in his Netflix drama House of Cards:

[Figure 8.26]

Bots may be clockwork contrivances, but they contrive for us, to create a series of happy accidents for our amusement and occasional incomprehension. We wind them up and set them loose so they might turn words and ideas into playthings and thereby wend their way into our imaginations.

Toy Story Ad Finitum

Children love to play with dolls, and so their paraphernalia (sold separately) have become the crack cocaine of the toy industry, especially when the merchandise is shifted as part of a tie-in deal with a hit movie. Once a child becomes the proud owner of a Han or a Rey figure, Chewbacca and Leia and Luke and Darth become clear objects of desire, too, as do the scale-model sand speeders, TIE-fighters, X-wings, Millennium Falcons, and anything else that can be molded in plastic. But if children show a laser-like focus on the latest tie-in products in the run-up to Christmas, the story is very different after Christmas, once the packaging is cleared away and the kids settle down to some serious playtime.

There are no genre boundaries or franchise restrictions in the toy box, and children show an ecumenical zeal in the ways they play with toys and accessories from multiple franchises, even when those elements have vastly mismatched scales. A Barbie doll or a Disney princess can stand in for Princess Leia in a pinch, and a soccer ball makes a decent Death Star. George Lucas’s hippy-dippy notions of “the Force” feel right at home in Hogwarts, so Obi-Wan Kenobi and Hermione Granger can make a great tag team against Darth Vader and Lord Voldemort (who makes an ideal Sith lord). Lego men and GI Joes can exist side by side, with a little Swiftian fantasy providing the necessary glue. Wittgenstein suggested that philosophers can learn a lot by watching children play: “I can well understand why children love sand,” he said, but he could just as well have been talking about how kids play with any kind of toy with rich affordances to explore.

A child’s imagination is rarely contained by anything so prosaic as the line between reality and fiction. Kids had Spider-Man square off against Superman in epic toy battles long before Marvel and DC got their acts together with a comic book crossover in 1975, and when DC pitted Superman against Muhammad Ali in 1978, it was long after kids had first put the pair on the same imaginary fight card.30 Kids have fertile imaginations when it comes to inventing bizarre mashups and face-offs that cross conventional boundaries of time, genre, medium, and historicity. Hollywood has thus sought to foster a childlike imagination when appealing to kids with blended offerings such as 1943’s Frankenstein Meets the Wolf Man, yet as memorably satirized in Robert Altman’s film The Player, many films aimed at adults are similarly motivated by cross-genre blends. So who can blame writers for wanting to make sport of their own gimmicks, as in this exchange in Jurassic Park that wittily exposes the cut-up at the movie’s heart:31

John Hammond:

All major theme parks have delays. When they opened Disneyland in 1956, nothing worked!

Ian Malcolm:

Yeah, but, John, if the Pirates of the Caribbean breaks down, the pirates don’t eat the tourists.

Jurassic Park is as childlike a blend (in the best sense of “childlike”) as Ali versus Superman or King Kong versus Godzilla or Abbott and Costello Meet Frankenstein or The Towering Inferno (a film adapted from two novels, The Tower and The Glass Inferno) or any other mashup of narratives that you care to mention, from big-budget blockbusters to obscure fan-fiction blogs. This enthusiasm for coloring outside the lines has also given us TV’s Community, The League of Extraordinary Gentlemen, The Cabin in the Woods, Iron Sky, Penny Dreadful, and the BBC’s Dickensian, a show that throws all of Dickens into a blender so that Bob Cratchit can be arrested for the murder of Jacob Marley by Inspector Bucket of Bleak House. As we have seen in this chapter, our bots can play this game too, and play it well, for our amusement if not theirs. So the big lesson we draw here concerns the knowledge representations we give our Twitterbots. Real children must make do with imagination when toys are in short supply, but the more diverse the toy box that we can gift to our digital children, the more imagination they can show when playing genre-bending games for themselves.

Trace Elements

Squeezing a whole story into a single tweet can be harder than squeezing a ship into a bottle. Yet we shouldn’t overly concern ourselves with size limits, at least as far as Tracery and CBDQ are concerned, because CBDQ simply declines to tweet any output that exceeds Twitter’s limit. You will find a pair of Tracery grammars for generating one-tweet stories in a directory of our TraceElements repository named What-If Generator. Each translates the causal structures of Scéalextric into simple grammar rules that generate the next state of a story (the right-hand side of a rule) from its current state (the left-hand side). The following is a tweet in which a two-act story fits within Twitter’s original 140-character limit:
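The compilation of causal links into grammar rules can be sketched as follows. The causal fragment and the rendering templates below are hypothetical stand-ins for Scéalextric’s real triple store, but the expansion loop mirrors how a context-free grammar rewrites a story state into a rendering plus a follow-on state:

```python
import random

# Hypothetical fragment of Scéalextric's causal graph: for each action,
# the actions that can plausibly follow it.
CAUSAL = {
    "are_commanded_by": ["disagree_with"],
    "disagree_with": [],  # terminal in this tiny fragment
}

# Hypothetical renderings for each action (the grammar's right-hand sides).
RENDER = {
    "are_commanded_by": "{A} was commanded by {B}",
    "disagree_with": 'but our "soldier" then disagreed with this "general"',
}

def what_if(action, A, B):
    """Expand a starting story state into a one-tweet 'What if' story."""
    parts = [RENDER[action].format(A=A, B=B)]
    followers = CAUSAL.get(action, [])
    while followers:
        action = random.choice(followers)
        parts.append(RENDER[action].format(A=A, B=B))
        followers = CAUSAL.get(action, [])
    return "What if " + " ".join(parts) + "?"

print(what_if("are_commanded_by", "Jaime Lannister", "Professor James Moriarty"))
```

Run as-is, this reproduces (modulo quote styling) the two-act example tweet shown below.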

What if Jaime Lannister was commanded by Professor James Moriarty but our “soldier” then disagreed with this “general”?

And here’s a grammar output that requires the new 280-character limit:

What if Orson Welles translated for Tom Hanks and our “interpreter” was then trusted by this “listener,” but our “intimate” then manipulated this “confidante,” and our “cheater” then profited from this “sucker”?

In either case, notice how each action of the story uses metaphors rather than pronouns to refer to the participants of a previous action. Our context-free grammars have no memory of what has gone before, so these stories have no persistent memories of their protagonists, their names, or their genders. Yet the grammar rules that generate subsequent actions from current actions can use knowledge of the current action (the verb, not its participants) to generate referring metaphors for the participants of the next. This version of the grammar is named What-if grammar backward.txt because the referring metaphors always refer back to the semantics of the previous action. Another variant, called What-if grammar forward.txt, generates referring metaphors that are specific to the next action only. Try both in CBDQ to see which generates the more coherent narratives for you.

With a little help from CBDQ, we can also use Tracery to generate stories that extend over an arbitrary number of threaded tweets. The key is CBDQ’s support for a response grammar, which allows a Tracery-based bot to respond to mentions from other Twitter users. If the core Tracery grammar generates the first act of the story and mentions itself in that first tweet, then the bot’s response grammar can respond to this first tweet—in effect, respond to itself—with a follow-up action in a new tweet. If this follow-up tweet also mentions the bot’s own handle, the response grammar will again be allowed to respond to itself with subsequent actions in subsequent tweets. This call-and-response structure marshals the two grammar components of CBDQ to allow a bot to generate a long-form story by talking to itself. You will find grammars for each side of the conversation in a directory named Story Generator in our TraceElements repository. You may notice that these grammars give names (such as Flotsam and Jetsam, or Donald and Hillary) to the characters in each story, and consistently use the same names for A and B across tweets in the same narrative. Different narratives may use different character names, so how does the grammar remember which names to use in different tweets?

We use another trick to build a long-term view into a grammar that lacks even a short-term memory. Rather than use the nonterminals of the grammar to represent simple story-states that correspond to plot actions, we create composite states that bind a plot action to the final action of a story. Thus, the grammar uses states such as fall_in_love_with/are_betrayed_by (which can be read as: A falls in love with B, but is eventually betrayed by B), and uses its rules to interlink states that end with the same final action. Because each story state “knows” how its story will end, it can use this knowledge to assign coherent character names across tweets. Thus, for example, stories that end with are_betrayed_by always use the names Mia and Woody. Since the grammar generates stories that terminate with more than two hundred unique actions, it uses a corresponding number of name pairs to name its characters in the same number of story families. Incidentally, this strategy resolves another issue with grammar-generated stories, which have a tendency to pursue meandering and looping routes through their possibility spaces. These complex states ensure that stories approach their conclusions with a sense of momentum, and they also allow the grammar to know when to end a tale. A story that reaches a state N/N, such as are_betrayed_by/are_betrayed_by, will have naturally reached its predestined conclusion and have nowhere else to go.
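A minimal sketch of this composite-state trick—with invented rules and just one name pair—shows how a memoryless grammar can still name its characters consistently and know when to stop:

```python
import random

# Hypothetical rules over composite states of the form action/final_action.
# Rules only interlink states that share the same final action, and a state
# N/N marks the story's predestined conclusion.
RULES = {
    "fall_in_love_with/are_betrayed_by": ["are_deceived_by/are_betrayed_by"],
    "are_deceived_by/are_betrayed_by": ["are_betrayed_by/are_betrayed_by"],
    "are_betrayed_by/are_betrayed_by": [],  # N/N: nowhere else to go
}

# One name pair per story family, keyed by the shared final action.
NAMES_BY_ENDING = {"are_betrayed_by": ("Mia", "Woody")}

def tell(state):
    """Walk composite states to their N/N conclusion; since every state
    'knows' its story's ending, character names stay coherent throughout."""
    ending = state.split("/")[1]
    A, B = NAMES_BY_ENDING[ending]
    tweets = []
    while True:
        action = state.split("/")[0]
        tweets.append(f"{A} {action.replace('_', ' ')} {B}")
        followers = RULES[state]
        if not followers:  # reached N/N
            break
        state = random.choice(followers)
    return tweets

for tweet in tell("fall_in_love_with/are_betrayed_by"):
    print(tweet)  # raw plot verbs; the real grammar renders them idiomatically
```

Note that the walk always terminates, because every rule moves its state closer to the N/N conclusion it was built around.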

Notes