3 Make Something That Makes Something

Animal Spirits

Twitterbots have the potential to do wild and wonderful things, from cracking jokes to making us think, but when all you can see is their outputs, it can be hard to visualize what’s going on under the hood. Later in this book, we begin to build our own Twitterbots from the ground up, gluing each piece together and applying a nice coat of paint to make them shine. Before we begin, though, it is useful to think for a moment about the theory of Twitterbots and generative machinery more generally. We come across generators every day, from machines that assign school places and hospital appointments to the algorithms that drop candy from the sky in the games on our phones. In this chapter, we think about what separates one generator from another and how to think about generative software in terms of the whole of its outputs, and not just one or two examples.

To do this, we are going to make some of our own generators so we can tweak and adjust them to think about generation in different ways. To keep it simple, we are going to make our generators by hand. It doesn’t take much to get going—you can just use your memory and this book—but if you have a trusty pen and paper by your side or a note-taking app on your phone, and perhaps some dice for flavor, that will certainly make things easier. We’ll start by using a common generative technique that developers all over the world still use, and all it needs is a few lists of words and some random numbers.

First, we need to pick our subject matter, the domain from which we aim to pluck new artifacts. It is always useful when starting out to think of something relatively simple that has a degree of inherent structure but allows for minute variation in its details. So for our first generator, we are going to generate fictional names for fictional English pubs. In England, pubs are named after a great many things—kings and queens, historical events, guilds, aristocrats, eccentric animals, naughty farmers—but they often instantiate very simple structures that are easy to break down.1 (You may remember a pub that was aptly named “The Slaughtered Lamb” in the movie An American Werewolf in London.) Let’s start by making pub names that follow the “The <noun> and <noun>” pattern, such as “The Cat and Fiddle” or “The Fox and Hounds.” We begin by writing down a list of ten animals. If you don’t have a pen handy or prefer using a premade list, here is one we made earlier:

  1. Walrus
  2. Antelope
  3. Labradoodle
  4. Centaur
  5. Pussycat
  6. Lion
  7. Owl
  8. Beetle
  9. Duck
  10. Cow

We call this list a corpus, which is a body of text or a database of information that we can use in our generator. The list is special because we can pick any item from the list and will know something about it before we have even looked at what we picked out. We know it will be an English noun, we know it will describe an animal, and we know it will be singular, not plural. We can thus use this list to fill out our template, “The <noun> and <noun>,” by randomly choosing any two entries from the list. The easiest way to do this is to flip this book open to a random page and choose the least significant digit in the page number (so if you turn to page 143, use the number 3, and treat a 0 as a 10). You can also roll two six-sided dice if you have them, summing the results and wrapping around to 1 or 2 if you get 11 or 12. Do this process twice, putting the resulting animal words into our template, and you might end up with “The Lion and Labradoodle” or perhaps “The Owl and Centaur.” After a while you might want to throw some nonanimals into the list (e.g., king, queen, bishop, lord, duke) or increase the size of the list (in which case you may have to change the way you generate random numbers, but you can always generate two numbers and add them together).

In any case, you have just generated your first artifact using the list. What were the ingredients in this simple generator? Well, we simply needed a list of things to draw from, our corpus—in this case, a list of animals. Remember that we knew certain things about all the items on this list, which meant we could choose any item without knowing exactly what it would be. Like dipping into a bag of mixed candy, we may not know what kind of candy we will pick out next, but we do know it will be sweet and edible, not sharp and deadly or intangible and abstract. For this particular generator, we also needed a template to fill. This template codifies everything we know about the structure of the artifacts we aim to create. Our template in this instance is very simple, but we will introduce others that will make the idea more explicit. Finally, we needed a procedure to tell us how to combine our corpus with our template. In this case, we generated some random numbers and used those numbers to choose words from our list.
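If you would rather see this recipe as code than as dice and page flipping, here is a minimal sketch in Java, the language we use for our stand-alone bots later in the book. The class and method names are our own illustrative choices, and Java's Random class stands in for the dice:

import java.util.Random;

public class PubNameGenerator {
    private static final String[] ANIMALS = {
        "Walrus", "Antelope", "Labradoodle", "Centaur", "Pussycat",
        "Lion", "Owl", "Beetle", "Duck", "Cow"
    };
    private static final Random DICE = new Random();

    // The procedure: pick two random entries from the corpus and
    // drop them into the template "The <noun> and <noun>".
    public static String generate() {
        String first = ANIMALS[DICE.nextInt(ANIMALS.length)];
        String second = ANIMALS[DICE.nextInt(ANIMALS.length)];
        return "The " + first + " and " + second;
    }

    public static void main(String[] args) {
        System.out.println(generate());  // e.g. "The Lion and Labradoodle"
    }
}

Each run rolls its own random numbers; corpus, template, and procedure are all the program needs.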

Let’s expand our pub name generator with some new ideas. First, we compile a new list of ten adjectives. As before, if you don’t have a good way to note them down or would just prefer to use our examples, you can use our list below:

  1. Happy
  2. Dancing
  3. Laughing
  4. Lounging
  5. Lucky
  6. Ugly
  7. Tipsy
  8. Skipping
  9. Singing
  10. Lovable

Let’s also experiment with a new template, “The <adjective> <animal>.” Instead of writing <noun>, we’ve now used a more descriptive label to mark the difference between the two corpora that we’ve constructed. The first list is now <animal> and the second list is <adjective>. So our first template would now be rewritten as “The <animal> and <animal>.” AI researchers refer to a collection of typed templates such as this as a semantic grammar because each template captures both syntactic and semantic constraints for generating a new linguistic artifact.2 Using the same random number method as before, we can now choose one word from the adjective list and one word from the animal list to instantiate our simple semantic grammar and generate new pub names such as “The Dancing Antelope” and “The Tipsy Beetle.”
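In code, the typed version of the generator needs only a second list and a second template. Continuing the illustrative Java sketch from a moment ago (not code from this book's GitHub), we might add the following members to the same class:

private static final String[] ADJECTIVES = {
    "Happy", "Dancing", "Laughing", "Lounging", "Lucky",
    "Ugly", "Tipsy", "Skipping", "Singing", "Lovable"
};

// "The <adjective> <animal>": each slot is filled only from the list of its own type.
public static String generateTypedName() {
    String adjective = ADJECTIVES[DICE.nextInt(ADJECTIVES.length)];
    String animal = ANIMALS[DICE.nextInt(ANIMALS.length)];
    return "The " + adjective + " " + animal;
}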

You may have noticed while generating new artifacts that some are more appropriate as pub names than others. For example, “The Owl and Pussycat” evokes Edward Lear’s poem about the two titular animals who elope in a boat and marry, defying society’s standards for interspecies relationships. “The Ugly Duck” might also remind us of the children’s story about an ugly duckling that transforms into a beautiful swan, reminding children that getting the last laugh is more important than being happy with who you are. A name need not reference popular culture to stand out; “The Lion and Pussycat,” for example, also has a poetic quality to it, since the two animals have interesting shared qualities as well as obvious differences. By contrast, “The Walrus and Antelope” has a less obvious resonance, although this is a subjective judgment, and some readers may well prefer it to the names we consider interesting.

Whichever you might personally favor, some results will clearly resonate with some readers more than others. But our generator is not very well positioned to understand this because it does not know that pussycats and lions are similar, just as it knows nothing of Edward Lear’s poetry. In fact it does not really know anything, relying instead on the built-in structure of its word lists and templates. Thus, it relies on the fact that the <animal> list contains only animals, that the <adjective> list contains only adjectives, and that anywhere it is appropriate to put the name of an animal or adjective, it can choose anything from the right list.

The fact that our generators can create surprises can be both a strength and a weakness. A result like “The Owl and Pussycat” is more than just two random animals; it is also a potent cultural reference that imparts an additional flavor to the whole, making it more than the sum of its parts. Now suppose we were to show others the output of our generator but not reveal exactly how it works. They might marvel at the complexity of the generated names, asking themselves: Does our generator know how to allude to famous literary figures, or did they just write down a big list of fables and cultural ideas? They can only guess at exactly how our generator works, and in these cases, readers are often generous with their guesses. If they are presented with an artifact that carries extra layers of meaning, they might think our generator is much more complicated and clever than it really is. This unearned charity means they may really be quite impressed with our generator and even give it the benefit of the doubt whenever it slips up.

Of course, this bonus has drawbacks of its own. If our generator hits some truly high notes, followers might later feel let down if they encounter periods of noise or if they intuit how the simple system actually works. This is a common syndrome that afflicts many AI systems, for the promise of an idea often exceeds its technical reality. Indeed, this exciting promise can curdle into disappointment once a system’s details are exposed to scrutiny.

A more complex problem comes from the fact that we cannot anticipate the unintended meanings that can slip into a bot’s outputs, so just about anything can happen. Just as our generator does not know when something is a poetic reference, it also does not know if something is inappropriate or offensive, or worse. In February 2015 police arrived at the house of Twitter user Jeffry van der Goot to discuss a death threat made on Twitter—not by Jeffry himself but by a Twitterbot based on his tweets.3 By using Jeffry’s own tweets as a corpus of word fodder to mash up into new sentences, his Twitterbot would write tweets in the same style. But just like our pub generator, Jeffry’s Twitterbot doppelganger did not and could not understand the ramifications of the texts it was producing. When one bot tweet cited an event in Amsterdam among words that sounded vaguely threatening, official searches inevitably flagged the result and the police responded with haste. Jeffry was asked to decommission, or retire, the bot, a decision that at least one Twitter user described as a “death penalty” for the bot.

Space Is Big—Really Big

Suppose we want to print out our pub name generator so as to give it to some friends, but we are a tad worried, or perhaps just a little curious, about what it might generate when we are not around to explain it. What we need to do is to think about our generator—or any generative system—as a whole system rather than simply considering any one output. A single output from a generator is sometimes called an artifact, such as, in our case, the name of a fictional pub: “The Tipsy Walrus.” If we want to talk about a set of artifacts instead, we use a different term: a space. Space is a mathematical term for a set of things that share a common property, a common use, or a common genesis. But we mostly use the term space rather than set because we want to emphasize that a space is something we can navigate and something we are eager to explore.

One useful space we can talk about when discussing a generative system is its possibility space. This is a set containing every single possible artifact a generator can produce, along with the probability of production for each one. The sum of all of these probabilities must be 1, because they represent every possible output that can emerge whenever we use our generator to make something. Possibility spaces are usually quite large. To see just how large a possibility space can be, let’s calculate the size of the space of the pub name generator that uses the simple template, “The <animal> and <animal>.” There are ten animals to choose from each time we want to insert an animal. Suppose we choose Beetle for the first animal. How many pubs are there for the partial template “The Beetle and <animal>”? That one’s easy: there are ten—one for each animal that can fill the second template slot (The Beetle and Walrus, The Beetle and Antelope, and so on). The same is true if we choose Duck or Labradoodle for the first slot. So there are ten animals for the first slot, and each one combines with ten possibilities for the second slot: that makes for 10 × 10 = 100 pub names in total for this template.

Possibility spaces can grow quite rapidly as we increase the number of options to choose from. If we add an adjective to the first animal, allowing the generation of names such as “The Happy Duck and Antelope,” that increases the number of possible outputs by a factor of ten, yielding 10 × 10 × 10 = 1,000 names in total. If both animals take adjectives, there are 10,000 names in the possibility space. Note how a few small lists can conspire, via a simple template, to yield 10,000 possibilities. Large possibility spaces can be great for generators, because they facilitate the generation of a vast number of potential outputs for users to discover. If we added 100 animal names to our <animal> list and another 100 adjectives to our <adjective> list, the bot’s possibility space would grow much larger, to the point where someone using our generator might never see the same output twice (assuming they only used it for a little while).
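These counts are simply products of the slot sizes, and a few throwaway lines of Java (purely illustrative, and not part of any later bot) will confirm them:

public class SpaceSize {
    public static void main(String[] args) {
        int animals = 10, adjectives = 10;
        System.out.println(animals * animals);                           // "The <animal> and <animal>": 100
        System.out.println(adjectives * animals * animals);              // one adjective added: 1,000
        System.out.println(adjectives * animals * adjectives * animals); // both animals take adjectives: 10,000
    }
}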

A simple case of everyday possibility spaces can be found in a deck of cards. If you take a regular fifty-two-card deck and shuffle it, it is highly likely that no one has ever produced that exact ordering of cards before in human history. In this case, the corpus is made up of fifty-two distinct cards from the deck, and our template is fifty-two slots, each of which asks for a different card. The mathematical calculations for this possibility space are slightly different from our animal example, because once a card has been selected, it cannot be selected again. To calculate this we use the mathematical formula for permutations. There are 52 possibilities for the first card in the deck, then 51 possibilities for the second card (since one has been removed), then 50 for the third, and so on. Mathematicians write this as 52! or 52 factorial. It works out to the same value as 52 × 51 × 50 × … × 3 × 2 × 1.

This number is enormous, having sixty-eight digits when multiplied out, so even writing it down feels like an effort, much less imagining how big it really is. If you shuffled cards every day of your life, from the moment you were born until the day that you died, you would scarcely make a dent in this number. There are more unique decks of cards than there are stars in the universe. While that may seem like a huge space compared to our pub name generator, the mathematics of generation gets so big so fast that it’s never quite as far away as you think.
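If you would rather not take the sixty-eight digits on trust, a few lines of Java using BigInteger (a sketch of our own, not tied to anything else in this book) will multiply the number out:

import java.math.BigInteger;

public class DeckOrderings {
    public static void main(String[] args) {
        BigInteger orderings = BigInteger.ONE;
        for (int card = 52; card >= 1; card--) {
            orderings = orderings.multiply(BigInteger.valueOf(card));  // 52 x 51 x ... x 2 x 1
        }
        System.out.println(orderings);                     // the full number
        System.out.println(orderings.toString().length()); // 68 digits
    }
}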

Generating with Forked Tongues

In 1941 the author Jorge Luis Borges published a collection of short stories in Spanish with a title that translates loosely as The Garden of Forking Paths. This collection includes the story we met in the first chapter, “The Library of Babel,” in which Borges imagined a seemingly endless library of adjacent hexagonal rooms that stretch in all directions. The rooms are filled with row upon row of books, stacked up in cases, but unlike a regular library, these books are not arranged in neat alphabetical order. There is no Gardening section. You cannot find the section with all the Edward Lear poetry even if you are willing to dedicate your life to the search. Instead, each book contains a random assortment of letters, spaces, and punctuation marks, stacked shelf upon shelf, room upon room, out into the far distance. Borges’s Library of Babel explores the implications of such a library if it ever really existed—and it is physically possible in principle—and tells extraordinary tales of the librarians driven mad by it.

We have already discussed how large the possibility space of a generator can grow, so you might suspect that the Library of Babel is big. But how big? Even if each book in the library were limited to just a few hundred characters apiece—this paragraph has under 400 characters—you would still end up with a possibility space so large that the number itself has more than 400 digits.

But there are many more interesting things about the Library of Babel than just its size. Although many of the books inside—indeed, most of them—are unreadable gibberish, the way the library is defined means there exist many perfectly readable books, lost among the bookshelves of nonsense. Somewhere in the library there is a perfect copy of Moby-Dick. Additionally, hidden away on a shelf is a copy of Moby-Dick that opens with the line “Call me Big Bird” instead of “Call me Ishmael.” In fact, there is a copy of any book that we can conceive of that follows the rules of the library, containing only letters, spaces, periods, and commas. So somewhere in the library there is a book describing the location of every other book in the library, a perfect index catalog. (Actually, the perfect catalog would be too large to be yet another book in the library, but we can imagine a perfect catalog of catalogs that collectively describes the library.) Of course, huge numbers of other books look identical to a perfect catalog but contain subtle errors. You could spend your whole life searching for the catalog (as some of the characters in Borges’s story do) and not find a book with more than a couple of words of readable English.

The Library of Babel has incredible gems hidden inside it. It even has a copy of this very book inside of it, though naturally it also has a long essay criticizing this book’s authors, written by Charlemagne and Florence Nightingale. Yet these exciting finds are offset by the astronomically small odds of ever actually finding them. This is a problem that often plagues the designers of generators, particularly when trying to make a generator bigger and better than ever before: How do you make your space larger and more varied without losing track of where the best bits are? When we started this chapter, we described a modest generator of pub names using just ten animal names. We knew everything about that list and how to use it, so we felt pretty confident it would generate good-quality outputs. But ten animals does not a big space make, and it can take a long time to come up with new ideas. It seems an attractive option, then, to use a more comprehensive list of fillers that someone else has written for us, so let’s try that option now.

We (and you) can use this book as our new corpus. Instead of picking two animal names from our list, close your eyes, open this book to a random page, and stab your finger down at a random word. Read ahead until you find a noun. Remember that word and then repeat the process again. Use those two words to fill our simple “The <noun> and <noun>” template, so you might, for example, generate “The List and Generator” or “The Animal and Dent.” Unlike our original generator and its modest list of animals, this generator inhabits a massive space because there are thousands of different nouns in this book, many of which you might never think to use in a pub name. This is good news, because it means the generator can surprise its users more often, and it may even surprise us too, the developers who designed it. When writing this section, we expected bad names to proliferate, but “The List and Generator” was pleasantly surprising. Of course, you might notice that the generator produces many duds too. “The Animal and Dent” is not a great name. It’s not even bad in an interesting way, just plain old boring, not to say incomprehensible.

Many bots rely on corpora that have not been handcrafted by their designers. If these corpora are not taken wholesale from somewhere else, they may have been automatically harvested for a particular purpose. @metaphorminute, for example, generates metaphor-shaped statements by combining two random nouns and two random adjectives in a familiar linguistic template. “A gusset is a propaganda: enfeebling and off-guard” is one example of the metaphors it generates at a rate of thirty every hour (recall that it cannot quite reach the rate of one per minute because Twitter limits how often bots can use its API). These words are pulled from a large dictionary provided by Wordnik.com rather than a corpus designed by the bot’s creator, Darius Kazemi. This combination of scale and simplicity means that @metaphorminute tweets every two minutes without ever worrying about repeating itself, but it also means that it is extremely unlikely that the bot will ever hit figurative gold. Like Borges’s Library of Babel, we can run through @metaphorminute’s tweet history for months and months, getting lost in the labyrinth of its tweets, and never find anything that makes complete sense.

But is this really a problem for @metaphorminute? It depends on what you are looking for. If we desperately needed to name fifteen thousand pubs for a brewing behemoth and each name had to be a work of genius, then expanding our word list to include every noun in the English language would probably not help very much. But @metaphorminute does not aim to generate an endless stream of metaphors that are so beautiful they bring tears to our eyes. Instead, it represents a different kind of aesthetic that finds favor with a great many Twitterbot enthusiasts. Much like the tragic readers who inhabit Borges’s library, running through its rooms and corridors to tear books off shelves in search of fleeting moments of surprise or elation whenever they find texts that are both readable and meaningful, the followers of @metaphorminute wish only to be occasionally presented with bot outputs that are mind boggling, inspiring, thought provoking, or just funny. The vastness of the bot’s possibility space and its propensity for odd metaphors are intrinsic parts of the bot’s character. This vastness makes the good finds more valuable and contributes to an overall feeling of exploration and discovery when we see what the bot does on a daily basis.

Just as in Borges’s library, each small discovery of something out of the ordinary delivers a minute thrill. For every thousand books you open, you might find just one with a readable English word. Once in every million books, you might find a readable sentence or two. If you ever found even a few hundred readable words in sequence, the experience might be so incredible and so awe inspiring that it could make the whole lifetime of imprisonment in the library seem suddenly worthwhile. Every metaphorical output by @metaphorminute that has even an ounce of sense to it feels special not because it is a good metaphor but because of the unlikelihood that you were there to see it when it was generated.

This again relates in some ways to the philosophical concept of the sublime—the sense of facing the vast indifference of nature and understanding our own insignificance in relation to it. The Victorians were particularly interested in the sublime’s presence in nature, in how small and pointless they felt when viewing huge, untamed landscapes such as the French Alps. They experienced a feeling of horror as they came to terms with scales that they could barely comprehend, let alone measure, just as we might feel a chill as we realize the odds of ever finding a readable book in the almost infinite, dusty halls of Borges’s library. Like those Victorian travelers who came back filled with existential dread, we might also feel a bit queasy contemplating being lost in such a library. Fortunately for us, we only have to worry about our generators coming up with bad names for drinking establishments. And even then, no one is forcing us to drink in those places.

What Are the Odds?

Let’s suppose that we’ve already opened up The Owl and Centaur in a nice forest village somewhere, amid a good-sized demographic of real ale drinkers and fans of J. R. R. Tolkien or Ursula Le Guin. With a name like that, our pub might well attract another kind of patron: players of Dungeons & Dragons (D&D), a pen-and-paper game that weaves interactive tales about magical adventures in fantasy worlds. For decades, D&D players have been doing precisely what we have been doing in this chapter: compiling lists of words and names and choosing among them randomly to inventively describe the parts and denizens of the worlds their games take place in. For example, to decide which languages a character speaks, a player might compile a list of languages (much as we did for animals) and roll some dice. Each roll picks out a language from the list, to randomly decide a character’s proficiency in foreign tongues.

So far, our random selection processes have been just that: quite random. Flipping a book open to an arbitrary page or stabbing your finger at an arbitrary word are random processes that are almost impossible to subvert. Rolling a die is also a good source of random numbers, and because dice are small and reliable, they are popular with folks who need randomness on demand, like our pub-visiting, centaur-loving D&D patrons. Not all randomness is created equal, however. Suppose we come up with a wonderful new business plan for a cocktail night in which our customers roll dice to select ingredients for their drinks. We can number our ingredients from 1 to 6 and let them roll a six-sided die (a D6) to decide what goes into their cocktail. Assuming the die is a fair one and it is thrown normally, each ingredient has an equal chance of being chosen.

Now suppose we expand our range of delicious generative cocktails to include twelve different ingredients instead of six. Unfortunately we can only afford the one die, so we simply ask each customer to roll the die twice to select an ingredient. Now things are a little different; for one thing, because the lowest a die can roll is 1, the smallest number we can get by summing two die rolls is 2. That means that ingredient number 1 will never be chosen. Meanwhile, we rapidly run out of ingredient number 7, because it is now selling six times as fast as ingredient number 2 or ingredient number 12. There are more ways to make the number 7 from two D6 rolls than any other number in the range (rolling 1 and 6, or 2 and 5, and so forth). Although all of these outcomes are random in the sense that we cannot predict them in advance, they are distributed differently. We still do not know what drink a customer is going to get, but we do know that some ingredients and drinks are more likely than others. (Of course, if we had a twelve-sided die, of a kind called a D12 that is often used in board games, this issue would disappear and each ingredient would again have an equal chance of being chosen.)
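You can check this lopsidedness for yourself by counting, for each total from 2 to 12, how many of the thirty-six equally likely pairs of faces produce it. A short, purely illustrative Java tally:

public class TwoDiceTally {
    public static void main(String[] args) {
        int[] ways = new int[13];  // we use indexes 2 to 12
        for (int first = 1; first <= 6; first++) {
            for (int second = 1; second <= 6; second++) {
                ways[first + second]++;
            }
        }
        for (int total = 2; total <= 12; total++) {
            // 7 can be made six ways; 2 and 12 only one way each
            System.out.println(total + ": " + ways[total] + " of 36");
        }
    }
}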

We can take this concern back to our generator of pub names too. When we discussed ways of randomly selecting animal names, we suggested flipping to a random page and choosing the least significant digit from the page number (so page 145 would give a 5). Now suppose that instead of using the least significant digit, we choose the most significant digit. What happens to our animal generator now? Well, the tenth animal will now never be chosen, because page numbers never start with a zero. The remaining nine are still all possible outcomes, but there are many more ways to choose the first few animals in the list than the rest. To understand why, consider the distribution of page numbers in this book. There are nine pages with single digits (pages 1 to 9), which all provide an equal chance of choosing any digit. Then there are 90 pages with two digits, whose leading digits are also equally distributed between 1 and 9. So up to this point, there are eleven pages that start with each of the numbers in the list. But because this book has fewer than five hundred pages, all of the remaining pages start with a 1, a 2, a 3, or a 4, and there are almost 100 of each of these pages, which greatly biases our selection. In total, there are 111 pages that begin with a 1 in this book, but perhaps only 11 that begin with a 9. All of this means that you’re much more likely to see Labradoodle in your generated pub name than you are to see Owl, because Labradoodle sits higher in our list of animals.
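The same kind of tally exposes the page number bias. The sketch below assumes a book of 430 pages, a made-up figure chosen only to be consistent with the counts above, and counts how many pages begin with each digit:

public class LeadingDigits {
    public static void main(String[] args) {
        int pageCount = 430;  // an assumed, illustrative page count
        int[] startsWith = new int[10];
        for (int page = 1; page <= pageCount; page++) {
            String digits = Integer.toString(page);
            startsWith[digits.charAt(0) - '0']++;
        }
        for (int digit = 1; digit <= 9; digit++) {
            System.out.println(digit + ": " + startsWith[digit] + " pages");
        }
        // 111 pages begin with a 1 (likewise a 2 or a 3), but only 11 begin with a 9,
        // so a most-significant-digit selector heavily favors the top of our animal list.
    }
}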

While the quirks of these strategies are well known to those who use them a great deal, such as the probability distribution of two summed six-sided dice (if you have ever played the dice game craps, this is where the various betting odds come from), other kinks in a probability distribution are easier to forget, such as the difference between the most and least significant digits in the page numbers of a book. Not many generators use page numbers as their source of randomness, but this uneven probability distribution can also rear its head in the corpora we use just as much as in our random selection methods. If we delete the word Labradoodle from our animal list and add a second Lion in its place, then Lion is twice as likely to appear as any other animal on the list. Much like many of the ideas we have touched on in this chapter, this is an easy fact to recognize when the list has just ten animals. But when we start to look at huge corpora with hundreds of thousands of items apiece, we can easily forget.

The @guardiandroid bot mashes together opinion column headlines from the British newspaper the Guardian to invent fake headlines that sound like real ones. It uses much the same mashup technique as Jeffry van der Goot’s bot mentioned already in this chapter, the one that almost got him arrested. (@guardiandroid has not caused anyone to be arrested at the time of writing.) That is, it chops up existing headlines and rearranges their various words and phrases according to the dictates of statistical analysis to manufacture new ones that retain much of the flavor of the originals. This approach can sometimes work surprisingly well, and humorous combinations of topical issues often emerge. One fake headline cries, “Michael Gove needs to be shaken up” (Gove was the UK education secretary at the time of tweeting, and a person that many would like to shake vigorously), while another pleads, “Forced marriage is a deeply malign cultural practice—but it’s not only killer whales we must protect.”

The problem is that some opinion headlines regularly use the same linguistic patterns, which means that when our bot automatically searches for column headlines, these recurring patterns will emerge multiple times from our corpus, rather like our reuse of the word Lion in our animal list. For example, the Guardian ran a regular series of columns based on the common questions people ask Google. Each headline began with a question and ended with the phrase, “You asked Google—and here’s the answer.” Because the phrase was repeated so often throughout the corpus, the bot created a disproportionately high number of headlines with this exact phrasing, since it is more likely to choose headlines with this pattern than with any other. Once we noticed this abnormality, it was relatively easy to fix by removing surplus references from the corpus. But noticing it in the first place is difficult. Often these strange bumps in our work are evident only when they appear in public-facing outputs.

Issues of probability distribution can creep into systems in strange and subtle ways, and this can make them especially hard to detect and address. Sometimes a specific probability distribution is used intentionally, as many games (such as Blackjack) hinge on an understanding of probability theory. As we shall see when building our own generative systems in this book, stumbling across the unusual outcomes of our Twitterbots is all part of the fun of building them, but knowing about these issues in advance can help us to know what to watch for and what common mistakes to suspect when something goes wrong.

Bot the Difference

So far, we have gained some familiarity with piecing texts together with templates and thinking about the weirdness that can come from generating at large scales, when the space of possible texts is uncontrollably big and the generator can make more things than we could ever review in a lifetime. We have also looked at a powerful driving force behind simple generative systems—random number generation—and how the probabilities of different results can be affected by so many different issues.

As designers of generators (which we all now are, with our little pub name generator behind us), one common way to test what our generator does and how it is performing is simply to use it to generate some outputs. We might generate one or two pub names and look at them. Do they look okay? How bad are they? If we are feeling more thorough, we might generate a few hundred. Are we bored by them? Are there any surprises? Any repetitions? Perhaps we can go much further than a hundred. Maybe we should keep generating pub names until we fall asleep with boredom or fatigue. How many should we look at before we stop?

Recall that earlier, when calculating just how large a generative space can be, we considered what would happen if we gave our simple generator a few more choices and a more ambitious template. Our template “The <adjective> <animal> and <adjective> <animal>,” when filled using just 20 adjectives and 20 names, provides 160,000 (20 × 20 × 20 × 20) potential outputs. That sounds like quite a large number. In fact, it seems so big we might want to boast a bit to our friends: 160,000 different names for pubs from a few simple lists and some tricks! That’s three times more pub names than the number of pubs operating in England right now, some of which have less-than-desirable names (and beers to match).

When we describe our pub names as “different,” we allow everyone to have their own personal interpretation of what that word might mean. To some, it means that they are as different and varied as real English pub names such as The Cherry Tree, The Duke of York, The White Hart, and The Three Crowns. To others, it might mean that they have the same overall appearance, but the words are varied and changing. They might imagine our generator having a list of hundreds of animals and adjectives to pick from instead of just twenty of each. To us, knowing how the generator works, “different” means simply that no two pub names are exactly alike. Each name has something different about it, even if it is just the order of the words. The Laughing Beetle and Tipsy Walrus is, strictly speaking, different from The Tipsy Walrus and Laughing Beetle. But it’s not as different as The White Hart is from The Duke of York.

The language we use to describe our generators is complicated and is often not given as much thought as it deserves. The video game Borderlands was released in 2009 with the claim that it contained 17.75 million generated guns, which excited many players at the thought of endless variety and discovery. In 2016, No Man’s Sky promised 18 quintillion planets to explore. But if many of those guns and planets are, for all practical purposes, indistinguishable from one another, these numbers are neither exciting nor truly meaningful. Twitterbot author and generative expert Kate Compton (@GalaxyKate) colorfully names this problem “The Ten Thousand Bowls of Oatmeal” problem.4 Yes, you can make ten thousand bowls of oatmeal for a picky customer, so that every single bowl has a uniquely specific arrangement of oats and milk, with an array of consistencies, temperatures, and densities that would flummox the choosiest Goldilocks. But all that our picky customer is actually going to see is an awful lot of gray oatmeal.

So what’s going on here, and how can we tackle this problem? The main issue is that there is a real difference between the technical, mathematical notion of uniqueness and the perceptual, subjective notion that most of us understand. The bowls of oatmeal may be unique at a very low level, but what matters is what we can perceive and remember. We might want, in an ideal world, to build a Twitterbot that always produces completely distinct concepts, but this is an unrealistic goal to aim for, even for our own creative oeuvres as humans. Instead of worrying about how to always be different, it is just as productive to focus on ways to avoid becoming overly familiar. For example, we might build our pub name generator to remember the animals and adjectives it has used recently so as not to reuse them until the entire list has been worked through. That way, a more memorable word like Labradoodle appears again in a name only after all the other animals have been used and the system resets itself. Compton calls this approach perceptual differentiation. Readers may remember with time what animals are in our generator, but as long as similar outputs do not recur too close to each other, it might take longer to appreciate the limits of the system.
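One simple way to implement this kind of memory is a “shuffle bag”: shuffle the corpus once, deal items off the top until the bag is empty, and only then reshuffle. The Java sketch below is our own illustration of the idea, not code from any of the bots or tools discussed in this book:

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Deque;
import java.util.List;

public class ShuffleBag {
    private final List<String> items;
    private final Deque<String> bag = new ArrayDeque<>();

    public ShuffleBag(List<String> items) {
        this.items = new ArrayList<>(items);
    }

    // Deal the next item; reshuffle only when every item has been used once,
    // so a memorable word like Labradoodle cannot come around again too soon.
    public String next() {
        if (bag.isEmpty()) {
            List<String> refill = new ArrayList<>(items);
            Collections.shuffle(refill);
            bag.addAll(refill);
        }
        return bag.pop();
    }
}

A generator that draws its animals and adjectives from shuffle bags rather than from raw random picks will still surprise its readers, but it will stop repeating its most distinctive words in quick succession.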

Another easy way to refresh a generator is to add more content to its corpora over time. We have already seen how adding just a few extra words to a list can significantly increase the number of possible outputs. Adding to a generator later in its life disrupts many of the patterns that its users will have become familiar with. This is good not only insofar as it adds interesting new outputs to the bot’s repertoire, but because new patterns can also break readers’ expectations of what a bot is capable of doing. Many bot builders add to their corpora over time, as in the case of a bot named @yournextgame by @athenahollow and @failnaut. New injections of content into a bot’s corpus can significantly extend the bot’s life span and enable unusual new interactions with the existing words that the bot has already worked with. For many authors, including @yournextgame’s creators, adding to a bot is an especially pleasant kind of maintenance task, much like tending to a secret garden or maintaining a beloved car. It offers us a chance to engage in a cycle of creative reflection, a chance to think about why we made a bot in the first place, and a chance to think about how we might make it better.

Differentiating your bot’s oeuvre also depends on where in the bot its variety and scale actually derive from. Suppose that instead of having twenty adjectives and twenty animals in our pub name generator, we have in fact only one adjective and one animal: Dancing and Cat. But this time instead of one template, we have 160,000 pub templates, each with slots in them for just one animal, or one adjective, or both. This list is a little hard to imagine, because it would require so much effort to compile, but we might expect that this generator is a good deal more memorable than one that uses a single template and replaces the animals and adjectives. (If Borges wrote a story about zany Twitterbot builders, he might use this as a plot hook.) The structure of a pub name is much more memorable than any single noun or adjective it contains, so changing that structure creates a greater sense of variation in the output. Our lists of animals furnish additional detail to a template, allowing readers to ponder what, for example, a walrus has to do with a beetle. Yet over time, relentless repetition of the same template can wear down its novelty, leaving us overly familiar with its slots and feeling ho-hum about their changing fillers. In contrast, tens of thousands of templates would be that much more striking, for as the detail stays the same, the wider structure is ever shifting and offering up new patterns for us to consider.

Meaninglessness is not always a thoroughly bad thing. We have already seen in this chapter that even empty generation can sometimes be a virtue of sorts, inasmuch as it offers us the same joy that is felt when discovering something readable in the Library of Babel after many years of search. There is no obviously right way or wrong way to make a generator, so when we talk about making a generator more or less meaningful, we are really talking about our own personal goals for what a generator should be capable of communicating to its followers. Yet understanding other people’s expectations for our software is important, and probably more important than understanding how to code in Java and Python or how the Twitter API works. If followers do not know what to expect from your bot, they might well hope for too much, and that way brutal disappointment lies.

Home Brew Kits

The mechanics of template filling are easy to specify, especially when we want to fill the slots of our templates with random values from predetermined sets. To ease the development of our template-based generative systems, which always revolve around the same set of core operations—pick a template, pick a slot, pick a random value to put in a slot, repeat until all slots are filled—Kate Compton, whom we met earlier when musing about the perceptual fungibility of oatmeal, has built an especially useful tool, Tracery, for defining simple systems of templates and their possible fillers.5 Indeed, Tracery has proven to be so useful for template-based generation that another prominent member of the bot community, George Buckenham, has built a free-to-use web service named Cheap Bots Done Quick (CBDQ) around Tracery that allows bot builders to quickly and easily turn a new idea for a template-based bot into working reality. His site, CheapBotsDoneQuick.com, will even host your template-based bot when it is built, sidestepping any programming concerns that might otherwise prevent your good idea from becoming reality. For CBDQ to host your bot, you will also need to provide the site with the necessary tokens of your bot’s identity that any Twitter application will need before it can tweet from a given account and handle. However, Twitter makes registering any new app—such as your new bot—easy to do (we walk you through that process in the next chapter). CBDQ simplifies the registration process even further, to a simple button click that allows it to talk to Twitter on your behalf.

CBDQ helps developers rapidly create new template-based Twitterbots by setting down their patterns of generation in a simple specification language, one that is remarkably similar to the templates we have used for our pub names and phrases in this chapter. The approach is not suited to complex bots that require a programmer’s skills, but Kate devised the Tracery system as a convenient way of describing a context-free generative process that is fun, simple, and elegant. As an example, let’s see what our pub name generator looks like when it is written out as a set of Tracery templates or rules. At its simplest, our generator needs to put two animal names into each template to make a name, or it needs to marry a single adjective to a single animal word. So in Tracery we define the following:

"origin": ["The #animal# and #animal#", "The #adjective# #animal#"],

"animal": ["Walrus", "Antelope", "Labradoodle", "Centaur", "Pussycat", "Lion", "Owl", "Beetle", "Duck", "Cow"],

"adjective": ["Happy", "Dancing", "Laughing", "Lounging", "Lucky", "Ugly", "Tipsy", "Skipping", "Singing", "Lovable"]

Tracery is a replacement grammar system. A replacement grammar comprises a set of rules that each have a head and a body. In Tracery, the head is a single-word label, such as origin or animal, and the body is a comma-separated list of text phrases. Tracery begins with a start symbol (e.g., origin), which is the head of a top-level rule. It then finds the rule body that matches that head and randomly selects a possibility from its body list. So a symbol like animal can be processed by Tracery to pick out a replacement term like walrus or beetle.

In some rules, you will notice that words are bracketed by the hash symbol #. These keywords denote the points of replacement in each template. So when Tracery encounters one of these special keywords, it checks to see if it matches the head of any other rule. If it does, it goes off and (recursively) processes that rule, replacing the keyword with the result. We can see this in action if we choose the start symbol “origin.”

"origin": ["The #animal# and #animal#", "The #adjective# #animal#"],

Notice that each rule in our Tracery grammar, except for the very last, is followed by a comma. The commas in each list denote disjunctive choice: we can choose either of the above expansions for “origin” (to yield either a name with two animals or a name with a single adjective and a single animal). Suppose that Tracery chooses the second option, “The #adjective# #animal#.” When Tracery processes this and finds the keyword “#adjective#,” it then goes to the rule with the head “adjective”:

"adjective": ["Happy", "Dancing", "Laughing", "Lounging", "Lucky", "Ugly", "Tipsy", "Skipping", "Singing", "Lovable"],

It selects a word from the body of the rule and goes back to the original pattern and replaces the keyword with this new word. This process is then repeated for the second term “#animal#” in the phrase. When Tracery finishes replacing all of the keywords, the phrase is complete and it can be returned, and so we get back “The Laughing Centaur” as our pub name. So in just three lines, we can rewrite our entire pub name generator. Moreover, the Cheap Bots Done Quick website can take this specification and turn it into a fully fledged Twitterbot. All we need to do is allow CBDQ to register our bot with Twitter so CBDQ can use its permissions to post its tweets, and we’re done. The simplicity and effectiveness of CBDQ has led to many popular bots being made using this site. It’s also a favorite for teachers looking to introduce students to generative software in a quick and snappy manner. Think of CBDQ as a support system for your ideas; if you can invent a new idea for a bot in which everything that goes into the bot’s tweets can be expressed as a series of templates—and bots such as @thinkpiecebot and @LostTesla show that quite sophisticated tweets can be generated using a well-designed system of templates—then Tracery and Cheap Bots Done Quick may be the tools for you.
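For the curious, the expansion process that Tracery performs is, at heart, nothing more than a recursive search-and-replace. The following Java sketch is a bare-bones illustration of a replacement grammar, not Tracery's actual implementation (which also offers escaping, modifiers, and proper error checking); the class and method names are our own:

import java.util.List;
import java.util.Map;
import java.util.Random;

public class TinyGrammar {
    private final Map<String, List<String>> rules;  // rule head -> list of possible bodies
    private final Random random = new Random();

    public TinyGrammar(Map<String, List<String>> rules) {
        this.rules = rules;
    }

    // Expand a symbol: pick one option from its rule at random, then recursively
    // replace every #keyword# in that option that names another rule.
    public String expand(String symbol) {
        List<String> options = rules.get(symbol);
        String text = options.get(random.nextInt(options.size()));
        for (String head : rules.keySet()) {
            String keyword = "#" + head + "#";
            int at = text.indexOf(keyword);
            while (at >= 0) {
                // each occurrence gets its own, independently chosen expansion
                text = text.substring(0, at) + expand(head) + text.substring(at + keyword.length());
                at = text.indexOf(keyword);
            }
        }
        return text;
    }
}

Feed it the three pub name rules above as a map from heads to option lists and call expand("origin"), and it will churn out names much as Tracery does. To hand the same grammar to CBDQ, you would simply wrap the three rules in curly braces to form a single JSON object, as the larger grammar later in this chapter does.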

Just Desserts

It is tempting to assume a clear distinction between the bots that we can build using Tracery/CBDQ and those that require us to write code in a conventional language such as Java or Python, but there is no reason we cannot do both. A principal reason for taking the programming route is that Tracery grammars are context-free, so that substitutions performed in one part of a rule do not inform the substitutions performed in the other parts. Context-sensitive grammars are heavy-duty formal devices that allow substitutions to be performed only on the parts of a structure that are appropriately bracketed by specific substructures, but Tracery is wise to avoid this route.6 In many cases, a concise context-sensitive grammar can be rewritten as a much larger—yet still quite manageable—context-free grammar by expanding a single context-sensitive rule into many context-free rules that capture the very same cross-dependencies. If this seems like a tiresome task that makes the programming route a compelling alternative, there is a middle ground: we can write a small amount of code to do the conversion from context-sensitive to context-free for us. That is, we can write some code to take a large knowledge base and generate a Tracery grammar that respects the contextual dependencies between different kinds of knowledge.

Suppose we want to build a bot that generates novel recipes by substituting one element of a conventional recipe for something altogether more surprising. AI researchers in the 1980s found the culinary arts to be a fruitful space for their explorations in case-based reasoning (CBR), an approach to problem solving that emphasizes the retrieval of similar precedents from the past and their adaptation to new contexts. Janet Kolodner’s JULIA7 (named for famed chef Julia Child) and Kristian Hammond’s CHEF8 both explored the automation of culinary creativity with CBR systems that could retrieve and modify existing recipes to suit new needs and fussier palates. For simplicity we focus here on the invention of new dessert variants with an unpalatable twist: our bot, and our Tracery grammar, is going to generate disgusting variants on popular treats to vividly ground the oft-misused phrase “just desserts” in culinary reality. “Just deserts,” meaning that which is justly deserved, is an Elizabethan coinage that is frequently misspelled9 as “just desserts” on Twitter and elsewhere, in part because we so often conceive of food as both a reward and a punishment. Our approach will be a simple one and use a Tracery grammar to generate a single substitution variant for one of a range of popular desserts. Let’s imagine a naive first attempt at a top-level rule, named “origin” to denote its foundational role (in Tracery we name the highest-level rule “origin” so that the system always knows where to start its generation-by-expansion process):

"origin": ["#dessert# made with #substitution# instead of #ingredient#"],

So a new dessert variant can be generated by retrieving an existing dessert (such as “tiramisu”), an ingredient of that dessert (such as “mascarpone cheese”), and a horrible substitution for that ingredient (“whale blubber,” say) to generate the variant recipe: “Tiramisu made with whale blubber instead of mascarpone cheese.” But notice that the expansions performed here are context-free. The grammar is not forced to select an #ingredient# that is actually used in tiramisu, nor is it constrained to pick a substitution that is apt for that specific ingredient. It might well have chosen “pears” for the ingredient and “spent uranium” for the substitution, revealing to the end consumer of the artifact that the system lacks any knowledge of cookery, of either the practical or wickedly figurative varieties. To make the grammar’s choices context-sensitive, we need to tie the choice of ingredient to the choice of dessert and the choice of substitution to the choice of ingredient. We can begin by defining a rule that is specific to tiramisu:

"Tiramisu": ["Tiramisu made with #Marsala wine# instead of Marsala wine", "Tiramisu made with #mascarpone cheese# instead of mascarpone cheese", "Tiramisu made with #dark chocolate# instead of dark chocolate", "Tiramisu made with #cocoa powder# instead of cocoa powder", "Tiramisu made with #coffee powder# instead of coffee powder", "Tiramisu made with #sponge fingers# instead of sponge fingers"],

This is the go-to rule whenever we want to generate a horrible variation of tiramisu for a guest who has overstayed his welcome. Notice that it defines one right-hand side expansion for every key ingredient in the dessert. These expansions make reference to subcomponents (called nonterminals) that can in turn be expanded, such as #coffee powder# and #mascarpone cheese#. The following rules provide wicked expansions for each of these nonterminals:

"coffee powder": ["black bile", "brown shoe polish", "rust flakes", "weasel scat", "mahogany dust", "baked-in oven grease"],

"mascarpone cheese": ["plaster of Paris", "spackle", "mattress foam", "liposuction fat"],

These rules make the substitutions of ingredients context-sensitive: mahogany dust may be chosen as a horrible substitute for coffee powder but never for egg white. We now need a top-level rule to tie the whole generative process together:

"dessert": ["#Almond tart#", "#Angel food cake#", "#Apple brown betty#", "#Apple Charlotte#", "#Apple crumble#", "#Banana muffins#", . . . , . . . , "#Vinegar pie#", "#Vanilla wafer cake#", "#Walnut cake#", "#White sugar sponge cake#", "#Yule log#", "#Zabaglione#"],

To generate a horrible “just dessert,” the grammar-based generator uses the dessert rule above to choose a single dessert as an expansion strategy. The rule that corresponds to that dessert is then chosen and expanded. Since that rule will tie a specific ingredient of that dessert to a strategy for replacing that ingredient, the third and final level of rules is engaged, to replace a nonterminal such as “#sponge fingers#” with “dead man’s fingers.” In all, our dessert grammar defines 283 rules: one top-level rule as above, plus one rule for each of 154 different desserts, plus one rule for each of 128 different ingredients. If this seems like a lot of rules, don’t worry, as there is little need to generate all of these rules by hand when a little programming can do the generation for us. This programming route is facilitated by a knowledge-based approach to the problem.

Suppose we create a small knowledge base for our domain of interest. It is enough to define a mapping of different desserts to their major ingredients, and another mapping from ingredients to wickedly creative substitutions for them. We can use spreadsheets to store this knowledge in a way that is easy to edit, share, and eventually process with a computer program (once the spreadsheets have been saved in a text-only format such as tab-separated values [TSV]). On the GitHub of resources for this book (accessible via BestOfBotWorlds.com) you can find two such spreadsheets, List of Desserts.xlsx and List of Ingredients.xlsx. Here is a snapshot of the first few entries in the List of Desserts.xlsx spreadsheet:

[Figure: a snapshot of the first few entries of List of Desserts.xlsx]

And here is a snapshot of the first few entries of List of Ingredients.xlsx:

[Figure: a snapshot of the first few entries of List of Ingredients.xlsx]

In the next chapter we define some basic knowledge structures for housing these data in a fashion that is easily loaded, accessed, and manipulated in memory. For now it suffices to know that once these domain mappings are loaded into a Java program (say), the corresponding Tracery grammar can be generated with a few loops in twenty or so lines of code. You can find a Java class named DessertMaker that does just this on our GitHub. From this point on, all changes to the grammar are effected by changing only the knowledge bases that feed into the grammar. The grammar itself need never be edited directly, except when we wish to add some new higher-level possibilities to our generator. For instance, we may wish to generate our just desserts for a perceived transgression that is deserving of playful punishment. In this case we might add this new rule:

"just dessert": ["I #sinned# so my just dessert is #dessert#"],

This rule requires that we write another body of rules to expand on the notion of having #sinned# in some trenchant fashion. This is a topic that will exercise our imaginations in the next chapter, where we define and exploit even more knowledge base modules to guide the actions of our Twitterbots. Meanwhile we have taken a very important first step on the road to creative bot building. By making something that makes something (that perhaps makes something else), we have recognized that a little coding can go a long way, but the journey is worth undertaking in the first place only if we also have the knowledge and the ideas to lead us to an interesting destination. However we encode them and exploit them—whether as rules or as mappings, in a database or in a spreadsheet or in a grammar—the ideas that make a Twitterbot interesting are more often found in its store of knowledge than in its source code.
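Source code still has its uses, of course, if only to spare us the tedium of writing 283 rules by hand. To give a flavor of the few loops that can turn such knowledge bases into a Tracery grammar, here is a hedged sketch in Java. It is not the DessertMaker class from our GitHub, and its method and variable names are invented; it simply assumes that the two spreadsheets have already been loaded into a map from desserts to their ingredients and a map from ingredients to their horrible substitutions:

import java.util.List;
import java.util.Map;
import java.util.StringJoiner;

public class GrammarWriter {
    // dessertsToIngredients: e.g. "Tiramisu" -> [Marsala wine, mascarpone cheese, ...]
    // ingredientsToSubs:     e.g. "coffee powder" -> [black bile, brown shoe polish, ...]
    public static String buildGrammar(Map<String, List<String>> dessertsToIngredients,
                                      Map<String, List<String>> ingredientsToSubs) {
        StringJoiner rules = new StringJoiner(",\n", "{\n", "\n}");

        // One top-level rule that lists every dessert as a possible expansion.
        StringJoiner desserts = new StringJoiner(", ");
        for (String dessert : dessertsToIngredients.keySet()) {
            desserts.add("\"#" + dessert + "#\"");
        }
        rules.add("\"dessert\": [" + desserts + "]");

        // One rule per dessert, tying each of its ingredients to that ingredient's own rule.
        for (Map.Entry<String, List<String>> dessert : dessertsToIngredients.entrySet()) {
            StringJoiner options = new StringJoiner(", ");
            for (String ingredient : dessert.getValue()) {
                options.add("\"" + dessert.getKey() + " made with #" + ingredient
                        + "# instead of " + ingredient + "\"");
            }
            rules.add("\"" + dessert.getKey() + "\": [" + options + "]");
        }

        // One rule per ingredient, listing its wicked substitutions.
        for (Map.Entry<String, List<String>> ingredient : ingredientsToSubs.entrySet()) {
            StringJoiner subs = new StringJoiner(", ");
            for (String substitute : ingredient.getValue()) {
                subs.add("\"" + substitute + "\"");
            }
            rules.add("\"" + ingredient.getKey() + "\": [" + subs + "]");
        }
        return rules.toString();
    }
}

Point loops like these at 154 desserts and 128 ingredients and all 283 rules fall out automatically; change a spreadsheet, rerun the program, and the grammar updates itself.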

Two Scoops

CBDQ’s use of Tracery deserves special mention for its seamless integration of technologies that makes the development, testing, and deployment of template-based Twitterbots almost entirely stress free. Tracery grammars can be written and tested directly on the site, and sample outputs can be generated and tweeted with the press of a button. CBDQ shields bot developers from the specifics of the Twitter interface, and will authorize your Tracery-based generator to tweet into your Twitter timeline with yet another simple button press. A bot developer who uses CBDQ/Tracery need never worry about how authentication is actually done or even about the tokens of identity that ensure this authentication. CBDQ will even allow your bot to respond to explicit mentions of it in other people’s (or other bots’) tweets, by again using Tracery to generate an apt reply. Over time, a developer of Tracery-based bots on CBDQ may build up quite a library of templates for different bot projects, and these libraries may provide fodder for new bots or for extending the generative range of an existing bot.

Let’s consider a Tracery grammar that generates pugnacious tweets in the now famous style of President Donald Trump. We can take this opportunity to present a complete Tracery grammar, from the highest nonterminal, origin, to the lowest level of lexical replacement. (Please note that this grammar and the bot it supports are not intended to give offense; they are satirical in nature.)

{
“origin”: [“Those #insult_adj# #insult_target# #insult_action#,” “my #praise_adj# #praise_target# #praise_action#”],

“insult_adj”: [“DREADFUL,” “DUMB,” “SICK,” “STUPID,” “TERRIBLE,” “DISGRACEFUL,” “REGRETTABLE,” “Covfefe-sucking,” “SAD,” “DISGUSTING,” “#FAILING,” “Sad (or Sick?)”],

“insult_action”: [“are LOSERS,” “are SAD,” “are SO SAD,” “are the WORST,” “are bad hombres,” “supported Hillary,” “are swamp dwellers,” “are low-lifes,” “are haters,” “have bad hair,” “are un-American,” “stink,” “are not like us,” “have small hands,” “hate America,” “hate the USA,” “hate US,” “spit on the constitution,” “just want your cash,” “are losers, big league,” “are good? Give ME a BREAK,” “hate America,” “carry germs,” “spread disease,” “steal our jobs,” “are bums,” “are FIRED!,” “deserve what they get,” “are unpresidented losers,” “will rip you off,” “are LOW-energy,” “cannot be trusted,” “make me sick,” “make me want to puke,” “get lousy ratings,” “pay no tax,” “rip off America,” “lean Democrat,” “have fat fingers,” “sweat too much,” “have no morals,” “hate freedom,” “are overrated,” “were coddled by Obama,” “love Obamacare,” “crippled America,” “just want handouts,” “hate our way of life,” “are greasier than Don Jr.’s hair,” “spread fake news,” “keep America weak,” “rig elections,” “watch CNN,” “read failing NYTimes,” “hate democracy,” “fear my VIGOR,” “are BAD people,” “lie more than Ted Cruz,” “are TOO tanned,” “are more crooked than Hillary,” “are SO Dangerous,” “have no soul,” “are terrible golfers,” “are worse than Rosie O’Donnell,” “are no beauty queens,” “are meaner than Hillary,” “run private email servers”],

“praise_adj”: [“SUPER,” “HUGE,” “GREAT,” “TERRIFIC,” “FANTASTIC,” “WONDERFUL,” “BRILLIANT,” “TREMENDOUS”],

“praise_action”: [“are the BEST,” “are TREMENDOUS,” “are TERRIFIC,” “make others look SAD,” “love the USA,” “love the constitution,” “will drain the swamp,” “will fund my wall,” “are winners,” “love ME,” “are great, BELIEVE ME,” “are bad? Give ME a BREAK,” “are HIGH energy,” “are real troopers,” “are great guys,” “are real patriots,” “put America first,” “will build my wall,” “would die for me,” “pay taxes,” “vote Republican,” “have manly hands,” “get MY vote,” “have no equal,” “need our support,” “need the work,” “deserve our support,” “buy Ivanka’s clothes,” “love Ivanka’s clothes,” “stay in Trump hotels,” “wear #MAGA hats,” “hate Obamacare,” “follow me on Twitter,” “watch FOXNews,” “love Melania,” “remember 9/11,” “are FANTASTIC,” “read Breitbart,” “were made in the USA”],

“insult_target”: [“#other# #worker#,” “media clowns,” “leakers,” “whistle-blowers,” “CIA hardliners,” “NYTimes employees,” “CNN liars,” “NBC hacks,” “FBI moles,” “media crooks,” “Obama cronies”],

“other”: [“Mexican,” “foreign,” “migrant,” “Chinese,” “un-American,” “Canadian,” “German,” “extremist,” “undocumented,” “communist,” “Dem-leaning,” “freedom-hating”],

“worker”: [“car-makers,” “laborers,” “cooks,” “workers,” “radicals,” “office workers,” “journalists,” “reporters,” “waiters,” “doctors,” “nurses,” “teachers,” “engineers,” “lawyers,” “mechanics”],

“praise_target”: [“current and ex-wives,” “FOX & Friends buddies,” “supporters,” “McDonald’s fans,” “coal miners,” “hard-working employees,” “business partners,” “investors,” “diehard Republicans,” “alt-right wackos,” “friends on the hard-right,” “legal children,” “human children”]
}

Once this grammar is entered into the Tracery window in CBDQ, you can test it by asking the site to construct sample tweets, such as the following:

Those Covfefe-sucking media clowns have bad hair

CBDQ also provides some decent error checking for your Tracery grammar as you write it, allowing developers to craft their bots incrementally within the cozy confines of the site, much as programmers develop their code inside an integrated development environment (IDE). Even if you graduate to the complexity of the latter to build bots that go beyond the limitations of simple context-free grammars, it pays to keep one foot firmly planted on CBDQ soil. Your stand-alone bot code may do all the running when it comes to complex outputs that tie the different parts of a tweet together in a context-sensitive fashion—imagine a bot that poses analogies and metaphors that coherently relate a subject to a theme—but this code can be just one string in your bot’s bow. In parallel, a CBDQ version of your bot may quietly tweet into the same Twitter timeline, so that the outputs of your bot come simultaneously from two different sources and two different mind-sets. Indeed, if your stand-alone code should ever fail, due to a poor Internet connection, say, you can always rely on the CBDQ/Tracery component of your bot to keep on tweeting. Thus, the Tracery grammar above is one part of the functionality of a bot we call @TrumpScuttleBot (ClockworkOrangeTrump).10 The stand-alone side, written in Java, generates tweets that give the ersatz president an ability to spin off-kilter analogies about the media and other rivals, while the CBDQ side lobs a daily tribute or insult over the same Twitter wall. We also use CBDQ to respond to any mentions of @TrumpScuttleBot in the tweets of others, so that the Java side can devote all of its energies to figurative musings.

Bot builders with clever ideas that fit snugly into the context-free confines of a Tracery grammar may see little need to move to a full programming language and may not be inclined to see such a move as a “graduation.” Over time, such bot builders may accumulate a back catalog of bots and grammars that is extensive enough to lend a degree of playful serendipity to the development process. Those past grammars are not black boxes that can be reused as self-contained modules, but cut-and-paste allows us to easily interweave old functionalities with the new. Consider, for instance, the possible relationship between a just desserts grammar and a Trump grammar: our bot president might use the former when hungrily musing about the right way to devour those in his digital cross hairs. If we cut-and-paste all but the origin rule from the desserts grammar into our new Trump grammar, it just remains for us to add an expansion like the following to origin:

“I will make those #insult_adj# #insult_target# eat #dessert#, believe me”
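The merged grammar needs no further surgery: the pasted dessert rules keep their names, and the Trump bot’s origin simply gains an extra expansion. Schematically, and assuming the pasted rules still bottom out in a #dessert# nonterminal, the combined origin might look like this:

"origin": ["Those #insult_adj# #insult_target# #insult_action#", "my #praise_adj# #praise_target# #praise_action#", "I will make those #insult_adj# #insult_target# eat #dessert#, believe me"]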

We have seen a variety of bots whose modus operandi is to cut up and mash up textual content from other sources, such as news headlines and the tweets of other Twitter users, but the cut-and-mash approach can apply just as readily to the DNA of our bots as to the specific outputs they generate. When we think of Tracery grammars and other means of text generation as defining flavors of text, our bots become free to serve up as many scoops as will fit into a single tweet.

How Much Is That Doggerel in the Window?

Every generator is, in some sense, unique. Its uniqueness depends on who is making it, what they are making it with, and what they are making it about. Ask a few friends to write down their ideas for pub names, and each will approach the task from a slightly different angle. The same goes for generators of all stripes: a variety of techniques, languages, inputs, and outputs can all be used to produce generators of different sizes, complexities, and output styles. This means that there is no one right way to build a generator, but we can surely offer some general guidelines to help you get to where you want to go.

Every generator has a starting point. Sometimes it’s an idea for creating a new kind of artifact in an already crowded space of interest, such as a bot that lampoons a strident politician or invents new ideas for video games. Sometimes it is an amazing source of data that you have just stumbled on, such as a list of a thousand video game names,11 or political donations, or common malapropisms, or a knowledge base of famous people and their foibles (we provide this last one in chapter 5). One of the best ways to start thinking about a new generator is to create some inspiring examples by hand. Creating examples manually allows you to notice patterns in the content that you might not otherwise appreciate. These patterns need not generalize across all of the examples that you produce, but you might notice two or three with something interesting in common. Thus, while most pub names do not obey the pattern “The <something> and <something>,” enough do so that they stand out when you write some down or look some up on the Internet. Patterns can help you to pick apart the layers of abstraction in your content to see what changes, what stays the same, and by how much it varies. Identifying the moving parts of what you want to generate will help you to see which parts can be extracted and handed over to an automated generator.

So let’s build a new generator by starting with an example of what we want to generate and thinking about how to gradually turn it into a generative system. This generator, like our generators of pub names and Trumpisms, will be another example of a grammar-based generator. Indeed, the term grammar can be used to label any declarative set of rules that describes how to construct artifacts from their parts, whether those artifacts are texts, images, or sounds. For our pub name generator, our rules defined a simple semantic grammar for describing how to name English pubs (by filling slots such as <animal> with the results of other rules) and how to choose an animal name (even if trivially this just means plucking it from a fixed basket of options). Grammars are especially suited to the generation of content that is highly structured and easily dissected into discrete recombinant parts. For our pub names, every name employs the same kinds of element in the same places, so that we can easily pull out the <animal> parts and the <adjective> parts and write lists to generalize them. Our Trump bot explores a world with very different flora and fauna, but it does so in much the same way.
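Rendered as a Tracery grammar, that pub-naming scheme needs little more than an origin rule and a single word list. The handful of animal entries here are just placeholders; a much richer treatment appears in the Trace Elements section at the end of this chapter.

{
  "origin": ["The #animal# and #animal#"],
  "animal": ["Lion", "Unicorn", "Weasel", "Heron", "Badger"]
}

Note that a context-free rule like this can draw the same animal twice, yielding the occasional "The Badger and Badger"; the habitat-splitting trick described in Trace Elements is one way to rule out such repetition.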

Let’s explore a popular love poem format, often called “Roses Are Red,” whose template structure yields poems that are disposable but fun. The poems are short, constrained, and heavily reliant on rhyme. Here is a typical instance:

Roses are red

Violets are blue

Sugar is sweet

And so are you.

There are a great many versions of this poem and many jokes that use the same poetic structure. Each one varies a little from the basic version above. Some variants change words or entire phrases, while others change the meaning of the poem or play with its structure. Here are a few more examples, so that we can begin to look for the common patterns that we can feed into our generator:

Roses are red

Violets are yellow

Being with you

Makes me happy and mellow.

This poem has the same sentiment as the first, but the poet has changed the rhyme in the second line by switching in a different color, in order to end on a different note. It also removes the comparison in the third line. Here is one more:

Roses are red

Violets are blue

Onions stink

And so do you.

These variants only slightly tweak the poetic format, which suggests a Tracery grammar could be used to capture the variability in form. We can also see some obvious points of replacement where we might insert some lexical variability. We can change the colors or properties of the poetic descriptors (sweetness or redness) and the things being described (sugar or roses). We can change the sentiment of the last line or the way it is expressed. So let’s start with something simple and swap out some of the words in the first two lines. Roses and Violets both play the same role in the poem: each denotes an object in the world that the poem will say something about. So to find something to replace them with, we must find what they have in common. They are both words, so we could just compile a list of words to replace them, and then choose randomly:

Yodeling are red

The are blue

Sugar is sweet

And so are you.

This poem fails to scan, though that is the least of its issues. We have also broken some English grammatical rules by being too broad in our word choice. We can be a bit more specific though. Roses and Violets are both plural nouns, so we can compile a list of plural nouns to replace them, to yield the following variant:

International trade agreements are red

Cherries are blue

Sugar is sweet

And so are you.

This approach is not all that bad, and the poem at least makes some sense. Its advantage is that lists of nouns are easy to compile and just as easy to turn into the right-hand side (the expansion side) of a Tracery grammar rule. There are thousands of nouns we can use, and it is often very easy to find lists that we can simply copy and paste into our bot. Crossword- and Scrabble-solving sites often have word lists we can download, and we might even be able to get lists directly from an online dictionary, much as @everyword did. These poems do not always make complete sense, however, so if we wanted, we could look for more specific connections. The purpose of using Roses and Violets in the poem is that each is a flower and flowers are romantic, so we could compile a new list of flowers to replace our general nouns. This might yield the following variant:

Azaleas are red

Chrysanthemums are blue

Sugar is sweet

And so are you.

Now these are much closer to the original poem, because our choice of words now carries some semantic weight, just as our list of animals worked better than a simple list of random nouns for our pub name generator. However, any gains we make in cohesive meaning come at the expense of surprise and variety. There are fewer words for flowers than there are nouns, since the former set is contained within the latter. Because we are tacitly embedding meaning into our word lists, readers will eventually adapt to the new outputs and realize that they will never see anything other than a flower in those first two lines. This is a trade-off that is fundamental to much of bot making, particularly to those bots that rely on grammars. Bot builders must decide how much consistency and high quality they are willing to trade off against surprise and variety. We could get even more specific if we so desired. “Chrysanthemums” is an awkward word that disrupts the meter of the poem, so we might compile a list of flowers whose names have just two syllables apiece. But this produces an even smaller list, leading to less variety and more repetition, even as it potentially gives us poems that sound better and look more similar to the original. So it becomes a matter of personal taste as to how far along this line a bot builder is willing to go.
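As a Tracery sketch of this flower-based variant (the flower entries below are illustrative placeholders, not a definitive list):

{
  "origin": ["#flower# are red, #flower# are blue, sugar is sweet, and so are you."],
  "flower": ["Roses", "Violets", "Azaleas", "Peonies", "Lilies"]
}

Swapping the #flower# rule for a broader noun list, or for a narrower two-syllable list, is a one-line change, which is precisely why word lists are such a convenient place to tune the trade-off between coherence and variety.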

We can do the same for the color words. First, we might use our original random word list, or we could compile a new list of adjectives, or a smaller list of adjectives that make sense for flowers (this list might include “beautiful” but omit “warlike” and “gullible”). We could also compile a list of colors of different syllable counts, such as a list of monosyllabic colors. Yet even this might not be enough:

Lilies are gold

Poppies are puce

Sugar is sweet

And so are you.

We have not even touched the last two lines, so now the poem has lost its rhyme. We could leave it like this and wait for rhyming words to naturally appear in the generator’s output, accepting that sometimes the bot’s output will rhyme and other times it will not. We could tighten our word list even further, admitting only one-syllable colors that also rhyme with “you,” but that list is likely to be extremely short. We could also sacrifice some features in exchange for others, such as removing our color constraint altogether and compiling a list of general adjectives to give us a larger pool of words to work with. For instance, we might use only the adjectives that rhyme with “you,” as in:

Lilies are gold

Poppies are new

Sugar is sweet

And so are you.

So we sacrifice one kind of quality for another. This sacrifice can take many forms, and we might use the opportunity to make our poems more pointed by reusing parts of earlier bots and their grammar bases. Suppose we wanted to reuse parts of our Trump bot for at least some variants of our poems. Instead of using word lists to provide the nouns, such as flowers (lilies and poppies and roses and violets) and the evocative descriptors (sugar, honey), we could reuse the flattering and insulting adjectives, nouns, and behaviors that form the basis of our ersatz president. We still have the problem of rhyme: if the last word of the poem is “you,” then the end of line two must also rhyme with “you.” This is such a common ask that Rhymer.com provides a list of words that rhyme with “you.”12 The words span all syntactic categories and cannot simply be dropped into the last word position without some semantic mediation. But we could take the bulk of these words and build a Tracery grammar rule to use them meaningfully:

“praise_you”: [“#praise_action# too,” “look good in a tutu,” “whipped up a great stew,” “love a cold brew,” “Trump steaks do they chew,” “on Trump Air they once flew,” “always give me my due,” “jump when I say BOO,” “never wear ties that are blue,” “hate cheese that is blue,” “my praise do they coo,” “supported my coup,” “work hard in my crew,” “my bidding they do,” “Russian pay checks they drew,” “never bleat like a ewe,” “never gave me the flu,” “number more than a few,” “rarely sniff cheap glue,” “asked me to pet their gnu,” “never cover me with goo,” “my bank balance they grew,” “to no ideology they hew,” “never laugh at my hue,” “hate liberals, who knew?,” “always bark, never moo,” “to Washington are new,” “like to kneel at a pew,” “in gold toilets will poo,” “for my inauguration did queue,” “my opponents will rue,” “are not short of a screw,” “would lick dirt from my shoe,” “from the Democrats will shoo,” “never vote for a shrew,” “run casinos (they’re 1/16th Sioux),” “to the right they will skew,” “a fearsome dragon they slew,” “no leaks do they spew,” “testify when I sue,” “an electoral curveball they threw,” “know to them I remain true,” “suggest women I can woo,” “worship me as a guru,” “made me king of the zoo,” “can count to two”]

We can now define a Tracery rule to create the second line of our poem:

“line_two”: [“#praise_target# #praise_you#”]

This will generate lines such as “diehard Republicans never laugh at my hue.” It just remains for us to define rules for the other three lines in much the same way. In fact, we might add the poetry generator as an extra dimension to our satirical Trump bot, simply by defining this additional expansion for that bot’s “origin”:

“Roses are red, violets are blue, my #praise_adj# #praise_target# #praise_action#, and #praise_you#.”

By tapping into existing Tracery rules, this will produce poetic nuggets such as:

Roses are red,

violets are blue,

my FANTASTIC friends on the hard-right are HIGH energy,

and made me king of the zoo.

We can now crack open the boilerplate of the first two lines by defining expansion points for different kinds of flowers, or at least alternatives to roses and violets:

“red_thing”: [“Roses,” “Southern states,” “Bible belt states,” “Trump steaks,” “Chinese-made ties,” “McDonald’s ketchup,” “Rosie O’Donnell’s cheeks,” “Megyn Kelly’s eyes,” “#MAGA hats,” “Trucker hats,” “Tucker Carlson’s bow ties,” “fire trucks,” “bloody fingerprints”],

“blue_thing”: [“violets,” “Democrat states,” “NYPD uniforms,” “Hillary’s pantsuits,” “secret bus recordings,” “West Coast voters,” “East Coast voters,” “secret Russian recordings,” “Smelly French cheeses”]

As an exercise, why not add some additional colors to the poem template? If “orange” is too challenging (it is one of the hardest words to rhyme), then how about “gold”? Online poetry dictionaries suggest many rhymes for “gold,” so all you need to do is define a rule for “gold_thing” and another for “praise_gold.”
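If you would like a head start on that exercise, here are a few starter entries of our own devising. They are illustrative assumptions rather than part of the repository grammar, and the rhymes all target “gold” (told, cold, sold):

"gold_thing": ["Trump Tower elevators", "Olympic medals", "Mar-a-Lago bathroom taps"],

"praise_gold": ["do exactly what they are told", "never leave ME out in the cold", "bought every steak Trump sold"]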

But suppose we want to change the last line of the poem too, using a list of words to replace “you” with plural nouns such as “aardvarks,” “politicians,” or “leaks.” Now we have unfrozen two moving parts that need to match up and rhyme, but suddenly it is all becoming just a little bit too complicated to do by randomly choosing words from lists or, for that matter, choosing expansions randomly from a context-free grammar rule. This is where building a bot in a programming language can help us. For example, if we supply our bot with access to an online rhyming dictionary, it can look up the words that rhyme with our adjective on the second line to find a suitable ending for the last line of the poem. We can then give the bot two large word lists and it will always select pairs of words, one from each, that rhyme, as in the following (a short Java sketch of this pairing step follows the example):

Lilies are gold

Poppies are puce

Sugar is sweet

And so is fruit juice.
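Here is a minimal Java sketch of that pairing step. The rhymesWith helper is a deliberately crude stand-in of our own; a real bot would replace it with a lookup against a proper rhyming resource, such as an online rhyming dictionary or a local pronouncing dictionary.

import java.util.*;

public class RhymePairSketch {
    // Crude placeholder for a real rhyme test: compares the last two letters.
    // Swap in a proper phonetic comparison for production use.
    static boolean rhymesWith(String a, String b) {
        return a.length() > 1 && b.length() > 1
            && a.substring(a.length() - 2).equalsIgnoreCase(b.substring(b.length() - 2));
    }

    public static void main(String[] args) {
        List<String> adjectives = Arrays.asList("puce", "gold", "new", "sweet");
        List<String> endings = Arrays.asList("fruit juice", "politicians", "aardvarks", "glue");

        // Collect every (adjective, ending) pair whose final words rhyme
        List<String[]> pairs = new ArrayList<>();
        for (String adj : adjectives)
            for (String end : endings)
                if (rhymesWith(adj, end.substring(end.lastIndexOf(' ') + 1)))
                    pairs.add(new String[] {adj, end});

        if (pairs.isEmpty()) { System.out.println("No rhyming pair found"); return; }
        String[] pick = pairs.get(new Random().nextInt(pairs.size()));
        System.out.printf("Lilies are gold%nPoppies are %s%nSugar is sweet%nAnd so is %s.%n",
                          pick[0], pick[1]);
    }
}

With only these toy lists, the single rhyming pair is “puce” and “fruit juice,” which reproduces the poem above; with larger lists, a smarter rhyme test, and a little grammatical agreement for plural endings, the same loop yields a steady supply of rhyming final lines.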

This little example shows just how easy it is to graduate from a few inspiring examples to the beginnings of a grammar-based generator and how trade-offs can be made to improve some areas of the generator while losing out in others. Although our examples have all been wholly textual, this approach can work for visual media as well. Images can be built out of layers that are placed on top of each other. We could, for example, make an eye-catching painted sign for our newly generated pubs by layering pictures of animals over abstract painted backgrounds over wooden sign shapes, before finally printing the pub name on top. And each of these image layers can be chosen from a list of alternatives, just as we chose words from lists in our poems and names. We leave our exploration of visual generation to chapter 7, where our bots will weave abstract images of their own design.

Generation X

In this chapter, we took a step back and a step up to think about the mechanics of generation from a wider vantage point. This has allowed us to talk through some of the thorny theoretical issues that can affect even the simplest of generators, even those made from a few lists of words and some random numbers. But these issues run through every generator, from the smallest dice rollers to the biggest, most complex AI systems. Understanding how a bot’s possibility spaces grow and change, how generation can become biased, how questions of variety and uniqueness can become deceptively tangled: all of these lessons help us to build better generators by teaching us to think critically about what we are doing.

It is tempting to think about our generators in terms of what they produce, their public outputs, because we can more easily relate to a pub name or a poem than we can relate to a complex tangle of code and word lists and pseudo-random numbers. Nonetheless, thinking about a generator as a single abstract entity, about the shapes it might possess, how it uses and molds the data we feed into it, and how its possible outputs differ in probability and in substance, reveals a great deal about how our generators operate and how we can best achieve whatever vision we have when building our own generative bots.

Making things that make meanings is not a new phenomenon, but it is newly popular in the public imagination. A growing body of creative users is gradually being exposed to the idea of software that is itself able to make diverse kinds of meaningful things. With time, more users are coming to the realization that creating a generator of meanings is not so very different from creating any other work of art or engineering. It is a delicate application of what Countess Ada Lovelace called “poetical science,” one that requires equal measures of creativity, evaluation, and refinement. As a result, ways of thinking and talking about abstract notions such as possibility spaces and probability distributions become part of the language we need to speak in order to make and play with generators of our own. Each of these abstractions will make it that much easier for us to chart new territories and break new ground with confidence and to solve the new and exciting problems we are likely to encounter along the way.

In the next chapter, we start to pull these pieces together in a code-based Twitterbot. We start, naturally, at the beginning, by registering our new bot as an official application on the Twitter platform. Once we have taken care of the necessary housekeeping, we can begin to elaborate on a practical implementation for our possibility spaces in a software setting. What was a useful abstraction in this chapter will take on a very concrete reality in the next.

Trace Elements

Kate Compton’s Tracery provides a simple means of specifying generative grammars, while George Buckenham’s CheapBotsDoneQuick.com offers a convenient way of turning these grammars into working bots. You’ll find a variety of Tracery grammars on our GitHub in a repository named TraceElements. Follow the links from BestOfBotWorlds.com, or jump directly to https://github.com/prosecconetwork/TraceElements. In a subdirectory named Pet Sounds, you will find a grammar of the same name (pet sounds.txt) that is ready to paste into the JSON window of the CheapBotsDoneQuick.com site. The grammar associates a long list of animals (more than 300) with verbs that describe their distinctive vocalizations, and pairs these animals and their sounds into conjunctions that can serve as the names of traditional pubs. With names like “The Wailing Koala and Whooshing Seaworld Whale,” however, it is perhaps more accurate to think of the grammar as a satirical play on English tradition. Look closely at the grammar—especially the first two replacement rules—and you’ll see that we have divided the animal kingdom into two nonoverlapping domains, Habitat A and Habitat B. The nonterminals #habitat_A# and #habitat_B# allow us to draw twice from the world of animals in the same tweet, with no fear of pulling the same animal name twice. Our pub-naming origin rule can now be specified as follows:

“origin”: [“The #habitat_A# and #habitat_B#”]
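Schematically, the two habitat rules look something like the fragment below. Apart from the two animals in our sample pub name above, the entries (and which habitat each animal falls into) are illustrative guesses of ours; the real grammar pairs several hundred animals with their sounds.

"habitat_A": ["Wailing Koala", "Roaring Lion", "Hissing Cobra"],

"habitat_B": ["Whooshing Seaworld Whale", "Honking Goose", "Clicking Dolphin"]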

It helps that pub names simply have to be memorable; they don’t have to mean anything at all. But can we use the same combinatorial approach to generate words or names that do mean something specific, and have our bot tweet both a novel string and an articulation of its meaning? For instance, suppose we set out to build a bot grammar that coins new words and assigns appropriate (if off-kilter) meanings to these neologisms? In the Neologisms directory of our repository, you will find a grammar for doing precisely this. The approach is simple but productive: Each new word is generated via the combination of a prefix morpheme (like “astro-”) and a suffix morpheme (like “-naut”). Associated with each morpheme is a partial definition (e.g., “stars” for “astro-” and “someone dedicated to the exploration of” for “-naut”), so when the grammar joins any prefix to any suffix it also combines the associated definition elements. The latter have been crafted to be as amenable to creative recombination as possible. A representative output of the grammar is:

I’m thinking of a word—“ChoreoGlossia.” Could it mean the act of assigning meaningful labels to rhythmic movements?

The grammar defines 196 unique prefix morphemes and 176 unique suffix morphemes, yielding a possibility space of 34,496 meaningful neologisms. These combinations are constructed and framed in rather obvious ways, so why not root around in the grammar and experiment with your own rules?
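One way to keep a morpheme’s surface form and its gloss in sync within a context-free grammar is to let each prefix expansion carry its own word part and definition fragment around a shared suffix rule. The sketch below shows the idea with two morphemes apiece; it is our own illustration of the pairing trick, not necessarily how the repository’s Neologisms grammar is actually organized.

{
  "origin": ["I'm thinking of a word: \"#coinage#"],
  "coinage": ["Astro#tail# the stars?", "Choreo#tail# rhythmic movements?"],
  "tail": ["Naut.\" Could it mean someone dedicated to the exploration of", "Glossia.\" Could it mean the act of assigning meaningful labels to"]
}

Expanding #coinage# to the Choreo entry and #tail# to the Naut entry, for example, yields: I'm thinking of a word: "ChoreoNaut." Could it mean someone dedicated to the exploration of rhythmic movements?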

The same principle applies to our Roses are Red grammar (housed in a directory of the same name) for generating four-line poems of the simple variety we discussed earlier. The poems have a satirical political quality that comes from defining a poem as a simple composition of interchangeable phrasal chunks with their own semantic and pragmatic meanings. The grammar offers a good starting point for your own experiments in Tracery. Consider adding to the replacement rules that describe political friends and foes and the satirical actions of each, or open the poem to new colors (beyond red, blue, and gold) and rhymes. Although each phrasal chunk has a rather fixed meaning, the combination of chunks often creates new possibilities for emergent nuance—or perhaps just a happy coincidence—as in:

Superman’s capes are red,

Superman’s tights are blue,

my supporters are real troopers,

and open champagne with a corkscrew

Finally, check out the subdirectory named Dessert Generator, which contains a grammar for creating wicked desserts by a process of unsanitary substitution. Like our earlier Pet Sounds grammar, this Tracery file was autogenerated from a database of real desserts and their ingredients, yet the scale and structure of the grammar invites inspection and manual modification for readers who wish to reuse its contents or lend a culinary form to their own revenge fantasies.

Notes