2 The Best of Bot Worlds

Top of the Bots

Every hour, on the hour, the tower of the Palace of Westminster in London explodes with the sounds of bells ringing as Big Ben strikes out the time in a series of resounding bongs. At the same time, humming away on a server without quite as much applause, a script sends a tweet to @big_ben_clock’s Twitter feed, with the word BONG typed out one or more times to signify the hour. More than 490,000 people follow the unofficial Big Ben Twitterbot, which has been tweeting the hour since 2009. It even updates its profile picture with images of fireworks every New Year’s Eve and little fluttering hearts on Valentine’s Day. Moreover, it continued to tweet on the hour even as the real thing went silent in August 2017 for a projected four-year period of rest and restoration.

Getting a computer to post tweets for you goes back much further than 2009, however. In chapter 1, we recalled Twitter’s very first tweet, “just setting up my twttr,” from Twitter cofounder Jack Dorsey. But even that first tweet was sent not with a web interface or a mobile app, but through a script running on Dorsey’s computer. A few months after that very first status update, Twitter released the initial version of its application programming interface (API), a special tool kit for interacting with a particular website, technology, or program that exposes all of the public functionalities of a service. Twitter’s API would let people write programs that could tweet for themselves, whether it was just a series of bongs every hour on the hour or something much more complicated.
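To make the idea concrete, here is a minimal sketch of what a bot like @big_ben_clock might do. This is our own illustration, not the real bot's code, and the commented-out `post_tweet` call is a hypothetical stand-in for whichever API client you choose:

```python
import datetime

def bong_message(now=None):
    """One BONG per hour struck, on a 12-hour clock."""
    now = now or datetime.datetime.now()
    hour = now.hour % 12 or 12  # 0 and 12 both strike twelve bongs
    return " ".join(["BONG"] * hour)

# A live bot would run this on the hour and hand the result to a
# (hypothetical) posting helper: post_tweet(bong_message())
print(bong_message(datetime.datetime(2017, 8, 21, 15, 0)))  # BONG BONG BONG
```

Everything beyond this one function (scheduling, authentication, the actual posting call) is plumbing supplied by the API and whatever machine the script runs on.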

Today, automated Twitter users, or bots, come in all flavors, shapes, and sizes. In fact, in 2017 it was estimated that as much as 15 percent of Twitter’s users were not humans but bots. Most of those Twitterbots are of a less-than-edifying variety—little automated advertisers that wander the platform to try to convince users to click on a link or look at a picture. These advertising bots watch for specific keywords and hashtags to find the right people to target, pester, or poke. But within this enormous, writhing mess of cynicism, we can find little software gems like @big_ben_clock, as unique as they are silly, designed to make us smile, frown, or think about something else for a moment. Wandering through this weird world of strange software can feel like blazing a trail through an alien jungle, but just like botanists trying to categorize new discoveries, we too can try to name families of Twitterbots and group them together according to the features and ideas that they have in common. In doing so, we can unpack the monolithic idea of a Twitterbot into different facets, each one a little easier to understand. We’ll see that bot builders have diverse reasons for making bots, and a comparable diversity holds for those who follow them too. We’ll see that bots can be playful, aggressive, thought provoking, or entirely serendipitous. Along the way, we may even get ideas for Twitterbots that do not yet exist. Our proposed taxonomy for bots in this chapter is just one possible way of classifying this amazing family of software agents. As we explore the different categories, you might forge your own connections between bots, or invent new categories that we fail to mention, or identify bots that fall into multiple categories at once. In reality, every bot is unique, so do not worry if you occasionally disagree with our groupings.

The first and simplest kind of bot is a Feed bot. Feeds are bots that tweet out streams of data, usually at regular intervals, and usually forever. Some Feed bots, such as @big_ben_clock, tweet out their own kind of data (in this case, bongs according to the current time) in a special arrangement. Other bots tweet out information from large, richly stocked databases. Darius Kazemi’s @museumbot tweets four times a day, and each tweet contains a photograph of an item from New York’s Metropolitan Museum of Art, thanks to the museum’s open-access database of its massive collection.1 Feed bots are simple and elegant, which can make them attractive options for bot builders. One of the most famous Twitterbots ever written, Allison Parrish’s @everyword, used a dictionary as its database, and it tweeted (during its lifetime) every English word in alphabetical order, two per hour, from start to finish. Today it lies dormant, having exhausted its word list, but at its peak, the bot enjoyed ninety-five thousand followers who hung on its every word. We’ll return to the strange cult of @everyword later in this chapter.

Feeds can also create new sources of data, as well as recycle data that already exist. A common kind of Feed bot is one that mixes words and phrases together, as selected from bespoke word or phrase lists that are created by the bot builder. @sandwiches_bot, now sadly dormant, generated randomized ideas for sandwiches by combining ingredients (like shredded carrots and thinly sliced chicken, or bread types such as focaccia), presentation styles (stuffed with, topped with, garnished with), and a special list of names. So you might well end up with something like this in your lunchtime tweet: “The Escondido: Focaccia stuffed with thinly sliced chicken, brie, red cabbage and watercress topped with spicy mustard.” While not every sandwich turns out to be a winner, the Escondido enjoyed two retweets and three favorites from the bot’s followers, so perhaps a user actually contemplated making one. Every day at lunchtime, the bot produced a new sandwich, and its followers delighted at seeing what might pop up next. Good or bad, it is very unlikely to be something they will have seen before on a menu.
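A combinatorial Feed bot of this kind reduces to a few lines of code. In this sketch the word lists are invented stand-ins for the hand-curated lists a builder like @sandwiches_bot's author would write:

```python
import random

# Invented word lists; the real bot's lists were curated by its builder.
NAMES = ["The Escondido", "The Alameda", "The Kingsway"]
BREADS = ["focaccia", "ciabatta", "rye bread", "a pretzel roll"]
STYLES = ["stuffed with", "topped with", "garnished with"]
FILLINGS = ["thinly sliced chicken", "brie", "shredded carrots",
            "red cabbage", "watercress", "spicy mustard"]

def random_sandwich(rng=random):
    """Combine a name, a bread, a presentation style, and three fillings."""
    f1, f2, f3 = rng.sample(FILLINGS, 3)
    return "{}: {} {} {}, {} and {}.".format(
        rng.choice(NAMES), rng.choice(BREADS).capitalize(),
        rng.choice(STYLES), f1, f2, f3)

print(random_sandwich())
```

The charm of such a bot lies almost entirely in the lists themselves: the code is trivial, but a well-chosen vocabulary makes the combinations feel plausible enough to contemplate at lunchtime.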

In 1989, the British comedy duo Stephen Fry and Hugh Laurie performed a sketch they called “Tricky Linguistics” on their TV show, in which Fry mused about the vast scale of the English language and the unique beauty this confers on any sentence.2 Fry was reveling in an insight that linguist Noam Chomsky had made famous before him, that the raw creativity of human language allows any one of us to invent, on the spot, a seemingly meaningful utterance that no other person has previously uttered or thought in human history.3 Fry’s framing is perhaps more amusing than Chomsky’s: “Our language: hundreds of thousands of available words … so that I can say the following sentence and be utterly sure that nobody has ever said it before in the history of human communication: ‘Hold the newsreader's nose squarely, waiter, or friendly milk will countermand my trousers.’” When we check Twitter at lunchtime and see that @sandwiches_bot has created another culinary masterpiece (or not), we get a little taste of what Fry is alluding to here. We get a sense that this combination may never have been seen before, and however slight that revelation might be, it tickles us. Fry’s musing also bears some resemblance to the philosophical notion of the sublime, the sense of wonder and awe that is evoked when we come face-to-face with the immensity of nature.4 Philosophers in the eighteenth century would note the extreme emotions they felt on trips through the Alps when faced with the realization of their insignificance relative to the scale of the universe and of time itself. While @everyword and @sandwiches_bot cannot compete on this romantic scale, there is an undeniable beauty associated with watching a slow and inexorable process—such as the printing of every word in the dictionary (or of every name of God)—finally come to completion.
The word sex was retweeted by 2,297 people when @everyword finally reached it, and part of the reason (beyond juvenile titillation) must surely be the shared feeling that this was always certain to happen, that these people had witnessed it, and that it was never going to happen again.

This strong connection to expectation, in terms of both the data being tweeted and our ideas about how computers work, can also produce even stronger emotions. When @everyword eventually reached the end of its list of words beginning with z (with “zymurgy,” which earned a strong 816 retweets because of this sense of finality), its followers expected the ride to be over. But an hour later, @everyword tweeted a new word: “éclair.” Replies to this tweet conveyed both anger and surprise. One follower called it the “GREATEST RUSE OF 2014,” another “utter chaos.” While they were no doubt playing up their emotions for an audience, the tweet certainly came as a huge surprise. To a speaker of English, z is the last letter of the alphabet. But to a computer, accented letters such as é have internal codes that numerically place them after the letter z. @everyword continued for another seven hours before finally coming to a genuine halt.

Qui Pipiabit Ipsos Pipiodes

Feed bots are ways to include a new kind of tweet in your feed, whether it’s daily recipes for questionable sandwiches or simply the word BONG cutting up your feed into hourly chunks. Other kinds of bots do not wait for you to come to them for information, however; rather, they come to you. We call this kind of bot a Watcher bot.

Unless your account is protected, meaning its tweets cannot be viewed without your permission, every tweet you dispatch is fired off into the void for everyone to see. Sometimes a void is exactly what Twitter feels like: an empty space where no one replies and your sentiments, no matter how desperate, vanish forever into the ether. This can be a real problem when those sentiments include a cry for help. Several Twitter users noticed this problem and wrote Twitterbots to help, such as @yourevalued by @molly0x57 and @hugstotherescue by @sachawheeler (now both inactive). @yourevalued periodically searched for the phrase “nobody loves me” in tweets, and when it found an instance, the bot replied with one of a number of random responses, including an emoji heart or the phrase “I like you.” The bot’s profile picture was a white square overwritten in black with “You Matter.” While it’s not quite the same as human affection, the bot’s responses could often be surprising or even funny to someone who was not expecting them. The bot could not change someone’s world or solve anyone’s problems, but for a brief moment, it could intervene in someone’s life to remind them that they are valued; they exist and matter, if only because someone (or something) else has taken notice of their tweets. Both @yourevalued and @hugstotherescue identified themselves as bots, either in their Twitter names or their profile biographies. This is important because it prevents the bot from posing as a real human and perhaps causing further pain by disappointing someone later on. Nonetheless, neither bot is operational at the time of writing, with @yourevalued’s bio citing a conflict with Twitter’s terms of service for its indefinite hiatus. This is our first encounter with what some bot authors call Twitterbot ethics, a code of conduct for people writing the software that lives on Twitter. We return to the question of ethics repeatedly in this book, including later in this chapter.
Suffice it to say that not all bots play by the rules, as we shall see.
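As an illustration only (not @yourevalued's actual code), a Watcher of this kind boils down to a search-and-reply loop. The `search_tweets` and `reply_to` helpers named in the comments are hypothetical stand-ins for a real API client:

```python
import random

RESPONSES = ["\u2764", "I like you.", "hey. you matter."]

def choose_reply(tweet_text, trigger="nobody loves me", rng=random):
    """Return a supportive reply if the tweet contains the trigger phrase,
    or None if the tweet should be left alone."""
    if trigger in tweet_text.lower():
        return rng.choice(RESPONSES)
    return None

# A live Watcher would periodically poll a search endpoint, along these
# lines (search_tweets and reply_to are hypothetical helpers):
#   for tweet in search_tweets('"nobody loves me"'):
#       reply = choose_reply(tweet.text)
#       if reply is not None:
#           reply_to(tweet, reply)
```

The hard part of a Watcher is not this logic but its conduct: how often it polls, whom it contacts, and whether its unsolicited replies stay on the right side of the platform's rules.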

@yourevalued quietly replied to the people whose tweets it discovered, but other bots reuse their finds in their own public tweets, in a combination of Feed and Watcher functionality. @ANAGRAMATRON by @cmyr searches for tweets that are anagrams of one another and retweets them in pairs. Because the tweets are plucked from the public feeds of real users, the results are thus unpredictable and often fascinating. It can bring a smile to your face to realize that “I hope it’s not bad man” is indeed an anagram of “time to abandon ship.” A bot we met earlier, @pentametron by @ranjit, also plays with this idea, searching for tweets that can be scanned in iambic pentameter (a poetic meter built around groupings of ten syllables, with the stresses landing on alternating syllables). It retweets these tweets as rhyming couplets that sound perfectly compatible and neat at a poetic level but also possess an unfiltered rawness that emerges from its sourcing of tweets from all corners of Twitter. While @pentametron may not see a semantic reason to pair “Sure hope tomorrow really goes my way” with “Just far too many orders to obey,” we humans are easily persuaded to find unifying reason behind the superficial rhyme.

Watchers like @ANAGRAMATRON and @pentametron are, like many other popular bots, entertainingly unpredictable. They also have another interesting quality that goes some way toward explaining their popularity, for they exude a sense that they are bringing order to the messy, chaotic, and enormous world of Twitter. While the philosophical idea of the sublime might encourage us to feel small and powerless against the scale and onslaught of Twitter’s digital Alps, Watcher bots organize and sort the millions of tweets being sent every second into neat piles. These ones rhyme. Those ones are anagrams. Even bots like @yourevalued, working away in private, are designed to fight back against the roiling flood of tweets, picking people out to remind them that they are not lost and ignored amid all the havoc of social media.

Other Watchers work with different aims in mind. @StealthMountain is not the kind of bot that you follow if you want it to notice you. When it needs to and when your behavior warrants it, the bot will find you. It searches for any tweet containing the phrase “sneak peak” (as opposed to “sneak peek,” meaning an exclusive preview) and publicly asks users if they have made a spelling mistake. It is a remarkably simple bot that repeats the same shtick time after time, but because it finds you, to point out your mistakes, the bot is not just surprising but sometimes immensely aggravating too (as one user put it, “GO AND JOIN THE GRAMMAR POLICE”). But not everyone minds, and many users reply to thank the bot. Nevertheless, @StealthMountain is a good example of a bot that does something that we humans may not feel so comfortable doing for ourselves. Watchers thus stand on somewhat shaky ground when it comes to bot ethics, because Twitter frowns on bots that send unsolicited messages to users who are not also followers. This is largely because unsolicited contact is a key tactic of the spam bots that send click bait, advertisements, or worse to thousands of users each hour. Even benign bots with a positive mission, such as @yourevalued, can fall afoul of these restrictions, as Twitter tries to grapple with where to draw the line for acceptable Twitterbot conduct. So @hugstotherescue no longer exists on Twitter, while @yourevalued remains shuttered for much the same reason.

Interaction Hero

The bots we have seen so far are relatively passive, going about their business regardless of what anyone else does. But another kind of bot is the Interactor: responsive bots with special behaviors that are designed to talk back to the users who poke them. One such Interactor is @wikisext by @thricedotted. @wikisext’s main feed is a stream of tweets that resemble the language used in sex texts, or sexts, euphemism-laden and highly suggestive texts sent privately from one person to another. @wikisext trawls the how-to pages of a website called wikiHow in its search for sentences that can be twisted into sexualized euphemisms.5 Even a how-to page about homebrewing might contain promising sentences such as, “Obtain your brewer’s license” or “Choose one or more yeast strains.” @wikisext shifts the pronouns and verb endings to rephrase these in a more suggestive style directed squarely at readers, such as, “I obtain my brewer’s license… you choose one or more yeast strains.” Because we have been told that its tweets are euphemistic, we can read sexual meanings into the bot’s most bizarre non sequiturs, which is where @wikisext gains much of its comedic power. We may not know what “you begin by touching my crucifix and praying the sign of the cross” might mean in explicit anatomical terms, but we hardly need to since readers can interpret it and visualize it to whatever degree they desire. @wikisext pushes at its limits at times, yet because it never explicitly says anything rude, it can get away with so much. This is exactly why it can be so much fun to follow on Twitter.
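The pronoun-and-person shift described above can be sketched naively as follows; the real @wikisext's rewriting rules are certainly more elaborate than this illustration:

```python
def sextify(step, person="I"):
    """Recast an imperative how-to step in the first or second person,
    swapping 'your' for the matching possessive."""
    words = step.rstrip(".").split()
    poss = "my" if person == "I" else "your"
    rest = [poss if w.lower() == "your" else w for w in words[1:]]
    return " ".join([person, words[0].lower()] + rest)

print(sextify("Obtain your brewer's license"))             # I obtain my brewer's license
print(sextify("Choose one or more yeast strains", "you"))  # you choose one or more yeast strains
```

Alternating the `person` argument across successive steps yields the back-and-forth, "I do this… you do that" cadence that gives the bot's tweets their suggestive shape.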

@wikisext is an Interactor bot because it also replies to the tweets directed to it with a new sext tweet, which is generated in the same way as the stuff of its main feed. While this may seem like a simple feature, it forms a crucial part of the bot’s appeal, because it encourages its followers to play with the euphemistic power of language too. Browse @wikisext’s tweets, and you can see conversations with the bot that extend over many tweets as replies and counterreplies shoot back and forth. Though the bot has no sense of continuity, so that subsequent tweets tend to be drawn from different how-to pages and topics, the challenge of improvising a response seems to entertain those who engage with it. While the idea of sending sexually suggestive tweets to an impersonal piece of software that reads self-improvement articles might not appeal to everyone, surely some of the aesthetics of the bot movement hinge on this joy of interaction with an unpredictable agent. For many, there is a sense of mystery in how an algorithm might work or in what it might do next. Even for seasoned programmers who can guess at a bot’s functionality, there is still a delight in poking the bot and waiting for it to poke back. Whatever will it say next?

Some bot builders take this aesthetic and make it central to a bot’s design, and one bot in particular, @oliviataters by @robdubbin, is famous (and infamous) among bot authors for this very reason. Dubbin’s intention was to create a bot that would tweet like a teenage girl. @oliviataters tweets about various teenage concerns such as dating, growing up, Taylor Swift, and selfies, as in: “i wonder by this time next year i will have asked for a selfie stick for Christmas. why? why would you?” But the bot does more than just tweet, for like @wikisext, it also responds to replies and actively seeks out new user interactions. This can mean “favoriting” or replying to tweets that it likes, or starting conversations out of the blue with its followers. While @oliviataters may have just under seven thousand followers at present, this rich interaction fosters a large measure of devotion among its fans, who personify the bot and converse with it on a regular basis. When the bot was suspended in May 2015, a small but successful campaign, complete with its own hashtag, #FreeOlivia, was launched to get it reinstated. So the Turing test be damned: @oliviataters made its followers care about a bot, which is surely one of the most powerful kinds of interaction there is.

Interacting with Twitterbots like @oliviataters and getting excited by their personalized replies is not a new affordance of technology. In the 1960s, Joseph Weizenbaum, a computer scientist at MIT, wrote a now infamous piece of software called ELIZA.6 The software, named for the leading lady in Pygmalion and My Fair Lady, was designed as an AI experiment that inadvertently became a landmark example of early interactions between humans and computers. Specifically, ELIZA was used to mimic the soothing interactions of a psychotherapist who asks pertinent questions and responds appropriately to the replies of a patient.7 ELIZA seemed quite convincing to many, drawing in otherwise intelligent humans (such as Weizenbaum’s secretary) to reveal their most private concerns. In reality, ELIZA would carefully select from a database of stock responses and lax templates, speaking in a way that frequently deflected the conversation back to the user. Responses such as, “Why do you say that?” and, “Why do you feel that way?” need little context but imply a great deal. ELIZA is a fascinating example of our relationship with software, a relationship that has evolved and become even more complicated since Weizenbaum’s day. Even when people were told precisely how ELIZA worked—that is, when they were told that the software had no real understanding of either psychology or their personal situations—they still viewed the system favorably and continued to use it.8 How we feel about a piece of software, how we personify it, and how much of ourselves we bring to interactions with it are all qualities that affect and strengthen our investment in any given piece of software. This is as true of today’s Twitterbots as it was of ELIZA in the 1960s.

We return to this idea at various junctures in this book because our strange, evolving relationship with technology is where Twitterbots sprang from, it is why they survive and flourish on social media, and it remains an integral part of their future. Some of the most exciting and unusual Twitterbot stories, as a result, emerge from Interactor bots.

Mashed Botato

Bots that tweet, bots that search, and bots that talk back: almost every bot falls into one of these categories. But there are many other labels we can apply to certain kinds of bots, to understand why they are made and what draws certain people to follow them. Instead of broad categories, these labels mark out small subgenres or niches. One populous niche is the mashup: bots that mix together different textual sources. A common mode of mashup is the eBooks-style bot. On Twitter today, the suffix ebooks in a user’s Twitter handle typically (though not always) signifies that an account is a bot account and has been set up to mimic the user whose name precedes the suffix or sounds similar to it. Thus, for example, @LaurenInEbooks is the eBooks account of @LaurenInSpace. Typically, eBooks-style bots tweet non sequitur mashups of another user’s Twitter feed, using a technique called Markov text generation (MTG).9 The MTG approach works in quite a simple and straightforward fashion: first, we feed the generator a large amount of text with the style or content we want it to mimic or replicate. In the case of a Twitterbot, we might provide our Markov generator with a list of every tweet that the target person has ever written. The algorithm then looks at each word and makes a record of the word that comes after it. In this sentence, the word the occurs twice: once followed by word and once followed by occurs. For every word the algorithm discovers, it keeps a comprehensive tally of the words it finds directly after it.

When the bot sets about generating a new tweet, a Markov generator first returns to this database of words and their tallies. It randomly picks a word to start with and looks up the following-word tallies for the word it has selected. Suppose it starts with the and then finds these following-word tallies in its database entry for the:

style 1
case 1
algorithm 2
word 2
occurs 1

In order to pick the word that comes next, the generator must randomly choose among all of the words that were tallied. The higher the tally of a particular word, the greater the chance it has of being selected, much like how buying more tickets in a lottery increases the chance of winning. Once the algorithm picks its next word, it adds it to its sentence and uses that word to look up a set of tallies for the next word after that. This process continues until the algorithm has a fresh sentence to tweet. Here are some sample sentences generated by using the Markov approach to slice, dice, and tally the text of this chapter:

“Interacting with the suffix or think about home brewing might work. Even this case, ‘BONG’s every hour on it.”

“This is important, because it never explicitly says anything, it gets away with it —and part of the Palace of Westminster in London explodes with the words ‘You Matter’ written in black text on it.”

You can see that while these texts are far from fluent, they do seem English-like, and it’s English of a kind we might associate with a human chatterbox suffering from cocktail party syndrome. The nouns and verbs are each more or less in the right place, even though the sentences themselves can sound strange, hilarious, or even nonsensical. Importantly, because the text is built out of words and patterns from a single source, much of the style, vocabulary, and tone of the original leak through. A Twitterbot that uses MTG to generate new tweets can often sound like a knockoff bootleg copy of the original Twitter user, which can lead to a serendipitous and surreal collision of words and ideas. Much of the time these bots produce unreadable nonsense, and our examples (generated from the text of this chapter) were sifted from many hundreds more that were unreadable. But Twitter is by its nature a terse medium, and a single crummy tweet is casually ignored as we scroll through our feeds. Finding a gem, however, can be extremely satisfying, a perfect collision of algorithms, humanity, timing, and chance. Witnessing a gem and having the sense of participating in a special moment is ultimately what makes following eBooks-style bots so much fun.
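For the curious, the tallying and lottery-style sampling described above fit into a short sketch. This is our own illustration using a toy corpus; a real eBooks-style bot would feed in thousands of tweets:

```python
import random
from collections import defaultdict

def build_tallies(text):
    """Map each word to a tally of the words seen directly after it."""
    words = text.split()
    tallies = defaultdict(lambda: defaultdict(int))
    for word, nxt in zip(words, words[1:]):
        tallies[word][nxt] += 1
    return tallies

def generate(tallies, start, length=12, rng=random):
    """Walk the tallies, picking each next word with probability
    proportional to its tally (the lottery tickets described above)."""
    out = [start]
    while len(out) < length:
        followers = tallies.get(out[-1])
        if not followers:  # dead end: this word only ever ended the text
            break
        choices, weights = zip(*followers.items())
        out.append(rng.choices(choices, weights=weights)[0])
    return " ".join(out)

toy_corpus = ("the word the occurs twice once followed by word "
              "and once followed by occurs")
print(generate(build_tallies(toy_corpus), "the"))
```

Note that this is a first-order model: each word is chosen by looking only at the single word before it, which is why the output is locally plausible but globally incoherent.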

By far the most famous example of this phenomenon was @horse_ebooks, the Twitterbot that first gave rise to the suffix ebooks as a colloquial marker of bots that remix tweets from other textual sources. @horse_ebooks began life as an advertising bot, seemingly designed to promote e-books about horses by tweeting links to online stores. Twitter has little love for the Twitterbots that do this and is always on the lookout for accounts that might be trying to pester and spam its users with annoying links to commercial ventures. Yet there exists a whole passel of tricks to avoid detection, with one common strategy being the use of ordinary-looking text to disguise embedded advertising links. It is a gambit that worked well for @horse_ebooks, which would randomly select phrases from the books it was pushing to tweet alongside its commercial links.

Because of the haphazard nature of these excerpts and the fact that many become non sequiturs when robbed of their context in a book, @horse_ebooks’s tweets took on a very strange sheen indeed. Some would appear as reflective, calming statements (“Suddenly, I saw the beauty and wonder of life again … I was ALIVE!”) while others were much more surreal (“Make a special sauce so your dog can enjoy the festive season” or, more simply, “How to throw a horse”). The bot quickly grew from a small spam account with few followers into a Twitter funny farm with tens of thousands of followers, all enjoying the randomness and absurdity that the bot would periodically produce. But the tale of @horse_ebooks was about to take a rather surreal turn. In late 2013, it was revealed that the account had been purchased in 2011 by a writer-artist named Jacob Bakkila in the service of his envisioned art project. Thereafter, it was alleged that Bakkila had spent the intervening time pretending to be an algorithm and writing the account’s tweets himself, even going so far as to read cheap e-books in search of inspirational non sequiturs. The last three tweets by the account were, in chronological order, a link to a YouTube video advertising the artist’s pet project, a phone number, and the words “Bear Stearns Bravo,” the title of the artist’s next work.10

Responses to these final tweets and to revelations about the account vary, but many followers expressed a mix of disappointment and disgust, in contrast to the delighted surprise and confusion that @everyword engendered. “How could you do this” asked one plaintive follower, while many others insulted and swore at Bakkila’s effrontery. @horse_ebooks remains one of the most fascinating landmarks in Twitterbot history, precisely because it was not always a bot. It reveals just how fragile our relationship with these little algorithms can be. It also reveals, perhaps, how we make ourselves vulnerable when we engage with an Interactor bot such as @oliviataters or @wikisext. We are doing something that we know is mildly silly, such as talking to our houseplants or explaining our problems to our cat. If anything disturbs that delicate relationship, it can result in a painful bruising of feelings. But @horse_ebooks will go down in history as a significant bot, and its legacy lives on in our ongoing fascination with chopped-up, mashed-up, butchered, and restitched texts. However, it is not just eBooks-style bots that mash together content on Twitter. In fact, we might consider mashups to be yet another label in our bot taxonomy. For example, @autovids by Darius Kazemi (@tinysubversions) cuts together different Vines (six-second looping videos from Twitter’s now-discontinued Vine service) and splices the whole together with a music soundtrack and some superimposed text. The resulting video is uploaded to YouTube and then posted to Twitter. Not only is this seamless combination of media technically impressive, it lets its various media sources contrast and play off each other, as when clips of kittens are spliced together with images of teenagers dancing, all seemingly choreographed to the music played in the background. This kind of mashup does not use a statistical algorithm such as MTG to blend its content.
Instead, it relies on the content itself to deliver the magic, like cutting and pasting pictures from magazines into a scrapbook.

Even when one is cutting and pasting texts willy-nilly, as opposed to using statistics to guide the construction of texts (as in MTG), there remains a certain art in choosing what to combine and where to make the incisions. An example of a bot that excels in this regard is @twoheadlines, also by Darius Kazemi. The bot pulls its headlines from popular news sources such as CNN and then intelligently finds and replaces names in the headline with trending names (of people, places, or groups) from other headlines. Because the structure of the original headline is retained, the result is a text that not only reads like a natural English sentence but delivers an additional kick from the collision of two or more figures from popular culture. Sometimes this process results in an extremely plausible headline, such as “Istanbul’s Top 10 Most Streamed Songs on Spotify Revealed” (where “Istanbul” has been inserted in place of “The Beatles”), and these understandably garner less attention in terms of retweets and favorites as a result. After all, the goal of @twoheadlines is not the generation of plausible headlines but of meaningfully incongruous headlines. Yet when the right combination of headline and target appears, the result delivers a perfect shot of surrealism. One of the bot’s most popular tweets reads: “This town has resisted Pelicans for 18 months. But food is running low.” One of the more interesting side effects of @twoheadlines’ mode of selective editing, where most of the original headline remains unchanged, is that it becomes relatively easy for readers to reverse engineer the algorithm in their heads. This is in stark contrast to the statistical eBooks-style algorithms that can be so much harder to unpick. So while a tweet from @twoheadlines might read as hilarious, it can also hint at the real horror of the underlying headline: for example, that some town is running out of food and is presumably dealing with something horrific.
This balance of sweetness and darkness adds extra depth to the account’s fanciful contrast of different news topics, one that is sometimes provocatively resonant. Here is another dark gem from @twoheadlines: “Hostages freed from Donald Trump recount gruesome torture, mock executions.”
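Here is a rough sketch of the @twoheadlines recipe, with invented headlines and hand-marked names standing in for the bot's real news feeds and name detection:

```python
import random

# Invented headlines, each paired with a hand-marked name; the real bot
# detects names automatically in live headlines pulled from news sources.
HEADLINES = [
    ("Top 10 Most Streamed Songs by The Beatles Revealed", "The Beatles"),
    ("Istanbul Braces for a Record Tourist Season", "Istanbul"),
    ("Pelicans Sign Veteran Guard to One-Year Deal", "Pelicans"),
]

def two_headlines(rng=random):
    """Keep one headline's structure but swap in a name from another."""
    (headline, name), (_, other) = rng.sample(HEADLINES, 2)
    return headline.replace(name, other)

print(two_headlines())
```

Because everything except the swapped name is left intact, readers can mentally reverse the substitution, which is part of what makes the results both funny and, at times, quietly dark.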

After repeated call outs to @tinysubversions, the main Twitter presence of bot builder Darius Kazemi, it seems timely to include a quote from Darius himself to close this section on mashup bots. Darius is a seasoned Twitterbot developer who, with many others, has contributed greatly to the community of Twitterbot builders. In one interview he described his work as “giving people a glimpse of the future by rearranging bits of the present,” a sentiment that bears some resemblance to a claim by William S. Burroughs, the codeveloper of the cutup method (with Brion Gysin), that “when you cut into the present the future leaks out.”11 Either can serve well as a charming description of Darius’s work, and each is just as apposite for our tour of the Twitterbot world. In some sense, almost all Twitterbots can be thought of in this way: each is a piece of software that hints at a future (or a parallel universe) where software can do even more than it does today, a future where we can talk to bots, send texts or sexts to bots, joke with bots, and even be inspired by bots. All of the bots we have seen so far are unique in how they work, and each aims to provide tweet-sized glimpses into an alternative world of dreams and nightmares. We’ll hear more from Darius Kazemi later, but for now we should continue on our way.

Apt Pupils

Most Twitterbots are designed to play in the broadly defined enclosures that we, their designers, create for them, either by our choice of texts for them to mash up, the specific phrases they watch for, or the rules of generation that they obey. As AI grinds ever more success stories from the technology of machine learning generally, and deep learning in particular, we can expect to see a growing army of Twitterbots that are designed to learn and evolve on the job. These Learner bots will be gradually defined by their interactions with other users of Twitter, so that what they say and the style with which they say it will develop over time. As bot builders who are familiar with our own worst instincts and the worst behaviors of others on social media, we naturally hope for the best but expect the worst when we design our bots to interact with strangers. Though we may build our Learner bot with the best of intentions, when it goes live on Twitter, we may as well be sending a shiny-shoed altar boy into the grimy world of a Mos Eisley cantina. The playful spirit of bot construction extends to those who interact with our bots, so we should prepare for our bot to learn some unpalatable realities. A case in point, as if one were needed, is provided by Microsoft’s Tay, a learning agent that was briefly embodied in a Twitterbot named @TayAndYou. Tay was designed to showcase Microsoft’s statistical language processing technologies and was thus released with enough fanfare to draw out the subversive in us all. But what befell Tay—and it all happened so quickly, in the Twitter equivalent of a car crash—was bizarrely foretold by a Hollywood film about AI, Stealth.

The movie concerns an unmanned fighter plane controlled by a learning AI named EDI and shows what can happen when an AI learns too quickly from the worst examples on offer. Early in the film, EDI learns that a good soldier can disobey orders in the right context, and from this one instance, it generalizes that it too can follow its own course and choose its own targets (in Russia) to bomb. As EDI goes rogue, its military minder anxiously calls its inventor on the phone:12

Military man:

Look, when we gave you this contract, you said shit like this couldn’t happen.

Inventor:

Once you teach something to learn, you can’t put limits on it. “Learn this, but don’t learn that.” EDI’s mind is going everywhere. He can learn from Hitler. He can learn from Captain Kangaroo. It’s all the same.

If you want your shiny new AI to learn from Hitler and Captain Kangaroo, the Tay experience suggests that you do not have to build it into a fighter plane first. Interaction with those who wanted to undermine the system meant that Tay’s statistical model was quickly corrupted by racist and sexist patterns of language that turned the fresh-faced altar boy into a nasty-minded troll. Microsoft soon deleted Tay’s account, leaving us with only secondhand reports of the bot’s most unfortunate outputs, which were laden with conspiracy theories, racist allusions, and far-right political statements.13 Tay’s slide to the dark side was so rapid that Microsoft was forced to shut down the bot within a day of going live. The lesson here is not the fragility of machine learning, which happens to offer a rather robust technology for building natural language systems, but the unpredictability of real human interaction and the naïveté of Tay’s designers, a point we return to in chapter 9.

At Swim Two Bots

We’ve sampled just a small corner of the Twitterbot universe so far, and in most cases, what we have seen are bots that work in isolation as they go about their business, whether that means looking for people to hug, bug, or tickle or mashing up the past to invent a dubious present. But the Twitterbot community is not merely composed of individuals. Bustling communities of Twitterbot authors, like the one centered around the hashtag #botally, are full of energy and life, and their members share their code, provide constructive feedback, offer ideas, and celebrate the creations of one another. This sense of playful interaction also extends to their creations, with some bots being designed specifically to respond to, or enhance, other bots, particularly those of different bot builders. For example, @botgle by @muffinista is a popular Twitterbot that periodically hosts games of Boggle, a word game where players compete to make words out of a grid of random letters. After each game has been played and @botgle’s followers have submitted their words and seen who has won, a bot named @botglestats posts a metareply to the game, containing an image filled with statistics and information about the game, including a summary of the longest words, whether any player found those words, the percentage of possible words that were found, and more. The two bots are by different authors (@botglestats is by @mike_watson), but together they create a niche community that enhances both bots and draws their shared followers together.

If @botgle’s and @botglestats’s interplay can hardly be considered pistols at dawn, other bots show more combative spirit in their interactions. @redscarebot is a Twitterbot with a very clear agenda: it searches for tweets that contain words associated with left-wing politics such as Marx or socialism and then publicly quotes those tweets along with a random choice of prebaked commentary, such as “radical beatniks” or “connect the dots.” The bot seems to have a jovial intent—its name is Robot J. McCarthy, after all—and its avatar is the infamous American politician who initiated the paranoid witch hunts that the account appears to parody. Yet the bot flouts bot etiquette in a way that rubs many bot builders the wrong way, and it can mark people discussing socialist politics as targets for very real and much less jovial right-wing Twitter users. Instead of complaining, Darius Kazemi (@tinysubversions) took a more interesting approach: he built a bot whose only purpose is to trick @redscarebot into responding and muddy the waters the bot hunts in. The bot, @redscarepot, is named for a play on the word honeypot, and its tweets employ a selection of hot-button words that call @redscarebot to action. It offers a good example of how bots can be used to influence and play with each other in their own ecosystems.

Writing a bot to generate a statement about an issue is not uncommon; in fact, we might consider Statement bots to be another entry in our bot taxonomy, deserving of a place next to feeds, mashups, and watchers. Sometimes these bots target a very particular topic, much like @redscarebot, while other times they may strive to use the power of Twitter as a platform to amplify another kind of statement-making software. This mode of amplification can be remarkably powerful. In 2014, the Wikipedia article on the MH17 Malaysian airlines disaster was edited, removing text that cited Russian terrorists as the cause of the disaster and adding in its place text that shifted blame onto the Ukrainian military. The source of this edit was a computer owned by the Russian government, a fact that first came to light when a bot spied the change and tweeted it to the public. @RuGovEdits, by @AntNesterov, tracks changes made to Wikipedia articles and matches them against computer addresses that are thought to be associated with the Kremlin, tweeting out the details of any edits that match. It is part of a family of Twitterbots—from @ParliamentEdits in the United Kingdom to @CongressEdits in the United States—that aim to record and notify people when members of a country’s government try to anonymously edit one of the world’s most important open information repositories. These metadata are hardly a secret because Wikipedia already stores details of every edit. But identifying the source of an edit gives it a special meaning in this case, and amplifying it in public using Twitter gives the information much greater potency.
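The matching at the heart of a bot like @RuGovEdits can be sketched in a few lines: compare the source address of each anonymous Wikipedia edit against a watchlist of network ranges. This is our own minimal sketch, and the address blocks below are invented for illustration; the real bot maintains its own list of ranges attributed to government institutions.

```python
import ipaddress

# Hypothetical watched address blocks (placeholders, not real attributions).
WATCHED_NETWORKS = [ipaddress.ip_network(n)
                    for n in ("95.173.128.0/19", "213.24.76.0/22")]

def is_watched(edit_ip: str) -> bool:
    """Return True if an anonymous edit's source IP falls in a watched range."""
    addr = ipaddress.ip_address(edit_ip)
    return any(addr in net for net in WATCHED_NETWORKS)

# A bot would poll Wikipedia's public recent-changes feed and tweet any match:
if is_watched("95.173.136.70"):
    print("Anonymous edit from a watched address range")
```

Because Wikipedia records the IP address of every anonymous edit, the bot needs no privileged access; the novelty lies purely in the cross-referencing and the public amplification.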

A good Statement bot need not reflect real-world data; it can also paint a counterfactual world that encourages readers to consider an alternate worldview. @NSA_PRISMbot is one such example of a speculative statement-making bot. Here is a representative tweet:

***FLAG*** @Okey_Robel mentioned “IRA” on Twitter. ***FLAG***

@NSA_PRISMbot is named for NSA’s infamous PRISM surveillance program, which covertly collects and processes data about Internet use in the United States, including information about file transfers, online chats, emails, and, yes, use of social media such as Twitter. The scale, complexity, and numbing banality of the program can make the concept of mass state surveillance difficult for many of us to process, so @NSA_PRISMbot strives to communicate what this might mean in a different way: it tweets fictional reports about the kinds of small, everyday actions that PRISM might monitor as a way of making people think about how the very nonfictional PRISM is operating right now. Yet while @NSA_PRISMbot is a clever idea, it might seem that the main thrust of the message is in the idea of the bot itself, and that following it wouldn’t really be any more effective or useful than, say, simply reading the previous paragraph and thinking about it for ten seconds. Nonetheless, there is an added frisson to be had when following @NSA_PRISMbot and its ilk, in that Statement bots sprinkle little reminders of their core message among our regular Twitter views. As we scroll past photographs of friends and idle thoughts from our favorite celebrities, we suddenly see: “Isobel Rippin of Bashirianshire, Vermont uploaded a video called DISENFRANCHISED!!! to Instagram.” In this way, the message becomes a drip-feed of reminders that everything we do and everything we read may be watched by someone else. Another Statement bot, @NRA_Tally, operates on a similar basis, but instead of tweeting about Internet monitoring, it posts fictional mass shooting reports to which it appends stock responses from the NRA, America’s pro–gun ownership National Rifle Association. 
The bot will contrast the horror of “11 moviegoers shot dead in South Carolina with a 7.62mm AK-47” with the cold indifference of a triumphalist nonapology such as, “The NRA reports a five-fold increase in membership.” @NRA_Tally provides an interesting clash for the bot’s followers to contemplate, which is not such a poor trick for a mere bot to pull off.

Beyond the Tweet

Bots crop up everywhere, and while this book is all about the botting that gets done on Twitter, we should look everywhere for inspiration, including outside the Twittersphere. Many other sites have APIs just like Twitter that allow for automated posting, downloading of data, or accessing important functions. One particularly popular home for bots is Reddit, a vast web community of people who share links and stories and vote on which ones should earn more prominence. Bots can do many things on this site, from posting updates to submitting links and messaging users; these actions are analogous to replying to tweets, tweeting, and directly messaging users on Twitter. Reddit bots are often used as handy assistants to burnish online discussions with incidental chunks of information. For instance, one bot scans for YouTube video links, finds the top YouTube comment for that video, and appends it as a comment on the Reddit page. Another bot scans popular Reddit threads with more than five links to YouTube and creates a YouTube playlist of all of these videos, posting it along with a summary of each to the thread. Branded “Reddit’s Coolest Bot,” astro-bot searches for people posting photographs of space, identifies the region of space depicted and then replies with an annotated version of the image showing major stars, planets, and clusters.14 While Twitter discussions can quickly fade because of the platform’s emphasis on brief exchanges and streams of changing information, Reddit posts have, in contrast, a good deal more permanence than Twitter updates, and this allows Reddit bots to serve a longer-term purpose beyond an initial burst of comments. Reddit also hosts a panoply of bots that are designed to interject themselves into conversations, in opposition to Twitter’s general guidelines (although Reddit does have its own set of API restrictions, they mostly warn against sending too many messages). 
The PleaseRespectTables Reddit bot watches for people using the “table flip” emoticon and replies with a similar emoticon depicting someone setting the table back down and glaring into the screen. This bot is, sadly, now suspended in circumstances that are best described as ironic, for the bot eagerly replied too many times during a Reddit discussion that celebrated good bots.

For most websites, bot has been a dirty word for a very long time. Most bots have not been designed to create new meanings or new artifacts, and neither have they been gifted with a mission to help or to amuse. Many simply send unsolicited advertisements to people, while others artificially inflate the follower counts of deceptive users, allowing reprobates to sell their bot followers to others for a few fractions of a cent each. One bot written for the blogging service Tumblr allows people to automate the process of fishing for the “follow backs” that arise when users reciprocally follow those who follow them. The bot spaces out its follows through the day to avoid detection and may even unfollow people after they have followed it to improve the ratio of followers to users followed. While many of these bots automate user activities that are perfectly legal, it is hardly surprising that social platforms have cracked down on exploitative behaviors. Everyone has their own expectations for bot behavior, including the bot makers themselves, and Twitter is no different.

How Not to Bot

Our whirlwind tour around the hot zones of the Twitterbot world has attempted to group bots by their behavior, the ideas behind them, or the way people enjoy them. There are a great many bots and a great many ideas for bots, and some species of bot have undoubtedly slipped through our butterfly net. We will, however, cover more bots and more bot builders in the rest of this book, and the joy of Twitterbots is that they are always evolving and showcasing new ideas. Twitterbot builders are an inventive lot, and there is always new ground to be explored, raising new questions that someone will be curious to answer. The tricky thing about breaking or exploring new ground, particularly when it concerns technology and humans mixing together in a vast public forum like Twitter, is that there are often substantial ethical issues to think about too. While bot creation might not be quite as terrifying as the stereotypical mad scientist playing God in a monster movie, letting autonomous software loose in society can have serious implications. The day-to-day world of Twitter is a tissue of fragile social situations, of people whose emotions are easily manipulated, and our Twitterbots are not always (or easily) created with a built-in sense of etiquette, good taste, or common sense. Where should the line be drawn for Twitterbot behavior? Bot builder Darius Kazemi, whom we’ve already met in this chapter as the creator of @twoheadlines and @museumbot, has set out some guidelines for acting ethically as a bot author.15 Each is worth considering in turn.

Darius Kazemi’s first guideline is don’t @mention people who haven’t opted in. This is a rule that @redscarebot breaks every time it pesters someone for mentioning Marxism and the like. Unsolicited mentions can be annoying, since they generate notifications, and even Twitter agrees that this is bad behavior. Many bots are banned for directly messaging users who do not already follow the bot. But unsolicited mentions can do more harm than just making your phone buzz at odd times, especially if the mentions are public, as is the case with @redscarebot. If your bot is drawing attention to specific Twitter users, it can make those users a target for very real human harassment.

His second guideline is related to the first: don’t follow Twitter users who haven’t opted in. Though human users typically welcome human followers, the automated following of someone who did not ask for it can feel as invasive as getting pinged with unsolicited messages, and Twitter may flag this kind of behavior as poor form. Advertising bots often seek out users who tweet salient keywords so as to follow them en masse (you may have encountered this phenomenon for yourself if you have ever mentioned marketing, iPhones, or other advertising buzzwords in your tweets). So these first two guidelines are really about making sure that your bot stays within its fenced-off enclosure. We have already seen some big exceptions to these rules, though, and not all of them are as questionable as @redscarebot. Even though @yourevalued searches for users and replies to them without asking for their permission, it is hard to consider the bot a nuisance. After all, @yourevalued is replying to people who are arguably crying for help. Even with these simple guidelines, we can see that there is no one-size-fits-all policy for Twitterbot ethics.

Kazemi offers two other guidelines for those of us who build Twitterbots: don’t use a preexisting hashtag and don’t go over your rate limits. Hashtags create virtual discussion spaces where users congregate to discuss a topic, making these spaces a great place for a cynical advertiser to erect a billboard. Nefarious advertising bots thus post links with popular click bait hashtags to lure people into clicking on them, but this guideline is about more than not acting like an ad bot: it is about respecting other people’s conversations and staying out of them if not invited to participate. If your bot wants to see other people, it should be interesting and fun enough to attract others to it. It should not wander over to random users like an attention-seeking toddler and foist itself into the lives and conversations of others.

Kazemi’s last guideline about rate limits is another important issue for bot builders to bear in mind. Each time a Twitterbot does anything, from following a user to posting a tweet, Twitter makes a note of it. If Twitterbots do too many things in too short a time period, Twitter will slow them down, temporarily suspend them, or even permanently ban them. Rate limiting a Twitterbot means ensuring that the bot is sufficiently self-regulating to monitor how busy it is becoming, so that it can automatically slow itself down before Twitter takes punitive action. While this guideline is important for avoiding a shutdown, it also allows bots and their builders to show respect for the environment in which they operate. If an indifferent Twitterbot is blithely posting three hundred messages a minute, each one abusing a popular hashtag or mentioning a random user, it is quickly going to become a pest. Even if Twitter allowed such things to happen, Kazemi argues that this behavior should still be avoided. Twitterbots should be a welcome addition to a community and always aim to be on their best behavior.
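The kind of self-regulation described above can be as simple as a sliding window over the bot’s own recent actions. The sketch below is a minimal throttle of our own devising, not Twitter’s actual rate-limit accounting: before each action, the bot forgets actions older than the window and, if the window is full, sleeps until the oldest one ages out.

```python
import time

class RateLimiter:
    """Allow at most `max_calls` actions per `period` seconds (a sketch)."""

    def __init__(self, max_calls: int, period: float):
        self.max_calls = max_calls
        self.period = period
        self.calls = []  # timestamps of recent actions

    def acquire(self):
        now = time.monotonic()
        # Forget actions that have aged out of the current window.
        self.calls = [t for t in self.calls if now - t < self.period]
        if len(self.calls) >= self.max_calls:
            # Pause until the oldest recorded action leaves the window.
            time.sleep(self.period - (now - self.calls[0]))
        self.calls.append(time.monotonic())

limiter = RateLimiter(max_calls=2, period=1.0)
for i in range(4):
    limiter.acquire()  # later calls pause here once the window fills
    print("tweeted", i)
```

A bot that throttles itself this way never gives Twitter a reason to intervene, which is precisely the point of Kazemi’s guideline: the bot polices its own busyness before anyone else has to.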

Twitter has its own ideas for what makes a good bot. Some of the rules are very similar to the guidelines provided by Darius Kazemi, because Twitter is sensitive to online behaviors that look like advertising or spamming, such as sending unsolicited links to other users or repeatedly sending the same message to many different users with perhaps many different hashtags. Of course, some of its motivations differ greatly from the concerns of Twitterbot makers, and they can result in some rather peculiar decisions in the name of keeping Twitter clean. Consider the tale of two very real bots whose authors noticed the same pattern of human behavior on Twitter but responded in very different ways. The pattern in question is the rather unwise trend of users posting photographs of their new credit or debit cards on Twitter. While this might seem preposterous to some, it is not an uncommon activity: naive users often send the photos to Twitter accounts run by banks, while exuberant users may simply be showing off their brand new credit card or personalized card design. @CancelThatCard is a bot that automatically detects unintentionally revealing photographs using an algorithm that can identify credit cards and numbers in images. It then replies to the user with a message alerting them that their card has been seen online, with the added suggestion that they should cancel it. It even provides them with a link to a website with more information. @NeedADebitCard is another bot that, like @CancelThatCard, detects images of credit cards online. However, it makes those images even more public by retweeting them to its more than seventeen thousand followers. The account has been featured on Forbes, the Huffington Post, and tech security company Kaspersky’s blog. Though currently suspended, many of the bot’s retweets have prompted replies from Twitter users claiming to have ordered products online using the names and numbers revealed in the photographs.

Both bots are impudent—or educated and insolent if you will—though one is very clearly more malicious than the other. Nonetheless, you might be surprised by how Twitter chose to direct its ire. @CancelThatCard tries to quietly warn a card owner without drawing additional eyeballs to a potentially costly faux pas, while @NeedADebitCard seems to revel in tough love. It teaches through harsh punishment and exacerbates a rookie’s mistakes by advertising them so widely. Ironically, it was @CancelThatCard that was first suspended by Twitter because its frequent dispatch of links to strangers, its unsolicited mentions of others, and its repeated postings of the same warnings all conspire to make its outputs read like spam. By contrast, and to a simple-minded algorithmic censor at least, the bot @NeedADebitCard operates with what seems like good etiquette. It retweets other users, thereby engaging in the social media world; it never follows anyone; and it never pesters other users directly. So when the detection of poor etiquette is automated, some rather strange judgments are sure to follow. Fortunately, at the time of writing, it is @NeedADebitCard that is suspended by Twitter, while @CancelThatCard continues to warn exuberant cardholders of their naïveté.

The fact that Twitter was at first unable or unwilling to act differently in the case of the two card bots shows how important it is that Twitterbot authors develop their own code of conduct; they need to think about how their bots will act, ask what kinds of rules they want to set for themselves, and decide when it is acceptable to break them. But as we have already seen in this chapter, the Twitterbot community is vast and full of diverse and interesting people. People use Twitter technology to analyze and disseminate data; they use it as a political tool; they use it as a playground for software; they use it as a canvas for art; and they use it for many more purposes and combinations thereof. This greatly complicates the question of ethics, because in the real world, different people play by different rules. Comedians are allowed to insult members of the audience, but it is much less acceptable for lecturers to insult their students.

Leonard Richardson, another Twitterbot author, explored this issue in a memorable essay, “Bots Should Punch Up.”16 He compares a Twitterbot to a ventriloquist’s dummy. Although society might let the dummy say things that the ventriloquist would never be allowed to say directly, the ventriloquist ultimately takes responsibility, and so there are always lines that cannot be crossed, even by a wooden doll. As Richardson puts it, “There is a general rule for comedy and art: always punch up, never punch down. We let comedians and artists and miscellaneous jesters do outrageous things as long as they obey this rule.” By “punching up” or “punching down,” Richardson is referring to those who suffer at the expense of a work, whether it be comedy, art, or social commentary. Sometimes the subject is obvious, as in an off-color joke about a religious group. Other times it is less predictable. @NeedADebitCard targets people who have made a newbie mistake by posting images of their card and its numbers online and encourages us to laugh at the mistake, or even to take advantage of it. Richardson believes this is a good example of punching down: “Is there a joke there? Sure. Is it ethical to tell that joke? Not when you can make exactly the same point without punching down.”

Richardson, like many other Twitterbot builders, isn’t averse to the idea that a Twitterbot can intentionally offend or provoke, and we can imagine many reasons why we might want to do this. Bots with a statement to make, like @NSA_PRISMbot, may well raise eyebrows or make people uncomfortable, but that is their makers’ intention. Problems emerge when authors either do not consider the people their bot is affecting, such as @redscarebot and its focus on the left-wing politics of others, or when bots are given too much autonomy and accidentally exceed what their author intended for them. The transgression of boundaries is not uncommon, as the most creative bots are designed to do precisely this, and a bot’s ability to surprise its own creator should be taken as a sign that the bot is interesting and noteworthy. This drive to build bots that can surprise us, however, is also a drive to make them unpredictable, and this can naturally yield problems in some circumstances.

Whenever we write a computer program, we naturally experience a desire to seek out useful patterns and abstractions. We have already seen ample evidence of this in the Twitterbots surveyed in this chapter. Thus, the inherent patterns of wikiHow pages allow @wikisext to transform boring English sentences—the linguistic equivalent of putting up shelves—into some delightfully euphemistic innuendo, and the reliable structure of a news headline allows @twoheadlines to play with cultural figureheads like so many Barbie or G.I. Joe dolls. Programming is a process that is replete with abstractions and patterns, because they yield programs that are more concise and more efficient. Yet when we try to apply the same kind of thinking to the real world, it can cause problems we may not foresee as programmers when thinking about the cold, rational world of data. The real world is not made from shiny bricks of LEGO. It has a great many gaps, bumps, and holes that are hard to imagine when we are thinking about an idea in the abstract. These pitfalls may become truly obvious only when an idea is let loose on thousands or millions of people.

The Story So Far

This chapter has set out to provide an overview of the world of Twitterbots as it stands today. This world is a complicated mix of ideas, people, and creative potentials. Even in the time between the final editing of this chapter and the moment this book reached your hands, many hundreds of Twitterbots will have come to life, covering new ground and breaking old preconceptions about what can be done with software or with the medium. At the same time, the communities of bot builders will also be evolving and updating their opinions on where the medium is headed and what its standards should be. This is both the difficulty and the beauty of writing about technology.

Twitterbotics is an inchoate technology that is still in the early stages of its development, and if you will pardon the pun, its current stage can be likened to that of another developing technology two centuries ago. Photography is now a staple medium of the digital age, and social media like Twitter are full to the brim with indelible visual records of the things we do, the places we go, and the people we meet. For affluent Western technology users, photography is as natural a form of communication as writing a text (and even more so than writing a letter), with apps such as Snapchat and Instagram encouraging us all to communicate primarily through this visual medium. Depending on where you are reading this chapter, there is a very good chance that you are within five feet of a camera. If you are reading a digital copy of this book, you might well be staring into one right now.

When photography was developed in nineteenth-century France, it was a very different kind of technology from the one that now lets us take a snap of our dog and send it halfway across the world in less than a second. Early photography was a complicated, messy process that imposed few accepted standards other than a need for a great deal of money and time. Practitioners were forced to adopt the role of part-time chemists, experimenting with their own ways of developing photosensitive film stock. Each approach required a different combination of chemicals that were expensive and even dangerous. But as photography grew in popularity and photo subjects became photo takers, standardized approaches to taking and developing pictures emerged. The technology would soon find its way into the hands of nonspecialists such as journalists, artists, and scientists.

Once new users gain access to a developing technology and grow in familiarity with it, two interesting milestones are reached. The first is that they are soon encouraged to graduate to newer and more complex systems that build on this acquired knowledge. So today we do much more than merely take photographs: we also edit and modify them in situ to improve the way they look. Even a simple camera app on our smartphones can readjust the lighting, balance the colors, and transform the substance of our images with fancy filters. The second milestone occurs soon after we master a new technology, when we want to subvert it too. So artists don’t simply use photography to replicate the world as it is; some find ways to create abstract images by manipulating light and shadow, while others use photography to freeze moments in time so they can better depict events and bend them to their own aesthetics and style. Each of these developments—the evolution of a new technology, its growth, its elaboration, and its eventual subversion—flows from having greater access to the technology, but they also go hand-in-hand with a deeper understanding of the original concept.

Generative software is currently still at that mid-nineteenth-century stage, where its practitioners mostly need to be part-time technologists—part chemist and part alchemist—to make sense of how it all works. Many bot aficionados work with their own custom-built tools, and though they may not present the physical dangers of volatile chemicals, they can certainly explode metaphorically if mixed without due care. We are entering a world where nothing is truly set in stone, and we still have no idea what generative software can do for the world or who might want to use it or what they might want to do with it. In some ways this is terrifying, and it can feel as if we are fumbling in the dark and unsure of where to go next. But it is also exciting, energizing, and a source of great optimism and joy, because every day, we can each go out and think of new ways to make systems that make meanings. We can build tools to help even more users to get involved, even if we may never be entirely sure of what we’ll be doing in six months’ time.

The goal of this book is to show you just one possible future for Twitterbots, just one axis along which we can develop our bots and extend their ideas and technology into something brilliant and exciting. We are going to show you how this future fits snugly alongside the many other ways that Twitterbots are being developed by other builders and how all of these strands are working together to push this medium along. We hope that in doing this, we will convince you that this world of Twitterbots is something special, something different from a silly distraction on social media, that it is in fact a blueprint for how technology and future society can integrate with one another on a larger scale.

Trace Elements

The community of bot-builders opens its arms to all comers, regardless of programming proficiency. Even if you have never written a line of code in your life—nor have any intention of ever writing one in the future—the community provides easy-to-use tools that allow you to build and launch your own Twitterbots with a minimum of fuss. In the next chapter we will look at two of these tools, provided by two of the community’s leading lights, that reduce the task of building and deploying a text-generating Twitterbot to the specification of a grammar for the bot’s textual outputs. We provide a store-cupboard of such grammars in the GitHub repository for this book, named TraceElements. In the chapters to follow you will find a section named Trace Elements that introduces the grammars we have placed online for expressing many of the ideas we are soon going to explore. When it comes to building bots quickly and simply, there really is no time like the present.
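The grammar-based approach those tools take can itself be sketched in a few lines: a grammar maps symbols to lists of alternative expansions, and generation repeatedly expands symbols until only plain text remains. The toy expander and grammar below are our own illustration of the general idea, not the format used by any particular tool.

```python
import random
import re

# A toy grammar: each symbol maps to alternatives, and #symbol# marks
# a slot to be filled by recursively expanding that symbol.
GRAMMAR = {
    "origin": ["The #adjective# #animal# #verb# at midnight."],
    "adjective": ["lonely", "radiant", "suspicious"],
    "animal": ["owl", "fox", "jellyfish"],
    "verb": ["sings", "plots", "dreams"],
}

def expand(symbol, grammar):
    """Pick one expansion for `symbol` and recursively fill its slots."""
    text = random.choice(grammar[symbol])
    return re.sub(r"#(\w+)#", lambda m: expand(m.group(1), grammar), text)

print(expand("origin", GRAMMAR))
```

Even this tiny grammar yields twenty-seven distinct sentences; a bot need only call the expander on a timer and post the result, which is exactly the economy of effort that makes grammar-driven Twitterbots so approachable.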

Notes