Storytelling and Artificial Intelligence

Adrienne Mayor

Alvin Toffler observed that if we divide the past 50,000 years of human existence into life spans of about 62 years each, our species has experienced about 800 lifetimes. For about 650 of those lifetimes, we were cave dwellers. Writing only appeared during the last 70 lifetimes. But storytelling is as old as language itself. Stories have been shared ever since the first human beings huddled around a fire together.¹

As Toffler knew, our 800 lifetimes have had a cumulative impact on ideas, beliefs, and values.² Stories contain wisdom and humor. They shape culture and instill values, cooperation, and identity. Humans have an “instinct” for storytelling that “shapes our cognitive and behavioral development.”³ As narrative creatures, we are hard-wired to hear, tell, and remember stories. Stories are retold over thousands of years as long as they summon strong, complicated emotions, as long as they still resonate with real dilemmas, and as long as they are good to think with.

About 44 lifetimes ago (more than 2,700 years ago), hopes and fears about creating artificial life, surpassing human limits, and attempting to imitate and improve nature were imagined in ancient Greek myths, which drew on even older oral tales. One could compare such myths to maps drawn by early cartographers. As Toffler noted, the first mapmakers set down bold, imaginative concepts about uncharted worlds they’d never seen. Despite inaccuracies and conjectures, those maps guided explorations into new realities. Toffler suggested that we navigate the future’s perils and promises in the same spirit as the cartographers and explorers of the past.⁴ Stories have always charted human values—might storytelling play a role in guiding artificial intelligences?

Exuberance, mixed with the anxiety evoked by blurring the boundaries between nature and machines, might seem to be a uniquely modern response to the juggernaut of scientific progress in our age of high technology. But ambivalence about making artificial life emerged thousands of years ago, in the ancient Greek world. Classical myths about Prometheus, Jason and the Argonauts, Medea, Daedalus, Hephaestus, Talos, and Pandora raised basic questions about the boundaries between biological and artificial beings.⁵ The tales helped the Greeks contemplate the promises and perils of staving off age and death, enhancing mortals’ capabilities, and replicating nature. The Greek myths spin thrilling adventures well worth knowing for their own sake. But when we recognize that some stories are inquiries into biotechne (bios, life; techne, craft), they take on new significance and seem startlingly of our moment.

Homer’s Iliad (ca 700 BC) tells how Hephaestus, the god of invention and technology, built a fleet of self-propelled carts that delivered nectar and ambrosia to the gods’ banquets, returning when empty. Animated guard dogs, automatic gates, and self-regulating bellows were among his other inventions. Talos, the great bronze killer robot powered by an internal conduit filled with ichor, was constructed by Hephaestus to defend the island of Crete. Hephaestus also made a crew of life-sized golden female assistants. These androids resembled “real young women, with sense and reason, strength, even voices” and were “endowed with all the learning of immortals,” making them the first artificial intelligence agents in Western literature. Nearly three thousand years later, AI developers aspire to achieve what the ancient Greeks imagined their god of technological invention was capable of creating in his workshop.

Hephaestus’s products of biotechne were dreamed up by a culture that existed millennia before the advent of robots that win complex games, hold conversations, write poems, analyze massive datasets, and infer human desires. But the big questions we face today are as ancient as the myths themselves: Whose desires will AI entities reflect and carry out? How and from whom will they learn?

The mythic fantasies of imitating and augmenting life inspired haunting theatrical performances and indelible illustrations in classical vase paintings, sculpture, and other artworks. Taken together, the myths, legends, and lore of past cultures about automatons, robots, replicants, animated statues, extended human powers, self-moving machines, and other artificial beings, and the historical technological wonders that followed, constitute a virtual library of ancient wisdom and experiments in thinking, a unique resource for understanding the oncoming challenges of biotechnology and synthetic life. Mythic tales about artificial life provide a historical context for warp-speed developments in artificial life and artificial intelligence—and they can enrich our discussions of the looming practical and ethical implications.

Consider Hephaestus’s fabrication of Pandora, a myth first recounted by the poet Hesiod (ca 700 BC). Pandora was a female android—“evil disguised as beauty”—commissioned by the tyrannical god Zeus to punish humans for accepting the technology of fire stolen by Prometheus. Designed to entrap humans, Pandora’s sole mission on Earth was to unseal a jar of suffering and disasters to plague humankind for eternity. She was presented as a bride to Epimetheus, known for his impulsive optimism. Prometheus urged him to reject this dangerous “gift” but, dazzled by Pandora’s deceptive beauty, Epimetheus ignored the warning. Modern Prometheans include Stephen Hawking, Bill Gates, and other thinkers who warn scientists to slow the reckless pursuit of AI because they, like Prometheus, foresee that once AI is set in motion, humans will lose control. Already, deep learning algorithms allow AI to extract patterns from vast troves of data, extrapolate to novel situations, and decide on actions without human guidance. In experiments, some AI agents have even evolved behaviors resembling altruism and deceit on their own. As AI makes decisions by its own logic, will those decisions be empathetic or ethical in the human sense?

Much like computer viruses let loose by a sinister hacker who seeks to make the world more chaotic, misfortune and evil flew out of Pandora’s jar to prey upon humans. In simple fairy-tale versions of the myth, the last thing in the jar was hope, depicted as a consolation. But in the original myth, “blind hope” prevented humans from looking ahead realistically. Deprived of the ability to anticipate the future, humankind resembles Epimetheus: foresight is not our strong point.

Yet foresight is crucial as human ingenuity, curiosity, and audacity continue to breach the frontiers of biological life and death and the melding of human and machine.⁶ Our world is, of course, unprecedented in the scale of its techno-possibilities. But the tension between techno-nightmares and grand futuristic dreams is timeless. The ancient Greeks understood that humankind will always try to reach “beyond human” while neglecting to envision the consequences. Our emotions are a double-edged sword.

In 2016, Raytheon engineers gave names from classical mythology to three miniature solar-powered “learning” robots. Zeus, Athena, and Hercules possessed the ability to move, a craving for darkness, and the capacity to recharge in sunlight.⁷ The little robots quickly learned that they must venture into the excruciating sunlight in order to recharge, or die. This simple learning conflict parallels human “cognitive economy,” in which emotions help the brain strategize and allocate resources. Other experiments are aimed at teaching AI computers how humans convey goodwill to one another and how people react to negative and positive emotions. Can AI ever be expected to possess compassion or empathy for its makers and users? Artificial empathy (AE) is the phrase used for developing AI systems—such as companion robots—that would be able to detect and respond appropriately to human emotions. The Human-Robot Interaction Laboratory at Tufts University, one of the first labs to study this conundrum, maintains that we need to program AI with the human principle of “do no harm.” The HRI lab developed a software system called DIARC (Distributed Integrated Affect Reflection and Cognition) to try to endow AI robots with a sense of empathy that would guide them to care about humans.⁸
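The robots’ predicament maps onto a familiar reinforcement-learning trade-off. The toy Python sketch below is purely illustrative and assumes nothing about Raytheon’s actual implementation: the state is a coarse battery level, darkness is rewarded but drains power, sunlight is penalized but recharges, and an empty battery is a heavily penalized terminal state. Simple tabular Q-learning then discovers that short-term pain is the price of survival.

```python
# Toy sketch of the darkness-vs-sunlight learning conflict (invented rewards,
# not Raytheon's code): tabular Q-learning over coarse battery levels.
import random

ACTIONS = ["seek_dark", "seek_light"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
Q = {(b, a): 0.0 for b in range(11) for a in ACTIONS}  # battery levels 0..10

def step(battery, action):
    """Return (reward, next_battery). Darkness is pleasant but drains power;
    sunlight is aversive but recharges; an empty battery is terminal failure."""
    if action == "seek_dark":
        battery -= 1
        reward = 1.0 if battery > 0 else -100.0  # comfort, or "death" at 0
    else:
        battery = min(10, battery + 2)
        reward = -2.0  # the "excruciating" sunlight
    return reward, battery

for _ in range(3000):                    # training episodes
    battery = 5
    for _ in range(50):                  # cap episode length
        a = (random.choice(ACTIONS) if random.random() < EPSILON
             else max(ACTIONS, key=lambda x: Q[(battery, x)]))
        r, nxt = step(battery, a)
        future = GAMMA * max(Q[(nxt, x)] for x in ACTIONS) if nxt > 0 else 0.0
        Q[(battery, a)] += ALPHA * (r + future - Q[(battery, a)])
        battery = nxt
        if battery == 0:
            break

# A low battery should now flip the preference toward the painful sunlight.
print({b: max(ACTIONS, key=lambda a: Q[(b, a)]) for b in (1, 5, 10)})
```

Under these invented rewards, the learned policy typically flips from seeking darkness on a full battery to seeking sunlight as the battery runs low, the same trade-off the essay frames as “cognitive economy.”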

Will AI entities be able to foresee consequences or recognize their own shortcomings in dealing with humans? Will AIs be aware of their own moral and emotional limits? Will AI systems know when to ask for and trust human help?⁹ These questions are crucial, especially in view of the negative effects of built-in bias in AI algorithms.

Some scientists propose that human values and ethics could be taught to AI through stories. “Fables, novels, and other literature,” even a database of Hollywood movie plots, might serve as a kind of “human user manual” for AI computers. One such system is named Scheherazade, after the heroine of One Thousand and One Nights. Scheherazade was the legendary Persian philosopher-storyteller who had memorized myriad tales from lost civilizations and saved her life by reciting these enchanting stories to her murderous captor, the king. The first stories uploaded into the Scheherazade AI were simple narratives providing examples of how to behave like good rather than psychotic humans. More complex narratives would be added to the computer’s repertoire, with the goal of enabling it to interact empathetically with human beings and respond appropriately to their emotions.¹⁰ The idea is that stories will become especially valuable once AI entities achieve “transfer learning,” the human mental tool of symbolic reasoning by analogy, allowing them to make appropriate decisions without human guidance. One obvious drawback of storytelling for AI is that humanly meaningful stories also describe negative behaviors, which will be implanted in the system’s data archives.¹¹
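A minimal sketch can make the “human user manual” idea concrete. The Python toy below is not the actual Scheherazade system; the plot skeletons, event names, and counting heuristic are all invented for illustration. It simply treats events that recur in exemplar stories of acceptable behavior as rewarded, and events no story ever modeled as penalized, so that an agent’s choices are biased toward story-sanctioned behavior.

```python
# Toy sketch of learning a reward signal from exemplar stories
# (invented data and heuristic, not the Scheherazade system).
from collections import Counter

# Hypothetical plot skeletons: each story is an ordered list of events.
exemplar_stories = [
    ["enter_bank", "wait_in_line", "withdraw_money", "leave_bank"],
    ["enter_pharmacy", "wait_in_line", "pay_for_medicine", "leave_pharmacy"],
]

def build_reward_model(stories):
    """Count how often each event occurs in stories of acceptable behavior;
    frequent events earn positive reward, events never modeled earn a penalty."""
    counts = Counter(event for story in stories for event in story)
    return lambda event: float(counts[event]) if event in counts else -1.0

reward = build_reward_model(exemplar_stories)

def choose_action(candidate_events, reward_fn):
    """Pick the candidate whose event the story corpus rewards most."""
    return max(candidate_events, key=reward_fn)

# The agent prefers the behavior its stories modeled and shuns the shortcut
# that no exemplar story ever contained.
print(choose_action(["wait_in_line", "rob_bank"], reward))  # -> wait_in_line
```

A system of this kind also inherits the drawback noted above: feed it stories full of bad behavior, and the same machinery will happily reward the robbery.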

Another concern is that human minds do not work just like computers. Cognitive functions, self-reflection, rational thinking, and ethical decisions depend on complex emotions. Stories appeal to emotions through pathos, the root of empathy, the sharing of feelings. Empathy is an essential human trait that so far has not been simulated in robots and AI. One intriguing approach is being studied by Yi Zeng of the Research Center for Brain-Inspired Intelligence, Chinese Academy of Sciences, who proposes that AI systems be modeled on the human brain to ensure a better chance at mutual understanding between us and AI.¹²

Stories, over the ages, are “the most powerful means available to our species for sharing values and knowledge across time and space.”¹³ The Greeks and other ancient societies spun tales about artificial life to try to understand humankind’s perpetual yearning to exceed biological limits—and to imagine the consequences of those desires. More than two millennia later, our understanding of AI can still be enriched by the visions and dilemmas posed in the age-old myths. Humans are experts at infusing old stories with new meaning and making up new stories to fit novel circumstances. This raises an intriguing possibility. Might myths and stories about artificial life in all its forms and across many cultures play a role in teaching AI to better understand humankind’s conflicted yearnings? Perhaps someday AI entities could benefit from knowing mortals’ most profound wishes and fears as expressed in ancient Greek mythic narratives about AI creations. Through learning that humans foresaw their existence and contemplated some of the quandaries the machines and their makers might encounter, might AI entities comprehend—even “empathize” with—the dilemmas that they pose for mortals?

Scientists also wonder whether AI can ever “learn” human creativity and inspiration. AI has learned to write (bad) “poetry,” but could AI weave narratives that strike human chords, evoke emotion, and enlighten? As early as 1976, James Meehan developed TALE-SPIN, an AI storytelling program that could create simple tales about problem-solving, along the lines of Aesop’s fables and fairy tales.¹⁴ But what about tragedies, jokes both silly and clever, irony, repartee, improvisational comedy, and cathartic black humor? These abilities turn out to be essential for human mental and emotional well-being, for rapport and cooperation. Accordingly, futurists recognize that good storytellers and comedians should be included in space exploration teams.¹⁵ Will our companion AI systems be able to participate and contribute in these crucially human ways?
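For flavor, here is a toy Python sketch in the spirit of such planning-based story generators. It is not Meehan’s program, and the characters, goals, and rules are invented: the story emerges as a side effect of a character satisfying the preconditions of a goal, narrated step by step.

```python
# Toy story generator in the spirit of TALE-SPIN (invented rules, not Meehan's
# code): a tale falls out of goal-directed problem-solving by a character.
RULES = {
    # goal: (subgoal that must be satisfied first, sentence narrating the step)
    "quench_thirst": ("reach_river", "{name} drank from the river."),
    "reach_river": ("know_way", "{name} walked down to the river."),
    "know_way": (None, "{name} asked the owl for directions."),
}

def plan(goal, name, story):
    """Means-end chaining: satisfy the precondition, then narrate the action."""
    subgoal, sentence = RULES[goal]
    if subgoal:
        plan(subgoal, name, story)
    story.append(sentence.format(name=name))

story = ["One day, Bear was thirsty."]
plan("quench_thirst", "Bear", story)
print(" ".join(story))
# -> One day, Bear was thirsty. Bear asked the owl for directions.
#    Bear walked down to the river. Bear drank from the river.
```

Meehan’s actual system reasoned about characters’ beliefs and plans in far richer detail; the point of the sketch is only that a simple fable can fall out of goal-directed problem-solving.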

The rise of a Robot-Artificial Intelligence “culture” no longer seems far-fetched. AI’s human inventors and mentors are already building the Robot-AI culture’s logos (logic), ethos (moral values), and pathos (emotions). As human minds and bodies and cultural outpourings are enhanced and shaped by technology and become more machine-like, perhaps robots might become infused with something like humanity. We are approaching what some call the new dawn of Robo-Humanity.¹⁶ When that time comes, what myths and stories will we—and AI—be telling ourselves and each other?

Adrienne Mayor is a research scholar in the Classics Department and History and Philosophy of Science Program at Stanford University. Her books include Gods and Robots: Myths, Machines, and Ancient Dreams of Technology (2018); The Amazons: Lives and Legends of Warrior Women across the Ancient World (2014); The First Fossil Hunters: Dinosaurs, Mammoths and Myths in Greek and Roman Times (2000); Greek Fire, Poison Arrows & Scorpion Bombs: Biological and Chemical Warfare in the Ancient World (2003); and The Poison King: Mithradates, Rome’s Deadliest Enemy (2009 National Book Award nonfiction finalist).

1 Alvin Toffler, Future Shock (Random House, 1970), 14. Parts of this essay are adapted from the Epilogue in Mayor, Gods and Robots: Myths, Machines, and Ancient Dreams of Technology (Princeton University Press, 2018). Other cultures besides the Greeks also imagined automatons and AI-like entities in myths.

2 Toffler 1970, 16-17.

3 Lucy King, “Storytelling in a Brave New World,” Artificial Intelligence Magazine, Dec 3, 2018.

4 Toffler 1970, 6.

5 Mayor 2018.

6 Toffler 1970, 196-97, discusses melding human and machine.

7 Raytheon: http://www.raytheon.com/news/feature/artificial_intelligence.html

8 Shannon Fischer, “AI Is Smart. Can We Make It Kind?” Tufts Magazine (Spring 2019). Isaac Asimov, I, Robot (New York: Gnome Press, 1950).

9 Sarah Scheffler, Adam D. Smith, and Ran Canetti, “Artificial Intelligence Must Know When to Ask for Human Help,” The Conversation, March 7, 2019. https://theconversation.com/artificial-intelligence-must-know-when-to-ask-for-human-help-112207

10 Kanta Dihal agrees that “stories can teach compassion and empathy,” but letting AIs read fiction will not “help them understand humans,” because compassion and empathy do not arise from “deep insights into other minds.” Dihal, “Can We Understand Other Minds? Novels and Stories Say No,” Aeon, Sept 5, 2018. Our own “capacity to imagine other minds is extremely limited,” especially in science fiction, in which descriptions of extraterrestrials and AI entities resort to anthropomorphic fantasies. “Embodiment” is how we understand one another, so it is very difficult for us, and for AI, to imagine inhabiting the physical body of someone else if one cannot actually feel the sensations experienced by another body. Instead, Dihal maintains that only a “glimpse” of the felt experience of otherness allows us to empathize with a conscious lifeform profoundly alien to us, to have an impulse to keep it from harm and even communicate in some way. But how this key “glimpse” might be extrapolated to programming AI entities toward “feeling” empathy with alien humans remains to be seen. Compare the famous mind-body thought experiment by Thomas Nagel, “What Is It Like to Be a Bat?” Philosophical Review 83 (Oct 1974): 435-50.

11 Scheherazade AI: Alison Flood, “Robots Could Learn Human Values by Reading Stories,” The Guardian, Feb 18, 2016; http://www.news.gatech.edu/2016/02/12/using-stories-teach-human-values-artificial-agents; http://realkm.com/2016/01/25/teaching-ai-to-appreciate-stories/. Adam Summerville et al., “Procedural Content Generation via Machine Learning (PCGML),” 2017, arXiv preprint arXiv:1702.00539, pp 1-10. https://arxiv.org/pdf/1702.00539.pdf

12 Yi Zeng, Enmeng Lu, and Cunqing Huangfu, “Linking Artificial Intelligence Principles,” Safe Artificial Intelligence (AAAI SafeAI-2019) Workshop, Jan 27, 2019. http://ceur-ws.org/Vol-2301/paper_15.pdf. Yi Zeng’s webpage: http://brain-eng.ia.ac.cn/~yizeng/

13 George Zarkadakis, In Our Own Image: Savior or Destroyer? The History and Future of Artificial Intelligence (New York: Pegasus, 2015), 27, 305.

14 James R. Meehan, “TALE-SPIN, An Interactive Program That Writes Stories,” Proceedings of the 5th International Joint Conference on Artificial Intelligence, vol 1, IJCAI (San Francisco: Morgan Kaufmann, 1977), pp 91-98. Mark Owen Riedl and Vadim Bulitko, “Interactive Narrative: An Intelligent Systems Approach,” AI Magazine 34, no. 1 (2013). https://www.aaai.org/ojs/index.php/aimagazine/article/view/2449

15 Ian Sample, “Jokers Please: First Human Mars Mission May Need Onboard Comedians,” The Guardian, Feb 15, 2019. King 2018.

16 “Dawn of RoboHumanity”: Faith Popcorn, “The Humanoid Condition,” in The Economist special issue “The World in 2016,” pp 112-13.