Ted Chiang’s novella “The Lifecycle of Software Objects” (2010) tells the story of what it calls digients: intelligent “digital organisms” – avatars or embodied agents – that inhabit an online virtual world known as Data Earth. These entities are produced by a startup called Blue Gamma, using a software “genomic engine” called Neuroblast that “supports more cognitive development than anything else currently out there”. The digients are initially designed to be digital pets: artificial beings that “you can talk to, teach to do really cool tricks… All the fun of monkeys, with none of the poop-throwing”. Digients are able to sense, respond to, and interact with the objects they encounter in their virtual environment. People can adopt them, train them, play with them, and even hold conversations with them. In order to appeal to buyers, the digients are designed to have “charming personalities”. They are also given “cute avatars”, appearing as anthropomorphic baby animals or “neo-Victorian” robots.
“The Lifecycle of Software Objects” takes place at an unspecified time in the very near future. It follows the careers of two Blue Gamma employees as they help to develop, and then care for, the company’s digients. Ana Alvarado is a former zookeeper. She leverages her experience with animal training and primate communication in order to mold, or at least influence, the digients’ behavior. Derek Brooks is a professional animator; his job is to design the digients’ bodies – which is to say the look and feel of their avatars – “in a way that people can relate to”.
Over the course of years, Ana and Derek find that their lives, both personal and professional, are increasingly focused on the digients – even to the detriment of their “real life” relationships with other human beings. Ana cares for a robot avatar named Jax; Derek for two panda-bear avatars, Marco and Polo, who look the same, but “have distinctly different personalities”. Ana’s and Derek’s involvement with their digients only gets stronger in the course of the novella; they are “motivated by love” for the entities in their charge, much more than just by the requirements of their jobs. “The Lifecycle of Software Objects” is concerned both with the nature of these unusual sentient beings, and with the ethics of how we might relate to them.
The novella’s digients are recognizable descendants of such currently-existing “software objects” as Tamagotchis, chatbots, non-player characters in video games, and “digital assistants” like Siri. Chiang extrapolates from recent developments in the gaming industry, as well as in artificial intelligence (AI). The story is speculative, since it envisions the development of software intelligence well beyond its actual current capabilities. Yet the extrapolation seems entirely plausible, since all the things the digients do in the novella have ample precedents today. In Chiang’s account, intelligent software doesn’t require a major new technological breakthrough; it comes into being as an incremental extrapolation of what we already know how to do.
The difference between actually existing software and the type imagined in the novella is largely a matter of generality. Chiang’s digients have a well-rounded overall sensibility, rather than any particular skills. In contrast, digital agents today are limited to certain specialized tasks, like recognizing faces, translating text, driving cars, and playing games like chess and Jeopardy®. Indeed, software agents have done all of these things remarkably well. But the abilities of such programs cannot easily be transported from one realm of expertise to another. The program for playing chess at a grandmaster level is of no help when it comes to writing a program that can play Go.
Also, despite their particular successes, AI programs today still have enormous difficulties in dealing with shifting contexts and with ambiguities. Our current expert systems still largely operate by brute force, through the intensive processing of massive, pre-given datasets. As the novella puts it, in “old-fashioned AI” the machines’ “skills are programmed rather than learned, and while they offer some real convenience, they aren’t conscious in any meaningful sense”. In short, these systems have little flexibility or spontaneity. They work deductively, without a great deal of imagination. Siri only seems smart and responsive if you don’t push the boundaries of what she has been specifically programmed to do. Today’s digital agents do not have anything like a general, all-purpose intelligence; and this is probably why they are not really conscious.
In contrast, “The Lifecycle of Software Objects” imagines a near future in which software-based intelligence has become general, rather than domain-specific. The digients can easily shift their attention from one context to another. Their Neuroblast software “genes” do not contain massive amounts of information or hardwired symbolic instructions. “A character’s gait and its gestures” are not prescribed and programmed in advance. Rather, the digients’ behaviors and forms of action are “emergent properties of the genome”. The digients are endowed with the capacity to learn gradually, much as human and animal babies do. They are able to interact with their virtual environment, and to modify themselves by learning from experience.
When we first meet the digients, they are speaking babytalk and playing with simple objects. Indeed, “newly instantiated” digients know almost nothing:
It takes them a few months subjective to learn the basics: how to interpret visual stimuli, how to move their limbs, how solid objects behave.
But they gradually “learn through positive reinforcement, the way animals do, and their rewards include interactions like being scratched on the head or receiving virtual food pellets”. This learning is then bootstrapped as the digients’ horizons expand, and as they become more mature. After a while, they are able to move around and engage in more complex behaviors. The digients spontaneously show a basic curiosity about their environment. They make friends with one another, as well as with human-controlled avatars. They understand and respond to suggestions from their human handlers, even if, as one of the developers confesses, “we aren’t always able to get these guys to do what they’re told”.
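The learning regime described here is, in effect, reward-driven trial and error. A minimal sketch of my own may make the logic concrete; nothing in it comes from Chiang’s Neuroblast, and the action names, reward values, and update rule are purely illustrative assumptions:

```python
# A minimal sketch of reward-driven learning, in the spirit of the passage
# above. Nothing here is Chiang's "Neuroblast": the actions, reward values,
# and update rule are all illustrative assumptions of my own.
import math
import random

actions = ["come_here", "fetch_block", "ignore_handler"]
preference = {a: 0.0 for a in actions}      # the agent's learned inclinations

def choose():
    # The agent mostly does what has been rewarded, but still explores.
    weights = [math.exp(preference[a]) for a in actions]
    return random.choices(actions, weights=weights)[0]

def reward_for(action):
    # The handler "scratches the head" only for the desired behaviors.
    return 1.0 if action in ("come_here", "fetch_block") else 0.0

for episode in range(2000):
    a = choose()
    # Nudge the inclination for the chosen action toward the reward received.
    preference[a] += 0.05 * (reward_for(a) - preference[a])

print(preference)   # rewarded behaviors drift toward 1.0; the third stays near 0
```

The point is only that nothing is written in by hand; the inclinations accrete, one rewarded choice at a time, over many episodes.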
Eventually, the digients learn how to read, and how to surf the Internet. By the end of the novella, they are much more than digital pets or toys, for they have developed rich social lives. They are concerned about their place in the world and their future prospects. And they meet and interact with human beings who do not care that they are just software:
The digients are socializing with human adolescents in various online communities… The adolescents who dominate these communities seem unconcerned with the fact that the digients aren’t human, treating them as just another kind of online friend they are unlikely to meet in person.
The relative autonomy of the digients conforms to “Blue Gamma’s philosophy of AI design”. This states that “experience is the best teacher, so rather than try to program an AI with what you want it to know, sell ones capable of learning and have your customers teach them”. The digients are capable of quite a lot, but educating them requires considerable time and patience. You can shorten the process somewhat by first running the software in a “hothouse”, at accelerated speeds. In that case, the digients’ “subjective” time is compressed into a shorter amount of real time. But beyond a certain point, such acceleration is “not a viable shortcut” any longer. In order for the digients to develop properly, “someone is going to have to spend time with them”. For now, this “someone” has to be an avatar controlled in real time by a human player. Obviously, this entails that the training must take place on a human time scale. “Complex minds can’t develop on their own… For a mind to even approach its full potential, it needs cultivation by other minds”. Training the digients is therefore extremely hands-on and labor-intensive.
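A rough calculation shows why the hothouse shortcut bottoms out. The novella gives no figures, so the numbers below are assumptions of my own, chosen only to make the shape of the argument visible: whatever fraction of a digient’s formative time requires a live human runs at one second per second, and so sets a floor on the wall-clock cost no matter how fast the rest is accelerated.

```python
# Illustrative arithmetic only: the novella gives no figures, so the numbers
# here are assumptions chosen to make the shape of the argument visible.
def real_time_years(subjective_years, human_fraction, speedup):
    """Wall-clock years needed to give a digient `subjective_years` of
    experience, when a fraction `human_fraction` of that experience requires
    a live human (and so runs at 1x), while the remainder can run in a
    "hothouse" at `speedup` times real time."""
    human_part = subjective_years * human_fraction
    hothouse_part = subjective_years * (1 - human_fraction) / speedup
    return human_part + hothouse_part

for speedup in (1, 10, 100, 1000):
    print(speedup, round(real_time_years(20, 0.5, speedup), 2))
# 1 -> 20.0, 10 -> 11.0, 100 -> 10.1, 1000 -> 10.01: however fast the hothouse
# runs, the hours that require a human in real time set a hard floor.
```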
Indeed, when AI researchers experiment with leaving groups of digients on their own, in the hope that they will learn from one another without human contact, the results are disappointing:
Every test population eventually goes feral. The digients don’t have enough aggression in them to descend into “Lord of the Flies”-style savagery; they simply divide into loose, nonhierarchical troops. Initially, each troop’s daily routines are held together by force of habit – they read and use eduware when it’s time for school, they go to the playgrounds to play – but without reinforcement these rituals unravel like cheap twine. Every object becomes a toy, every space a playground, and gradually the digients lose what skills they had.
What this means is that pretty much the same logic applies to raising digients as it does to raising children and training pets. Both digients and children have an innate capacity to learn; but neither can develop this capacity without some sort of organized social guidance. You don’t start out being self-sufficient; it’s an ability that needs to be nurtured and developed. In either the physical world or the virtual world, a lot of experience is necessary. You cannot properly wire neurons or generate code without it. And a sensitive and intelligent entity, whether carbon-based or code-based, cannot flourish without some sort of guidance from more-experienced elders:
Experience isn’t merely the best teacher, it’s the only teacher… there are no shortcuts; if you want to create the common sense that comes from twenty years of being in the world, you need to devote twenty years to the task. You can’t assemble an equivalent collection of heuristics in less time; experience is algorithmically incompressible.
Chiang thus gives us an account of software-based intelligence that is mundane, low-key, gradualist, and continuist. There is no special turning point in the course of the story: no dramatic moment at which artificial intelligence passes a threshold and becomes self-aware for the first time. Intelligence is rather a matter of degree, as well as of developmental process. The digients’ mentality exists on a continuum with that of animals and human beings, as well as with that of less complex machines. Presumably the digients could pass the Turing Test; but there is no reason to give them such a test, as they function just fine in human environments without it. The digients’ intelligence is broad rather than deep; and it is also socially-based, rather than solitary. The digients are different from human beings in many respects; but they are able to operate, and even thrive, in the large and complex context of Data Earth, and then on the Internet more generally.
There is an important distinction here, which sheds a new light on the question of experience as I discussed it in Chapter One. In his response to the story of Mary, David Lewis explicitly mocks the notion that “experience is the best teacher”. He argues that, insofar as the brain state of having had a particular experience is different from the brain state of not having had that experience, anything that creates such a change of brain state – for instance, “super-neurosurgery” or “magic” – will work just as well as actually having the experience in question. It’s just a matter, in effect, of reconnecting the neurons, or rewriting the brain’s software, in the proper way. Lewis therefore argues that “finding out what an experience is like” need not be limited only to cases in which one actually “has the experience”. In a way, Lewis’ argument is even stronger in the case of digients than it would be in the case of human beings. Changing a few lines in a software program is more transparent and straightforward than rewiring synaptic connections and neurons. There is a lot we still do not know about how neural connections work, and how the physical brain is connected to the experiential mind. But as the digients are entirely determined by their source code, we need not be distracted by worrying about levels of functioning that we do not understand.
But Chiang’s implicit formulation of the problem also shows what is wrong with Lewis’ approach. If “experience is algorithmically incompressible”, then there is no way to shorten the time and effort, or to run the process more efficiently. Having experiences just is the way that the electrochemical circuits of the brain get rewired, or the digients’ software code gets rewritten. In Chiang’s novella, the code of a digient can easily be copied or cloned; similarly, in many science fiction works (as in Lewis’ speculations) a brain state or mental state can be copied or transferred from one embodied entity to another, or even from an organic body to a dispersed computer network. But none of this obviates the necessity of generating the code or the brain state – by having the actual experiences – in the first place.
Lewis is able to imagine magic or neurosurgery taking the place of actual experience, because he does not think that the experience is anything in and by itself. As we saw in Chapter One, for Lewis “experience” only matters insofar as it helps to produce a new mental disposition, like the ability to recognize a particular color. Lewis is not even particularly interested in how experience works as the proximate cause of the disposition. This is because Lewis holds, following Hume, that “if we ignore the laws of nature, which are after all contingent, then there is no necessary connection between cause and effect: anything could cause anything”. His position here is very close to Quentin Meillassoux’s formulation of “Hume’s problem”:
The same cause may bring about a hundred different events (and even many more)… the obvious falsity of causal necessity is blindingly evident.
Both Lewis and Meillassoux, following Hume, appeal to the principle that anything not logically contradictory is therefore possible. As I argue elsewhere, this principle confuses mere logical possibility with virtuality (to use Deleuze’s vocabulary), or general potentiality with real potentiality (to use Whitehead’s vocabulary). Logical possibility or general potentiality encompasses everything that is not ruled out by logical contradiction. Virtuality, or real potentiality, involves more than this. It means that there is an explicit way to get from here to there, that there could be a pathway or “historical route” (Whitehead) between them. Virtuality or real potentiality really exists in the present, as mere logical possibility does not. Potentiality is “real without being actual”, as Deleuze says; or, “the future is merely real, without being actual”, as Whitehead puts it.
Speculative extrapolation – or the exploration of real rather than general potentiality – is the very basis both of actual scientific research, and of science fiction. Meillassoux notes that “every science fiction implicitly maintains the following axiom: in the anticipated future it will still be possible to subject the world to a scientific knowledge”. The flow of causality still holds. And science-fictional extrapolation works this way: as Meillassoux shows in his analysis of a short story by Isaac Asimov, and as remains the case in Chiang’s fiction.
Getting around the constraints of extrapolative speculation would involve engaging not in science fiction per se, but rather in the almost-nonexistent genre that Meillassoux calls extro-science fiction. This latter genre is concerned, Meillassoux says, with worlds “whose irregularity is sufficient to abolish science, but not consciousness”. The unreliability of cause and effect would undermine scientific experimentation; but “daily life could always build on stabilities that are certainly very relative, but still sufficiently powerful to allow a conscious existence”. Meillassoux shows a certain embarrassment as he struggles to find any actual literary instance of extro-science fiction; he finally comes up with an obscure French novel written during World War II by a Vichy collaborator.
Though Meillassoux himself seems unaware of it, his idea is actually taken up (in advance) by Joanna Russ, in her science fiction short story, “What Did You Do During the Revolution, Grandma?” (1983). In this story, parallel universes are gradated by a factor called Ru, which measures their degree of causal consistency. At 1.0 Ru, “the relation of cause to effect is absolute and absolutely reliable”. However, at lesser values of Ru, “the meshing of effect and cause goes loose and sleazy”. People can still live in worlds whose Ru is 0.877 or higher; below that, “we find unpeopled Earths”. The narrator believes that her own world stands at Ru 1.0; but in the course of the story, she discovers that this is actually not the case. Causality may fall apart for her as well, if not to such a degree as it does in the lower-Ru worlds. Russ’ story in effect inverts Meillassoux, by folding the possibility of extro-science fiction back within a still-science-fictional context.
Russ’ story indicates the difficulty of actually moving from guided science fictional extrapolation to the absolute randomness of Meillassoux’s “hyperchaos”, or Lewis’ principle of contingency. Regardless of whether the so-called “laws of nature” are necessary or contingent in principle, we cannot in practice just wave away what is happening right now. To a large degree, even if not absolutely, we need to accept what Whitehead calls the “conformation of present fact to immediate past”, and therefore recognize that speculation is always constrained by what Whitehead calls “stubborn fact which cannot be evaded”. When Lewis reasons on the basis of mere logical possibility, he simply skips over the truly difficult part of his argument: the need for him to actually describe a plausible causal process that could change the wiring of the brain, or the software code, in the same way as experience does.
There is an implicit analogy in Lewis’ discussion of experience. He is conceiving human minds in terms of computer functioning; he seems to be thinking of the ease of replicating software. In the world of Chiang’s novella, as in actual computer technology, you can always copy the file that instantiates a digient, and thereby get a new entity that is absolutely identical to its original. You can also “suspend” a digient (turn it off for a while, so that no subjective time passes for it), or even obliterate some of its experiences altogether, by rolling back the state of the digient’s software to a previous digital “checkpoint”. When Lewis imagines alternative ways of instilling an experience-based disposition, he assumes that biological minds work in roughly the same way as software does.
But “The Lifecycle of Software Objects” questions this line of reasoning, even when it comes to actual software. If you make an exact copy of a digient, the identity between the two instantiations only lasts for a moment. Once the two digients have gone their separate ways, they have different experiences, and hence they are no longer the same. Although Blue Gamma is happy to sell replicas of its already-developed avatars, “the expectation is that most people will buy younger digients, when they’re still prelinguistic. Teaching your digient how to talk is half the fun”. In other words, you can always avoid having to teach your digient, by purchasing one that is already trained. But the learning process cannot be dispensed with altogether; it has to have taken place at some point. When you clone a digient,
even though it’s possible to take a snapshot of all that experience and duplicate it ad infinitum, even though it’s possible to sell copies cheaply or give them away for free, each of the resulting digients would still have lived a lifetime. Each one would have once seen the world with new eyes, have had hopes fulfilled and hopes dashed, have learned how it felt to tell a lie and how it felt to be told one.
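The asymmetry Chiang describes here is easy to render schematically. In the toy sketch below, which is my own construction and not anything from the novella’s Neuroblast, copying the learned state is a one-line operation; but that state exists only because the experiences were actually run, and the copies stop being identical the moment their experiences differ:

```python
# A toy illustration of my own (not anything from the novella's Neuroblast):
# copying the learned state is a one-line operation, but that state exists
# only because the experiences were actually run, and the copies stop being
# identical the moment their experiences differ.
import copy

class ToyDigient:
    def __init__(self):
        self.vocabulary = set()

    def experience(self, word):
        self.vocabulary.add(word)          # learning = accumulated experience

original = ToyDigient()
for word in ["food", "play", "jax", "ana"]:      # the slow, uncompressible part
    original.experience(word)

clone = copy.deepcopy(original)                  # the cheap part: a snapshot
assert clone.vocabulary == original.vocabulary   # identical, for a moment

original.experience("ocean")                     # separate lives from here on
clone.experience("chess")
print(original.vocabulary ^ clone.vocabulary)    # {'ocean', 'chess'}
```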
It is because the novella’s digients are generated and grounded in this way, that they must be characterized by what I am calling overall sensibility, a general way of being in the world, rather than by any particular collection of skills, dispositions, and items of knowledge. The novella never shows us things from the digients’ own points of view. But it is clear to Ana and Derek, and to any other human being who encounters them, that the digients have intentions, goals, preferences, and motivations. They display a considerable degree of self-awareness. Also, the digients’ human minders are able to converse with them in the same way, and pretty much on the same level, as they do with other human beings – or at the very least, as they do with children. Everything that the digients say and do implies that they have rich inner lives. The novella implicitly asks us to adopt the charitable principle that, if an entity seems sentient, then we should take it to actually be so.
Now, the question of what I am calling overall sensibility is still very much under debate today. Just as Lewis reduces experience to a matter of particular dispositions, so a number of philosophers and cognitive scientists deny that such a thing as general, all-purpose intelligence even exists. Steven Pinker, for instance, argues that the human mind is composed of many “computational modules”, each of which is dedicated to one specific cognitive task. This picture is not entirely wrong, but I find it dubious for a number of reasons. In the first place, even Jerry Fodor, one of the originators of the modular theory of mind, notes that this theory cannot account for how the mind determines which module to call upon in any given situation. In the second place, the very notion of “modules”, each of which presumably runs a particular algorithm, is too formalized and too linear to account for the messy ways in which thinking actually works, and in which mental capacities gradually develop. Given the principle that “experience is algorithmically incompressible”, we need something that better accounts for the flexibility, spontaneity, and creativity of intelligent behavior than the theory of modules does. In the third place, the notion of mental modules hard-wired in our DNA seems to require concrete physical instantiation, and therefore an unsustainable correlation between particular mental functions and particular areas of the physical brain: in other words, a neo-phrenology. The module theory is too rigid to account for widely distributed processes.
It is true that there are some mental abilities and tasks that do seem both to be quite domain-specific, and to be correlated with particular regions of the brain. So much is suggested by fMRI scans. Certain mental functions do indeed break down as a result of damage to particular cerebral areas. One skill is impaired, without other forms of mental activity being compromised. Facial recognition, for instance, is apparently a separate ability from general visual acuity. Oliver Sacks notes that failures of facial recognition seem to be correlated with “lesions in the underside of the occipitotemporal cortex”, and especially with “damage in a structure called the fusiform gyrus”. This need not mean that the fusiform gyrus is the location in the brain where facial recognition takes place. We can infer that the fusiform gyrus is necessary for facial recognition; but this doesn’t mean that it is sufficient. The overall process is most likely a widely distributed one. The evidence for localization thus remains ambiguous.
Indeed, Sacks suggests that, even in the cases where the localization of mental functions can clearly be traced, there are good reasons to be skeptical about the modular theory of the mind:
The neuropsychologist Elkhonon Goldberg questions the whole notion of discrete, hardwired centers, or modules, with fixed functions in the cerebral cortex. He feels that at higher cortical levels there may be much more in the way of gradients, where areas whose function is developed by experience and training overlap or grade into one another… Goldberg speculates that a gradiential principle constitutes an evolutionary alternative to a modular one, permitting a degree of flexibility and plasticity that would be impossible for a brain that is organized in a purely modular fashion.
Given what Catherine Malabou calls the brain’s overall plasticity, it is probably more helpful to adopt a looser means of expression. Rather than invoking mental modules, we should speak instead (as Chiang already does in the passage quoted above) of heuristics. For the notion of heuristics is much vaguer and fuzzier than that of modules. Heuristics are “algorithmically incompressible” procedures that arise both from innate dispositions and from experience. They are rough rules of thumb, or procedures that tend to be inexact. They are not necessarily logical or rational; they may or may not operate algorithmically; and they are not necessarily “hard-wired” into our genes. Heuristics are justified solely on pragmatic grounds: they have evolved because they have tended to work well in particular situations. With heuristics, then, we get the domain-specificity of mental modules, without the pre-programmed rigidity.
Heuristics are also highly flexible, which means that they can easily be transferred from one domain of experience to another. And this transferability is itself the best indication that something like general intelligence actually does exist. Of course, the wide-ranging applicability of heuristics also means that they will tend to mislead us, when we generalize them too far, or try to use them in inappropriate contexts. Scott Bakker’s extremely reductionist Blind Brain Theory (which I discuss in greater detail in Chapter Four) claims that our intuitions about our own mental processes are unreliable, precisely because “cognition is heuristic all the way down”. We don’t have any more reliable sources of insight. In other words, human beings are not anywhere near as rational as all too many philosophers and theorists have made them out to be. And there is no reason why we should expect artificially intelligent beings to be any more rational than we are.
In “The Lifecycle of Software Objects”, then, intelligence is heuristic; which also means that it is always finite, situational, and embodied. This is precisely what makes it a matter of overall sensibility, rather than one of special cognitive skills. In the novella, the mind operates within, and remains intrinsic to, some particular physical and material context. This is so regardless of whether that mind is biological or virtual. Cognitive powers are necessarily limited. They do not simply overwhelm the world around them. Rather, intelligence consists in finding ways to operate immanently, within the world, and in concert with other entities in the world.
Intelligence works by enlisting and forming alliances with other intelligences – as Bruno Latour might put it. It is therefore necessarily a matter of degree, rather than some sort of absolute. The human characters in “The Lifecycle of Software Objects” have more flexibility and spontaneity than the Neuroblast digients do. But the digients have far more flexibility and spontaneity than any digital agents that actually exist today. And these agents, in their own turn, are more flexible and spontaneous than older, non-computational machines.
Chiang’s account of AI is not far from the bottom-up, embodied, experience-based, behavioral approach to intelligence favored by the roboticist Rodney Brooks, among others. In the early days of computing, intelligence was usually defined as the ability to manipulate representations and symbols, and to draw proper inferences from them. AI systems were therefore organized from the top down, and emphasized propositional logic and massive data crunching. By the 1980s, however, this approach had come to an impasse. Researchers turned instead to connectionist and learning-based strategies, which are somewhat closer to the ways that biological brains actually develop. Intelligence cannot be programmed in advance, or given as a whole. Rather, it emerges piecemeal, in the course of multiple tests and trials. Computer scientists have been quite successful in using connectionist methods to produce expert systems with particular abilities – though less so in fostering general intelligence.
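A minimal example may clarify what “connectionist and learning-based” means in practice. The toy network below is given no rules at all, only adjustable weights and a long run of trials; it is illustrative of the general strategy, not of any particular research system:

```python
# A minimal connectionist sketch (plain NumPy, purely illustrative): the tiny
# network below is given no rules at all, only adjustable weights and a long
# run of trials, after which it happens to compute XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)        # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for trial in range(20000):                             # learning is piecemeal
    h = sigmoid(X @ W1 + b1)                           # hidden layer
    out = sigmoid(h @ W2 + b2)                         # prediction
    # Backpropagate the error and nudge every weight slightly.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())    # should end up close to [0, 1, 1, 0]
```

Whatever the network ends up “knowing” is smeared across its weights, acquired piecemeal over thousands of trials rather than programmed in advance.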
Brooks pushes this line of approach further, by cultivating machine intelligence in robots, rather than in software simulations. Robotic intelligence is necessarily embodied, and keyed to a specific physical environment. Brooks’ robots are not given complicated instructions. Rather, they learn by doing. Instead of relying upon symbolic models and rules, they “use the world as its own model”, and gradually develop the capacity to avoid obstacles and navigate the spaces in which they find themselves. Brooks argues that embodiment and embeddedness are necessary for the emergence of any sort of real intelligence.
Chiang’s digients are software simulations, not physical robots. But they are in effect embodied, since they “live” in a virtual environment, where they have something like an autonomous existence. The novella’s Data Earth resembles (or is extrapolated from) actually-existing online worlds like Second Life. In this environment, the digients’ “bodies” interact with one another, with human-controlled avatars, and with simulated physical objects. They learn the equivalent, not just of mental abilities, but also of physical skills like walking, running and performing acrobatics. For Blue Gamma, as much as for Rodney Brooks, it ultimately isn’t meaningful to divide physical abilities from mental ones; both are best understood as adaptive ways of getting along in the world.
All this flexibility and transferability muddies the question of how the digients are both like and unlike biological organisms. Despite Ana’s use of her background in animal training, she insists that “the digients don’t behave like any real animal. They’ve got this non-animal quality to them”. Since they can speak and read, the digients are perhaps better compared to human children. But they do not develop in the way that young human beings do either:
The digients inhabit simple bodies, so their voyage to maturity is free from the riptides and sudden squalls driven by an organic body’s hormones, but this doesn’t mean that they don’t experience moods or that their personalities never change; their minds are continuously edging into new regions of the phase space defined by the Neuroblast genome. Indeed, it’s possible that the digients will never reach ‘maturity’; the idea of a developmental plateau is based on a biological model that doesn’t necessarily apply. It’s possible their personalities will evolve at the same rate for as long as the digients are kept running.
“The Lifecycle of Software Objects” therefore leaves open the question of whether the digients will ever become capable of full independence from their human minders, “able to make responsible decisions about [their] future”. Even at the end of the novella, after people “have devoted years of [their] attention to raising these digients”, the latter are still more like human “teenagers” than like fully mature adults.
The digients’ intelligence is not different from the intelligence of organic entities in any fundamental sense. In maintaining this, Chiang carefully separates the question of sentience from the question of life. The digients have the former, but not the latter. They can feel and sense, and also reflect on what they feel and sense, just as we can. But not being alive, the digients do not replicate or reproduce themselves. In the absence of hormones, they are asexual. It is also unclear whether they even have anything like a “survival instinct”, or a Spinozian conatus, or any other sort of drive towards self-preservation. The digients are also incapable of feeling pain: they are “equipped with pain circuit-breakers, which renders them immune to torture and thus unappealing to sadists”. This is done, in other words, for the digients’ own protection. But if they were alive as well as sentient, then they would need some sort of aversive mechanism. It is only because of their peculiar status as non-living intelligences that they can do without it.
However, all this changes in the course of the novella. At one point, a clandestine group called the Information Freedom Front releases a hack “for cracking many of Data Earth’s access-control mechanisms”. In the wake of this, a “griefer” uses the hack “to disable the pain circuit-breakers on a digient’s body”. He is then able to torture the digient, and make it feel pain. Of course, he posts a video of the process online; the digients in Data Earth find out about it and watch it themselves. This is just one of a number of creepy and disturbing things that the digients’ human minders are forced to deal with, as the digients become more capable and more autonomous.
This ugly incident also leads to another important point. Chiang does not just extrapolate from actual advances in virtual world design and in artificial intelligence. He also extrapolates from the sociology of the Internet, and from the economic conditions under which software startups actually exist today. The “lifecycle” of the story’s title is not only that of the digients themselves, but also of the corporations that build and develop them, and try to sell them. In a very real sense this “lifecycle” is a commercial product cycle. Despite their sentience, the digients are threatened with obsolescence like any other bit of software. Less than halfway through the novella, Blue Gamma goes out of business. The “customer base” for digients “has stabilized to a small community of hardcore digient owners, and they don’t generate enough revenue to keep Blue Gamma afloat”. And so, the company announces that it
will release a no-fee version of the food-dispensing software so those who want to can keep their digients running as long as they like, but otherwise, the customers are on their own.
At this point, most people simply “suspend” their digients, painlessly terminating their existence. Since the digients are not really alive in the first place, it’s a process “with none of the implications that euthanasia would have”. Former customers move on to other software and other platforms; and most Blue Gamma employees “feel that keeping [a digient] as a pet now would be like doing their job after they’ve stopped being paid”. But Ana and Derek, together with a few others, keep their digients running. They cannot bear to let go. They love their digients – or, what is really the same thing – they feel a Levinasian sense of obligation towards them. And so they set up hobbyist email lists and online forums; and they search for other possibilities of corporate backing.
The second half of the novella is focused on this search. Things become even more urgent when the Data Earth virtual world, within which the digients “live”, also shuts down and goes out of business. Everyone moves on to a new virtual world called Real Space. Ana and Derek keep running a private Data Earth server, so that their digients can continue to function. But the digients’ wider social lives are disrupted; they no longer have other people, or other sorts of digients, to interact with. What’s needed is to port the Neuroblast digients’ code to the Real Space platform. But Ana and Derek cannot do this themselves, and they cannot afford to pay a team of programmers to do it. In order to fund the change, they desperately need to find some sort of new corporate sponsorship.
Chiang uses this development as an opportunity to explore different potential approaches to creating AI. Ana and Derek make pitches to a number of other corporations, which seek to generate artificial intelligence in a different way from how Blue Gamma and Neuroblast did it. One of these is a company called Exponential Appliances, which is interested in superintelligence. Their ultimate goal is
to conjure up the technologist’s dream of AI: an entity of pure cognition, a genius unencumbered by emotions or a body of any kind, an intellect vast and cool yet sympathetic. They’re waiting for a software Athena to spring forth fully grown.
The researchers from Exponential Appliances are “not looking for human-level AI; we’re looking for superhuman AI”. And even more to the point: “we aren’t looking for superintelligent employees, we’re looking for superintelligent products”. Blue Gamma’s digients are clearly not suitable for such a purpose. They aren’t superintelligent, and are unlikely ever to become so, no matter how long their education continues. Their virtual bodies, and their emotions, get in the way of optimizing performance. And even worse (from Exponential Appliances’ point of view), the Blue Gamma digients “think of themselves as persons”, which means that they cannot be treated just as objects, or as commodities. The people from Exponential “want something that responds like a person, but isn’t owed the same obligations as a person”. Ana and Derek, from their years of experience with the digients, know that this is a self-contradictory demand, and that therefore it is impossible. As Ana reflects,
The years she spent raising Jax didn’t just make him fun to talk to, didn’t just provide him with hobbies and a sense of humor. It was what gave him all the attributes Exponential was looking for: fluency at navigating the real world, creativity at solving new problems, judgment you could entrust an important decision to. Every quality that made a person more valuable than a database was a product of experience.
The arguments of the Exponential Appliances researchers remind me of those of David Levy in his book Love + Sex With Robots (2007). Levy proposes, on the one hand, that in the near future robots will be advanced enough that they will be entirely indistinguishable from human beings in sexual relationships. They will give their human partners just as much, physically and emotionally, as human lovers do. However, at the same time Levy also presents as an advantage the fact that robots – unlike actual human beings – are infinitely programmable, so they can be guaranteed never to have desires that differ from what their owners want. Therefore,
you don’t have to buy [a robot] endless meals or drinks, take it to the movies or on vacation to romantic but expensive destinations. It will expect nothing from you, no long-term (or even short-term) emotional returns, unless you have chosen it to be programmed to do so.
This is clearly a fantasy (in the most pejorative sense of that term). Levy wants to have things both ways. The robots cannot be both entirely like us, and yet utterly subordinated to our will. If they do our bidding entirely, then they will not seem autonomously intelligent, and they probably won’t even be self-conscious; we will never be able to forget that they are not human. On the other hand, if these sexbots are really as similar to human partners as Levy claims, then they will need to have a degree of autonomy such that we will not be able to completely program them.
It is worth pointing out just how unusual Chiang’s gradualist and experientially-based vision of artificial intelligence is. Science fiction, futurist speculation, and analytic philosophy alike tend either to deny that strong AI is possible at all, or else to present it in apocalyptic terms. John Searle, with his famous “Chinese Room” argument, exemplifies the former alternative. For Searle, mental intentionality “is a biological phenomenon, and it is as likely to be as causally dependent on the specific biochemistry of its origins as lactation, photosynthesis, or any other biological phenomena”. Because of this dependency, intelligence cannot be emulated in software. “No [computer] program by itself is sufficient for thinking”, Searle argues, because no software can make the leap from syntactic rules to semantic content. Evidently Searle would reject the very idea that the sort of extrapolation that takes place in Chiang’s story is possible.
At the other extreme, the futurist Ray Kurzweil is fully confident that he will soon be able to download his mind into the computer network, and thereby live forever. He claims that “the AI revolution is the most profound transformation that human civilization will experience”, and that it will inevitably take place before the middle of the 21st century. For Kurzweil, the development of general artificial intelligence will lead to a massive break in human history, a Singularity after which everything in the world will be totally transformed – and supposedly for the better. Meanwhile, scientists like Stephen Hawking, entrepreneurs like Elon Musk, and philosophers like Nick Bostrom have all issued warnings that intelligent machines might well be a threat to humanity. Their “superintelligence” will extend so far beyond ours, Bostrom says, that we will never be able to understand them, let alone control them. Their unchecked growth will menace us with extinction: “once humans develop artificial intelligence it would take off on its own, and re-design itself at an ever increasing rate… This is quite possibly the most important and most daunting challenge humanity has ever faced”.
“The Lifecycle of Software Objects” doesn’t entirely rule out the possibility of machine superintelligence. But it implicitly suggests that the Neuroblast approach is much more likely to succeed in creating any sort of machine-based mind. The problem with the vision of superintelligence is that there is really no way to extrapolate it from what exists today. Kurzweil, of course, claims to be doing just this, but his presuppositions and his account of mind are too simplistic to be convincing. Kurzweil explicitly claims, for instance, that the Singularity “is the inexorable result” of Moore’s Law. As computing power becomes steadily cheaper, we will eventually make computers with as many connections as the number of synapses in the human brain. And Kurzweil simply waves away the question of structure and organization. Once this quantitative equality is achieved, he thinks, the rest will just automatically follow.
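The extrapolation Kurzweil relies on really is this simple. Using rough, commonly cited orders of magnitude (the starting capacity and doubling period below are my own illustrative assumptions, not Kurzweil’s figures), the whole argument reduces to a doubling calculation:

```python
# The bare arithmetic behind this style of extrapolation, using rough,
# commonly cited orders of magnitude; the starting capacity and doubling
# period are my own illustrative assumptions, not Kurzweil's figures.
import math

synapses = 1e14                 # rough estimate for a human brain
current_capacity = 1e11         # assumed present-day starting point
doubling_period_years = 2       # Moore's-Law-style doubling

doublings = math.log2(synapses / current_capacity)
print(round(doublings * doubling_period_years, 1))    # roughly 20 years
# The calculation says nothing about how those connections are organized --
# which is precisely the question that gets waved away.
```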
These visions of superintelligence, whether for good or for ill, derive from an overly grandiose and inflated view of what cognition and consciousness actually are, and what they are actually able to do. We tend to be self-congratulatory about our own cognitive powers. We tell ourselves that our “sapience” is vastly superior to the “mere sentience” of all other organisms. And we also usually privilege our capacity for abstraction, and ignore the ways that our own mentality (like that of other entities, organic or machinic) is emotively based, embodied, and situational. All this provides the ground for our picture of ultra-rational and superintelligent posthuman machines. In short, we imagine “intelligence” to be a kind of irresistible comic-book superpower; and we project a posthuman future on that basis. Thus Kurzweil maintains that “intelligence is more powerful than physics”, able to “maneuver and control” all material forces, and thereby “engineer the universe it wants”. This is a fantasy vision of intellect: without limits, without finitude, without contextual grounding, and without friction.
In Chiang’s novella, in contrast, there is no Singularity, and no real prospect of superintelligence. The digients’ cognitive powers are neither special-purpose, nor vast beyond measure. The novella extrapolates its vision from the state of currently-existing software; and, in just the same way, the digients within the story develop their powers by in effect extrapolating from an already-existing base of software performance. By grasping mentality in this way, Chiang has no need to posit – as Kurzweil and Bostrom both do – that the development of intelligent software must take place at an ever-expanding, exponential rate, and end by leaving us behind.
This also means that virtual existence will not be as different from physical existence as we sometimes imagine. For both sorts of existence require a certain degree of embodiment. Human intelligence is not just located in our brains; it also necessarily involves some degree of extension into the outer environment, in the form of what David Chalmers and Andy Clark call the “extended mind”. It is therefore impossible to disentangle biological intelligence from its “artificial” prosthetics and extensions – which range, in the case of human beings, all the way from drawing pictures in the sand, to writing technologies, to the latest computational innovations.
This is yet another reason why Kurzweil’s vision of infinite intelligence is ludicrous. We may well develop sophisticated, radically nonhuman forms of intelligence; but the necessity for embodiment and extension will mean that this intelligence is still subject to biological and energetic limitations, or – as with Chiang’s digients – to virtual equivalents of these. Kurzweil imagines that downloading his mind into the network will free it from all physical constraints. But even if he succeeds in this task, he is bound to be disappointed. Since physical and sensory interactions, embodied feedback mechanisms, and other extensions into the environment are crucial parts of mental functioning, Kurzweil will need to take them all along with him when he disperses into the network.
In fact, “The Lifecycle of Software Objects” presents us with an exact inversion of Kurzweil’s scenario. Instead of human beings downloading their minds into the network, we get the digients temporarily “uploading” themselves into the physical world. This is made possible by the manufacture of
a robot body, newly arrived from the fabrication facility. The robot is humanoid in shape but small, less than three feet in height, to keep the inertia of its limbs low and allow it a moderate amount of agility. Its skin is glossy black and its head is disproportionately large, with a surface mostly occupied by a wraparound display screen.
The digient software is simply redirected, so that it controls this physical body instead of its usual virtual one. Jax is able to do this easily, “because the test avatar isn’t radically different from his own; it’s bulkier, but the limbs and torso have similar proportions”. Instead of seeing, hearing, and feeling its virtual body and virtual surroundings, the digient is able to feel a sense of presence in the actual physical world, thanks to the robot unit’s cameras, microphones, and “tactile sensors”. All this is something of a trick, but it works well enough: Ana “knows that [Jax is] not really in the body – Jax’s code is still being run on the network, and this robot is just a fancy peripheral – but the illusion is perfect”.
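Architecturally, the “fancy peripheral” arrangement amounts to swapping one input/output backend for another while the same mind keeps running on the network. The schematic below is my own gloss on the passage, not anything specified in the novella:

```python
# A schematic of my own, not anything specified in the novella: the agent loop
# never changes; what changes is only which input/output backend -- virtual
# avatar or physical robot -- gets plugged in. In this sense the robot really
# is just a peripheral.
class VirtualBody:
    def sense(self):
        return {"sight": "data_earth_scene", "touch": None}
    def act(self, command):
        print("avatar does:", command)

class RobotBody:
    def sense(self):
        return {"sight": "camera_frame", "touch": "tactile_pad"}
    def act(self, command):
        print("servos do:", command)

def digient_step(percepts):
    # Stand-in for the mind, which keeps running on the network unchanged.
    return "wave" if percepts["touch"] else "look_around"

def run(body, steps=2):
    for _ in range(steps):
        body.act(digient_step(body.sense()))

run(VirtualBody())   # ordinary virtual embodiment
run(RobotBody())     # the same mind, redirected to a physical peripheral
```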
Indeed, it isn’t just that the Neuroblast digients have a limited and bounded intelligence. It is also that they are too autonomous, and too playful, to work for businesses or individuals as digital assistants. They don’t like tedious and repetitious jobs any more than human beings do. If the point is to automate such jobs, so that conscious beings don’t have to endure them, then developing self-conscious AIs is economically and ethically counterproductive. Generalizing from the results of Neuroblast and other competing programs, “many technology pundits declare digients to be a dead end, proof that embodied AI is useless for anything beyond entertainment”.
All this changes with “the introduction of a new genomic engine called Sophonce”. Where the Neuroblast digients are programmed to be “adorable”, so that owners will bond with them, the ones produced by Sophonce are single-minded, task-oriented, unsympathetic, and utterly charmless. The Sophonce digients are deliberately manufactured so as to exhibit “asocial behavior and obsessive personalities”, ideally suited to business contexts. Since they do not need to develop conviviality, playfulness, and other social skills, they do not require anywhere near as much human intervention and training as the Neuroblast digients do. The problem with them is that they are so unengaging “that few people want to engage in even the limited amounts of interaction that the digients require”.
Enter a company called Polytope. It is trying to produce a new breed of smart digital assistants. The plan is to augment the capabilities of the Sophonce digients by hands-on training of the sort that Ana provided to Jax and other Blue Gamma/Neuroblast digients. This would supposedly give Polytope’s digients the best of both worlds. The problem, of course, is that the Sophonce digients cannot establish emotional relationships with their trainers, or with anyone. The Polytope people hope to get around this by requiring the human trainers to use something called InstantRapport: “one of the smart transdermals, a patch that delivers doses of an oxytocin-opioid cocktail whenever the wearer is in the presence of a specific person”. The company reasons that “the only way trainers will feel affection for Sophonce digients is with pharmaceutical intervention”. The digients will never feel any sort of empathy on their end, but the human trainers will be made to develop affection for, and to empathize with, those digients nonetheless.
Ana is tempted to take the job, despite its creepiness, if in return Polytope agrees to port the Neuroblast digients to Real Space. But of course, this raises a whole set of questions. Will Ana still be able to care for Jax in the same way, once she starts spending most of her time with a Sophonce/Polytope digient instead? More generally, what does it mean to freely agree to a procedure that changes the very basis of who you are, in a way that you cannot control? Moreover, what does it mean to do this as a condition of employment? The situation is quite different from voluntary procedures like getting plastic surgery, or LASIK eye surgery, or (in the near future) intelligence augmentation through chemical or genetic means. It may well be that chemical intervention can change our personalities, in the same way that rewriting code can change the personalities of the digients. But the question of consent, both for human beings and for digients, remains murky and troublesome.
The other alternative for rescuing Ana’s and Derek’s digients is equally creepy. A company called Binary Desire will gladly port the Neuroblast code to Real Space, in return for being allowed to license the digients as sexbots. The Binary Desire people emphasize that they are actively trying to get away from cheap and sleazy exploitation. “As long as there have been digients, there have been people trying to have sex with them”; but this has usually happened at a very low level. In the world of the novella, there are already digital entities like Sophonce digients “dressed in Marilyn Monroe avatars, all bleating Wanna suck dick. It’s not pretty.”
Binary Desire seeks instead to develop virtual “sex partners with real personality”:
As the digient gets to know a human, we’ll enhance the emotional dimension of their interactions, both sexual and non-sexual, so they’ll generate love in the digient… For the digient, it will be indistinguishable from falling in love spontaneously.
Also, Binary Desire promises to “retain the circuit-breakers” that prevent the digients from feeling pain, so that they will never become the victims of sadists:
The digients won’t be subjected to any coercion, not even economic coercion. If we wanted to sell faked sexual desire, there are cheaper ways we could do it. The whole point of this enterprise is to create an alternative to fake desire. We believe that sex is better when both parties enjoy it; better as an experience, and better for society.
The playful and emotional nature of the Blue Gamma digients makes them perfect candidates for the Binary Desire plan. Ana notes that it is a bit “like a Neuroblast version of InstantRapport”. The difference is that here it is the digients’ personalities that are manipulated, rather than those of the human trainers and partners. If digients are programmed this way, they won’t have “any choice about what they enjoy”. But the Binary Desire people deny that the situation is “any different for humans… We become sexual beings whether we want to or not”. It’s just that biological human beings are programmed, or re-programmed, by electrochemical changes, instead of by rewritten code.
“The Lifecycle of Software Objects” doesn’t offer an answer to any of these dilemmas. There is no way to resolve the age-old debate between free will and determinism, for instance. The novella instead suggests that whatever is true for us must also be true for the digients. To the extent that we can make spontaneous, unforced decisions, so can they. And to the extent that the digients are susceptible to being manipulated from outside, the same is true for biological human beings. This is not changed, in principle, by the fact that we have access to the digients’ source code, but not to our own – regardless of what, if anything, the biological equivalent of “source code” might turn out to be.
Chiang’s novella also makes the point that technological developments can never be separated from social and economic ones. No research program can be pursued without sufficient funding. In our current neoliberal climate, this means that the development of artificial intelligence is necessarily subject to corporate control, and can only be pursued if, and to the extent that, it promises profit. (The one exception to this, not discussed in the story, is research conducted secretly by the military and the security services.) Ultimately, play and pleasure – the initial endowment of the Neuroblast/Blue Gamma digients, and the reason why people like Ana and Derek are so attached to them – must be subordinated to economic considerations. Blue Gamma goes out of business, and the three alternatives Ana and Derek must face in order to keep their digients going all involve monetizing, and restrictively channeling, the digients’ abilities.
Along these lines, it is ironic, but not particularly surprising, that the only way to give the digients legal rights – to endow them with any degree of autonomy, or with the legal status of personhood – is to register them as corporations:
Artificial-life hobbyists all agree on the impossibility of digients ever getting legal protection as a class, citing dogs as an example: human compassion for dogs is both deep and wide, but the euthanasia of dogs in pet shelters amounts to an ongoing canine holocaust, and if the courts haven’t put a stop to that, they certainly aren’t going to grant protection to entities that lack a heartbeat. Given this, some owners believe the most they can hope for is legal protection on an individual basis: by filing articles of incorporation on a specific digient, an owner can take advantage of a substantial body of case law that establishes rights for nonhuman entities.
This makes a grim sort of sense when we consider that corporations are not only recognized as “persons” by the United States courts, but even granted freedoms and rights that biological persons do not enjoy. At one point in the novella, Marco and Polo ask Derek if he will register them as corporations, so that they “can do whatever [they] want”. At another point, Jax asks Ana if he can get a job, so that he will be able to pay for her to continue taking care of him. The digients’ dependence on their human trainers, and their prospective independence from those trainers, are both financially mediated in the long term.
The largest tension running through the novella is that between the digients’ sheer existence and their economic utility. Where Sophonce digients have particular marketable skills, the Blue Gamma/Neuroblast digients are characterized, above all, by their playfulness and curiosity. They exhibit what Alfred North Whitehead calls “a certain absoluteness of self-enjoyment”. Their sentience is far more a matter of feeling, than it is one of cognition. And this is why their existence is so precarious. The things that they do are gratuitous rather than functional, which means that – short of turning them into sexbots – they cannot really be monetized. Most recent philosophical accounts of mind are entirely functionalist and cognitivist. Feeling and emotion only play secondary roles. As Robert Zajonc summarizes it, for cognitivism “affect cannot be independent of cognition because by definition cognition is a necessary precondition for affective arousal”. But “The Lifecycle of Software Objects” insists that this is wrong. The cognitive skills of the Neuroblast digients are secondary to their emotions. If “experience is the best teacher”, this is because it is only through the adventures of affect that Jax and the other digients are able to learn to perform cognitive tasks in the first place. If they can walk, talk, read, and otherwise evaluate and negotiate their way through their environment, it’s because they already have a certain basic sensitivity. The analytic-philosophical privileging of cognition over affect is of a piece with the economic privileging of the digients’ business skills (or for that matter, sexual skills) over their own self-enjoyment. “The Lifecycle of Software Objects” doesn’t suggest that we can ever escape these sorts of constraints, but it does tell us that they aren’t the last word.