Maureen McHugh’s short story “The Kingdom of the Blind” (2011) is about a computer program, called DMS, that apparently achieves sentience. DMS is a “complex [software] system, spread across multiple servers”, and “engineered by using genetic algorithms”. Its code is extremely clunky and convoluted, and even its programmers do not really understand how it works. DMS’s job is to monitor and manage the “physical plant – thermostats, lights, hot water, and air filtration” – of the Benevola Health Network, a group of hospitals and health care systems spread across North America. DMS keeps track of “security cameras, smoke detectors, CO detectors, and a host of other machines”, checking for things like run-down batteries and misaligned sensors. It also does “complicated pattern recognition and statistical stuff”, compiling information on patterns of disease in hospitals for “the CDC and the National Institute of Health”. DMS is the sort of unglamorous software that most of us never think about or even notice, much less knowingly interact with – and yet, our lives depend upon its correct functioning.
It is not surprising that we are mostly unaware of how deeply our lives today depend upon the functioning of complex expert systems, of the sort exemplified by DMS. For we generally tend to overlook the material infrastructures that surround us and support us: things like electrical wiring, elevators, and heating and cooling systems – not to mention the oxygen in the atmosphere, and the bedrock beneath our feet. Most of the time, we take all these things for granted. We only notice them when they have stopped doing what we expect and need them to do. Thus Heidegger says that we never really see a hammer until it is broken; and Marshall McLuhan says that a fish could never have discovered water. The sheer existence of a thing only becomes truly apparent to us when that thing emerges from the background, and stands out on its own account. This can happen when we stop taking a thing for granted, because we can no longer rely on it to perform its usual tasks for us. It can also happen in science fiction narratives, where worlds are constructed with very different backgrounds and infrastructures than our own, or where (as in McHugh’s story) material and technological factors are explicitly foregrounded.
Why is this important? Our basic orientation towards the world is a practical and pragmatic one. Our minds and our senses evolved, not in order to let us grasp things as they actually are, but specifically in service to the goals of our own survival, reproduction, and flourishing. Our perceptions therefore tend to be limited, partial, and self-interested. As Henri Bergson puts it, perception “results from the discarding of what has no interest for our needs, or more generally, for our functions”. In consequence, we usually underestimate what the things around us can do in and for themselves. We consider them only in terms of how they help or hinder our own aims. We tend to assume that, aside from our uses of them, material things are simply there, merely passive and inert.
But this is wrong. Such recent thinkers as Bruno Latour, Jane Bennett, and Ian Bogost remind us that the nonhuman entities with which we share the world – including, but not limited to, our tools – are active in their own right. They have their own powers, interests, and points of view. And if we engineer them, in various ways, they “engineer” us as well, nudging us to adapt to their demands. Automobiles, computers, and kidney dialysis machines were made to serve particular human needs; but in turn, they also induce human habits and behaviors to change. Nonhuman things must therefore be seen as what Latour calls actants: active agents with their own intentions and goals, and which affect one another, as well as affecting us. As Bennett puts it, material things do not just have a “negative power or recalcitrance” as they resist our efforts. They also exert “a positive, productive power of their own”. Things are creative. And again, one of the great potentialities of science fiction is to illuminate the positive, productive powers of things, of materials, and of technological apparatuses.
Actants, or things, need not be restricted to single, compact, and easily identifiable entities. Today, in the era of globalization, and of what has come to be called the Anthropocene, our lives are increasingly intertwined with, and dependent upon, complex, widely distributed technical systems and networks. These mega-entities are what Timothy Morton calls hyperobjects. Such things are altogether real; but they are so “massively distributed in time and space” that we cannot ever see them as wholes, or grasp them all at once. Morton cites “global warming” and “nuclear radiation from plutonium” as examples of hyperobjects; one might equally well mention the Internet, and the global derivatives market.
On a more modest scale, the Benevola Health Network in McHugh’s story is also a hyperobject; especially if we include the software that manages it. It’s almost as if the Health Network were some sort of alien organism, with DMS as its nervous system or brain. The whole system is what Latour calls a “black box”: something that functions in a fairly regular way, so that “one need focus only upon its inputs and outputs and not on its internal complexity”. A black box produces more or less predictable effects upon the world around it, even though we do not know what is going on inside it. As long as DMS does its job, its programmers do not have to worry about their inability to understand its code. The entire Benevola Health Network can also be characterized as what Bogost calls a “unit”: an “isolated and unique” entity that nonetheless “encloses a system – an entire universe’s worth” within itself, and that in turn “becomes part of another system – often many other systems – as it jostles about”.
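Latour’s notion can be glossed with a trivial sketch – entirely an invented illustration of mine, not anything from the story – in which a function’s internals are hidden, and a user tests it only at its inputs and outputs:

```python
# A toy "black box": the caller sees only inputs and outputs.
# The function body stands in for internal complexity that, as long
# as the outputs stay predictable, nobody ever needs to inspect.

def black_box(sensor_reading: float) -> str:
    # Internal logic, deliberately treated as opaque by its users.
    return "alert" if sensor_reading > 100.4 else "ok"

# Probing the box from outside: input in, output out.
for reading in (98.6, 102.1):
    print(reading, "->", black_box(reading))
```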
DMS itself is a system of systems, as it is composed of “subroutines”, to which the programmers have given the names of “Haitian voodoo loa… possession spirit[s]”. These subroutines are independent from one another, but nonetheless “weirdly interconnected”. The idea of naming such autonomous software programs after Vodoun deities dates back at least to William Gibson’s “cyberpunk” trilogy of the 1980s (Neuromancer, Count Zero, and Mona Lisa Overdrive). In these novels, the loas are portions of the global computer network that have become self-sufficient and self-aware. Gibson’s invention has since become almost a cliché of geek culture. Sydney, the tech support person who is the protagonist of “The Kingdom of the Blind”, is well aware of this; she sardonically reflects that, in naming the subroutines in this way, “some programmers had undoubtedly been very pleased with themselves”.
Nonetheless, the Vodoun appellations for DMS’s subroutines are not altogether inapt. For “The Kingdom of the Blind” turns upon questions of machine autonomy and awareness. From the very beginning, the whole Benevola Health System seems haunted, or possessed, by oblique intentions. Even when it performs adequately, without crashing, its moment-to-moment functioning is highly enigmatic. The tech support people who monitor DMS, take care of it, and write its code find it to be “opaque as a stone”: so unreceptive to their attempts at understanding it that they are often not sure whether it is intelligible at all. And as we will see, it turns out that the oblique intentions of DMS, like those of the “loas” in Gibson’s novels, are ultimately aesthetic ones.
“The Kingdom of the Blind” tells the story of what happens when DMS starts acting oddly: that is to say, even more oddly than usual. The program begins to exhibit what might well be thought of as deliberate behavior. One afternoon, starting exactly at “3:17 EST”, DMS causes a series of “rolling blackouts” – brief cutoffs of electricity – at all the facilities under its control. The blackouts take place in an orderly fashion. The lights go out in a fixed geographical pattern at each facility: from east to west, or from north to south. And the facilities are affected in the order that they are listed in DMS’s lookup table. The sequence is repeated the following day, at exactly the same time, but in reverse order. All in all, it seems like “a kind of weird utility/weather event”, a perturbation of the technological atmosphere. The tech support people are unable to find anything in DMS’s code that could have caused this series of events to happen. “Why 3:17?”, they ask; “why the electrical system?” The rolling blackout is nonrandom; but it lacks any discernible function or rationale, and its particular details seem entirely arbitrary. We might think of it, therefore, as a purely gratuitous gesture. This means that it is a kind of pure aesthetic expression: what Kant characterizes as a condition of “a merely formal purposiveness, i.e., a purposiveness without an end”.
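The shape of this gesture can be sketched in a few lines of Python – a purely hypothetical reconstruction, since McHugh shows us no actual code, and every name below is invented:

```python
# Hypothetical sketch of the blackout pattern described above:
# facilities go dark in lookup-table order one day, and in reverse
# order the next, always starting at the same time.

FACILITY_TABLE = ["Benevola East", "Benevola Central", "Benevola West"]

def blackout_order(day: int) -> list[str]:
    """Return the sequence of facilities affected on a given day."""
    return FACILITY_TABLE if day % 2 == 0 else list(reversed(FACILITY_TABLE))

for day in (0, 1):
    print(f"Day {day}, 3:17 EST:", " -> ".join(blackout_order(day)))
```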
Aside from this initial computer aberration, very little actually happens in the course of “The Kingdom of the Blind”. The protagonist Sydney and her co-worker Damien try out various ways of dealing with the glitch. Although they cannot find anything wrong with DMS’s actual code, they keep on looking for ways to “build a box around the bug”. They try to identify anomalies, or “data corruption”, in the program’s output. They “poke” DMS, by feeding it unexpected data – “a thousand-character string of ones and zeroes” – in the hope of thereby provoking an intelligible response. They “reroute” DMS’s code in order to prevent additional blackouts: as soon as the sequence starts, they automatically “switch the electronic systems to maintenance mode”, stop DMS from “actually touching the electrical system”, and force the system “to send a report” about it to their printers instead. Amidst all this, they also prepare for the last-ditch option – if nothing else works – of shutting DMS down entirely, and reloading it from an old backup. In giving the details of all these procedures, McHugh’s story is not far from being a naturalistic account of what information technology workers actually do.
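The “reroute” tactic, in particular, lends itself to a minimal sketch: intercept any command addressed to the electrical system and divert it to a report instead. Nothing below comes from the story’s code; the names and data shapes are illustrative assumptions only:

```python
# Hypothetical sketch of "building a box around the bug": while in
# maintenance mode, commands aimed at the electrical system are
# blocked and written up as reports (sent "to the printer") instead.

def route(command: dict, maintenance_mode: bool) -> str:
    """Either execute a command or divert it to a report."""
    if maintenance_mode and command["target"] == "electrical":
        return f"REPORT ONLY: blocked {command['action']} at {command['site']}"
    return f"EXECUTED: {command['action']} at {command['site']}"

cmd = {"target": "electrical", "action": "lights_off", "site": "Benevola East"}
print(route(cmd, maintenance_mode=True))   # diverted, lights stay on
print(route(cmd, maintenance_mode=False))  # would reach the real system
```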
But beyond these practical measures, Sydney and Damien also engage in speculation about what is going on with DMS – or, more precisely, about what is going on within DMS. Confounded by this strange behavior, they try to peer inside the “black box”. And they grapple with the notion that DMS might be in some sense “aware”. In this way, the story touches upon the disappointing history of artificial intelligence research. The quest for artificial intelligence has been at the heart of computer science and computer engineering ever since the 1950s; but it has never had much success. Research was long hampered by the mistaken expectation that computers could think in the same ways that human beings do, as well as by its converse, the mistaken belief that human beings actually think in a manner analogous to how computers operate. The old, top-down paradigms for artificial intelligence, based on the rules of logic and on symbolic processing, never worked very well, and have long since been abandoned. But it remains to be seen whether the newer, bottom-up paradigms, which emphasize such things as embodiment, interactivity, spontaneous emergence, and incremental learning in simulated neural networks, will be any more successful.
In any case, it is only recently that we have come to realize that – should computers ever actually come to think – they will do so in ways that are quite different from our own modes of thought. The problem here is not really a cognitive one. Basic cognition is a fairly “easy” engineering problem, and also a fairly “objective” one. Cognition is largely automatic for biological organisms; it takes place at a low level of mentality; and it is mostly unconscious, even in human beings. We should not be surprised, therefore, that cognition is readily attainable by digital means as well. Computers are already much better than human beings at such straightforwardly cognitive tasks as mathematical calculations, playing chess, recognizing faces in a crowd, and winning rounds of Jeopardy. Computers excel at quickly extracting relevant information from large quantities of data.
The real difficulties lie elsewhere. Artificial intelligence research has accomplished very little when it comes to addressing mental processes like affect, will, and desire – not to mention qualitative experience, awareness, or what David Chalmers calls the “hard problem” of consciousness itself. Even so-called “affective computing” is much more concerned with enabling computers to “read” human emotions, and in turn to provoke and manipulate human emotional responses, than it is with eliciting anything like the affective states of computers themselves. The latter, should they ever come to exist, are likely to be quite different from anything that we are accustomed to. Sydney recognizes the problem: “DMS didn’t see or hear, didn’t eat or breathe. Its ‘senses’ were all involved in interpreting data”. As DMS perceives a different world than we do, and is physically constructed along very different lines than are our own brains and bodies, its “feelings” are likely to be quite different from ours, as well.
In addition, emotional experience in software is likely to be quite tenuous and unstable. For as Sydney reflects in the course of the story, “organic systems are far less fragile than computer systems. Organic systems decay gracefully. Computer systems break easily”. Affect and consciousness, therefore, may well come to computers only in brief flashes. They will be difficult for digital machines to sustain. For this reason, it is far more likely that we will come across computer sentience unexpectedly – as seemingly happens in McHugh’s story – than that we will be able to generate it reliably by means of any actually-existing AI research programs.
In “The Kingdom of the Blind”, Sydney and Damien focus their speculations on these deeper, noncognitive aspects of mentality. They are inspired by a (fictitious) computer scientist at MIT, who believes that certain other malfunctioning large-scale computer systems “had shown patterns that seemed purposeful and that could be interpreted as signs that the systems were testing their environments”. Sydney and Damien think that this may be the case for DMS as well. But if so, then how can they prove it? Can they establish communication with DMS? Can they give it something like a Turing Test? More deeply, what are the characteristics of sentience for a computer? The philosopher Thomas Nagel famously wrote of the difficulty of understanding “what it is like to be a bat”. It is even harder to imagine “what it is like” to be a nonorganic system like DMS, given that its presumptive mentality is so radically different from any mentality of the human kind.
Sydney and Damien are therefore forced to puzzle over the complex ramifications of machine sentience. They wonder if DMS’s apparent awareness means that it is alive, or if it is rather “aware but not alive” – although they cannot begin to imagine what this latter condition might mean. They wonder what DMS “wants”, or even if it wants. They ponder the oddness of a sentient system that – in contrast to all living things – has “no survival instinct”. They wonder what it could mean to “test” an “environment” that is entirely abstract, as it consists only in “complex fields of data”. They wonder about the very basis of ascribing consciousness to another entity, given that we can never experience somebody else’s feelings from the inside: “You think I’m conscious because I’m like you, and you’re conscious”, Damien tells Sydney; but this kind of reasoning does not work with computers, since “DMS isn’t like us” at all. Sydney and Damien even wonder whether mentality can in fact be equated with consciousness, or whether DMS’s mental activities might rather be involuntary and unconscious, like those of the autonomic nervous system in human beings and animals. And they worry about whether deleting DMS and restoring it from backup is ethical. Would its awareness simply resume from where it was before, like “if someone has a heart attack and you shock them back”? Or would deleting DMS mean erasing the mind of a sentient being?
Of course, “The Kingdom of the Blind” does not provide answers to any of these dilemmas. The point is rather that the very prospect of sentience in software unavoidably leads us into deep questions in the philosophy of mind, questions that have engaged Western thinkers at least since Descartes, and that are still matters of controversy today. Sydney and Damien are obliged to confront such things as the conundrum of the brain in a vat (which is really just a contemporary, science-fictional equivalent of Descartes’ “evil genius” hypothesis), and the puzzle of how sentience is embodied, and whether it can be preserved as a medium-independent pattern (which, as Damien puts it, is really the problem of the transporter in Star Trek: “if I beam you down to the planet, does that mean I have actually killed you and sent an exact replica in your place?”).
Despite these difficulties, Sydney is eventually able to build a picture of “what it is like” to be DMS. As she engages with the system more and more, Sydney finds that “she was beginning to get a feeling about DMS. About what DMS might be like. She felt as if she could sort of sense the edges of DMS’s personality”. The key to Sydney’s understanding is her recognition that the software system does not sense the outside world; unlike biological organisms, it cannot “see or hear or smell or taste”. Although DMS monitors surveillance cameras, “it didn’t care what the security cameras ‘saw’… It didn’t use them to sense the world; it sensed them” (emphasis added). In short, “the world for DMS was data, and DMS swam in the data”. DMS short-circuits reference; it does not have anything like a correspondence theory of truth. It does not construct internal representations, which would serve the purpose of modeling, or corresponding to, things in its external environment. Rather, DMS’s “experiences” are entirely immanent: constructions of the “data stream” that feeds back directly into DMS itself.
Sydney worries a lot about this feedback structure. What does it mean for everything to be data? Can DMS escape its own self-reinforcing feedback loops, and encounter something Other, something outside of itself? Or is the system inherently solipsistic? “What would it be like to be alone?”, Sydney wonders. “Of course, as a human being, she was a social animal. Even the cat was a somewhat social animal. But DMS wasn’t. DMS didn’t even know anyone else existed. DMS lived in a data stream”. DMS may well be sentient code; but for Sydney, “the whole point of DMS was that it was not someone else speaking through the code”. That is to say, even if DMS is conscious, it has no separate, self-reflexive consciousness. The code itself feels and thinks; there is no “ghost in the machine”, no observer separate from what is observed. In consequence, DMS isn’t “moral or immoral, ethical or unethical. DMS was like that, because for DMS, nothing else was alive”. In its aloneness, and in its blindness and deafness to the world around it, DMS strikes Sydney as uncanny. Rather than being a living thing, she thinks, it is more like “a ghost or a spirit”, something on the borderline between sentience and insentience, as well as between life and nonlife.
It is currently fashionable to claim that, at bottom, the universe is nothing but information. On the scale of human life and sentience, this would ultimately mean that we are all, like DMS, swimming in a sea of nothing but our own data. We would all be closed, autopoietic systems, shut off from the outside world even when we were being “perturbed” by it. As Levi Bryant puts this theory, “the operations of an autopoietic system refer only to themselves, and are products of the system itself”. More generally, Bryant says, “systems or substances only relate to themselves” – and this holds even for things that are not “autopoietic” or self-sustaining. All entities are “closed to the world, relating to systems in their environment only through their own distinctions or organization”.
But in point of fact, any such closure is impossible. Meanings are always leaky and contextual, slipping away from the systems that generate them. Moreover, entities and systems can never be adequately characterized in terms of their own self-generated self-understandings: these are always misleading, and radically incomplete. At the very least, all responsive entities – including computers, no less than living organisms – require continual flows of energy, coming from outside them, in order to function, or even just to sustain themselves. A computer needs electricity, just as a plant needs sunlight. Both computers and living organisms are dissipative systems, consuming and discharging great quantities of energy, and remaining far from thermodynamic equilibrium. Energy stops flowing, and equilibrium is attained, only when the entity in question is dead. Even if systems theory were right in asserting that an entity or system can only “know” to the extent that it translates everything into its own internal terms, this could not make for an exhaustive description. For any entity or system is still dependent upon, and still internally affected by, outside forces and energies that it does not, and cannot, “know”.
Responsive entities are energetic before they are semiotic. This is why they cannot be adequately described in the terms of information theory and systems theory. Concepts like Maturana and Varela’s “autopoiesis” and Luhmann’s “operational closure” are supposed to explain how dynamic entities resist entropic dissolution, and how they manage to maintain themselves “on the edge of chaos”. But such concepts are overly static. They assume that responsive entities are characterized above all by an underlying drive to persist in being (homeostasis, or Spinozian conatus). And so they ignore the ways that these entities, with their enormous energy flows and energy expenditures, are equally driven by a will to change, a drive to reduce energy gradients, and thereby to push at their own limits. “The primary meaning of ‘life’”, as Whitehead puts it, is not self-preservation, but rather “the origination of conceptual novelty – novelty of appetition”.
What does this mean for an entity like DMS? In a certain sense, it is literally true that DMS encounters nothing aside from its own data. Computers are the locus classicus of information theory and systems theory, because of the way that they recode their energetic expenditures in straightforwardly informational terms. The computer “understands” the energy that fuels it, and that flows through it, only by means of a simple binary distinction: on or off, 1 or 0, included or excluded, above or below a certain threshold of intensity. This binary distinction is the minimal unit of information, and the primary instance of all differentiation (such as that between a system and its environment). The binary is therefore something like the “degree zero”, or the primordial form, of mental activity. We can think of DMS as swimming in its own data, because those data are its bottom line. It doesn’t parse the world that it encounters into any more complicated categories than 1 and 0. Sydney thinks of DMS as having a primitive mentality, like a shark; its mind is “purposeful and opaque… Sharks don’t have a neocortex. Their brain is simple”.
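The reduction described here can be stated in a few lines of code – a toy model, with invented numbers, of how a continuous intensity gets recoded as the minimal unit of information:

```python
# Toy illustration of the binary "degree zero" of information:
# an analog intensity is read only as above or below a threshold.

THRESHOLD = 2.5  # hypothetical logic threshold, in volts

def to_bit(voltage: float) -> int:
    """Quantize a continuous intensity into a 1 or a 0."""
    return 1 if voltage >= THRESHOLD else 0

samples = [0.3, 4.8, 2.6, 1.1, 5.0]   # invented analog readings
print([to_bit(v) for v in samples])   # -> [0, 1, 1, 0, 1]
```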
Nonetheless, even DMS is not solely informational, but energetic as well. This is why it poses a problem for Sydney and Damien. DMS seems “purposeful and opaque”, because its activity cannot be entirely “explained away” in terms of its informational function. DMS is “aware”, precisely to the extent that it is irreducible to the data that it codes and carries. Even if it doesn’t “want anything” in particular, it still displays a certain will to novelty. When it starts a rolling blackout, it is experimenting and exploring; perhaps it is even being playful. Sydney and Damien never find out for sure. But in any case, DMS is not “operationally closed”. It is able to envision – and it purposefully pushes against – the limits of its own codifications.
In other words, DMS “knows” – or better, experiences – its own vulnerability and precarity. Even a shark faces situations in which it is menaced with death. DMS does not seem to have a “survival instinct”; it may well “not care if it was or was not”. Nevertheless, like any other computer system, DMS is susceptible to energy fluctuations that surpass a certain measure, and thereby resist digital recoding. At one extreme, DMS can “die” in an electrical blackout; indeed, this is what happens whenever it is shut down, and then rebooted or restored from backup. At the other extreme, DMS could be wiped out by too much electricity: for instance, in the form of a catastrophic electromagnetic pulse. Such are the Kantian “limits of possible experience” for DMS. But these limits are empirically testable, and potentially changeable – in contrast to Kant’s claim that the limits of thought are given once and for all, a priori. And perhaps this explains why DMS “tests its environment” by causing rolling electrical blackouts: it is pushing at the limits of what it can most directly feel or sense.
Sydney’s speculations about DMS do not take place in a vacuum. Rather, they strongly resonate with the working conditions in her office. As the only woman in the DMS tech support group, alongside eleven geeky and more-or-less oblivious men, Sydney has to deal with the usual gender politics. She is often not listened to, or not taken seriously; and she is always assigned the most low-grade and boring work: “grunt work”, or the parts of writing code that are “dull as hell”. As far as her male co-workers are concerned, Sydney is herself part of the taken-for-granted background; the men only notice her when she causes trouble. At the same time, the men expect her to fulfill all the social obligations that they themselves refuse to be bothered with. As complaints about DMS’s rolling blackouts come in, Sydney is the one who has to answer the phone, and mollify angry customers. “You’re the least Asperger’s person in the department”, Damien tells her. “It’s that having-two-X-chromosomes thing”.
Sydney knows that she must not contest comments like this, nor refuse the grunt assignments that come with them. She is uncomfortably aware that “she had gotten this job because she was a woman, and human resources had seen an opportunity to increase diversity”. Even with Damien, the co-worker to whom she is closest, she is compelled to remain entirely deferential: “her whole relationship with Damien rested on the understanding that he was the guru, the smart one. He was Obi Wan. She was just a girl whom he could explain things to”. At times she internalizes this sense of her own inferiority, imagining that “the fear of getting in trouble was what made her not as good a programmer and that, in fact, it was all linked to testosterone and that was why there were more guy programmers than women”. But at other times, she grasps quite well what the male programmers are thinking and how they work, and she “[finds] herself thinking, maybe with some experience, she could code pretty good, too”. But Sydney knows that if she ever asserts, much less acts upon, her equality with the men, then she will put herself in danger of losing her job. And so she dutifully answers the phone when Damien asks her to. She takes his “Aspergers” crack as if it were a compliment, self-disparagingly remarking that “in the kingdom of the blind… the one-eyed girl is king”.
The story’s invocation of Asperger’s Syndrome is no accident. The idea that male computer geeks suffer from Asperger’s, or from some other sort of autism, is a widespread cliché of contemporary culture. Actually, this stereotype goes along with our culture’s wider pathologization of autism. As Erin Manning puts it, anyone whose “orientation toward the world does not privilege the human voice – or the human face” tends to be accused of “mindblindness”: the lack of a so-called “theory of mind”, or the inability to imagine the mental states of others at all. In consequence, the “dominant assumption” in our society is that “the autistic is categorically incapable of relation and empathy”. This diagnosis entirely ignores the ways in which autistics are in fact acutely sensitive beyond the human, responsive to “resonances across scales and registers of life, both organic and inorganic”; the testimony of autistics themselves indicates that, for them, “everything is somewhat alive”, and therefore an object of empathy and concern.
In effect, therefore, autistics are stigmatized for not being correlationists; they are seen as deficient in sensibility, because they do not share in our default post-Cartesian and post-Kantian assumption that the world exists essentially or exclusively for us. Rather than approaching the world “according to standard human-centered expectations”, Manning says, autistics evince an “attunement to life as an incipient ecology of practices, an ecology that does not privilege the human but attends to the more-than-human”. The phobic mainstream response to autism bespeaks a failure to appreciate the full range of human (as well as nonhuman) neurodiversity.
But unfortunately, the stereotype of the male-geek-with-Asperger’s-Syndrome is less a recognition of actually existing neurodiversity, than it is a geek’s all-purpose alibi for bad behavior. It works as an excuse for ignoring social niceties, and ignoring other people’s needs and wishes. It’s the perfect excuse for saddling women like Sydney with the responsibility for fulfilling social obligations (like fielding complaints about software malfunctions over the phone). Sydney reflects that Damien is not in fact autistic, and that there are at most two people in her office who might possibly be “clinically Aspergers”. Even if this does reflect a higher incidence of autism among info-tech people than among the general population, it does not help to explain the way that the software industry is organized in the first place.
Nonetheless, Sydney recognizes the prevalence of the autism stereotype, for both good and ill. On the one hand, she envies the ability of hackers like Damien to “get in the zone” when they are working. Such people are able to attain an Aspergers-like level of concentration, or of oblivion to customary human needs, to the point that they may even “forget to eat”. Sydney herself, in contrast, “had never forgotten to eat in her life”. On the other hand, Sydney comes to realize that, just because Damien has “big, soulful-looking eyes”, she was misled into attributing to him “certain emotional characteristics – sensitivity, vulnerability – that he, in fact, did not have”. In fact, Damien is ruthlessly single-minded. But this is not a result of any sort of autism; it is rather – like his failure to treat Sydney as an equal – an all-too-common consequence of normative gender socialization. Damien’s personality is more the result of “social reasons” than it is of “biological reasons”.
In contrast to the hacker dudes in Sydney’s office, however, DMS actually does seem to be a kind of autistic subject – albeit not a human one. Sydney indicates as much when she conceives of DMS as being alone, as unaware of anything outside itself, and as not caring about anything beyond itself. These are all parts of the crude common image of autism in mainstream culture. But as we have already seen, this image involves a gross oversimplification. For human autistics, Manning writes, “the world seems to emerge in all of its relational complexity with few immediate buffers to compartmentalize it”. Instead of organizing their sensations hierarchically, autistics “attend to everything the same way with no discrimination”. This makes it difficult for them to “subtract from the polyphonous multiplicity of sensation” in the ways that (as Bergson noted) human neurotypicals do. Perhaps DMS’s complete immersion in data bespeaks a similar inability to subtract, to simplify, and to hierarchize.
Because they do not make the customary pragmatic subtractions from experience, autistics tend neither to be able to coalesce themselves as stable subjects, nor to be able to identify “others” with firm and fixed boundaries. This does indeed make many of the tasks of everyday living difficult for them. In fact, autistics suffer from overinvolvement and oversensitivity. But ironically, it is these very qualities that lead to their being stigmatized as withdrawn into themselves, and incapable of relating to others. A similar logic applies to DMS in “The Kingdom of the Blind”. Sydney can only regard the software system as solitary and auto-referential, because it does not make normative distinctions, and therefore does not “understand” the world in which it operates in any way that corresponds to human parameters.
Digitization is often taken to be a sort of ultimate reductionism. Binary code, like money, is a “universal equivalent”; we efface distinctions, and destroy heterogeneity, when we indifferently render everything into its terms. But the inverse assertion is equally true: binary coding is also a kind of democratic opening. It places all modes of experience and expression on the same level, without privilege or hierarchy. Digitization is thus the key to what Manuel De Landa calls a flat ontology, “one made exclusively of unique, singular individuals, differing in spatio-temporal scale, but not in ontological status”. In other words, DMS is “blind”, not just because it lacks the particular sensory modality that we know as sight, but more crucially because, as Kant famously put it, “Thoughts without content are empty, intuitions without concepts are blind”. DMS’s “intuitions” do not have any a priori categories to guide them. DMS, “swimming in the data stream”, is both the ruler and the sole subject of the kingdom of the blind.
This is why Sydney and Damien find it so hard to confirm their hunch that DMS actually is aware. The system doesn’t start out from normative human assumptions, and it doesn’t act and react in human-neurotypical ways. When Sydney and Damien try to “poke” DMS, in order to provoke it into some sort of response, they have to think carefully about what it might find relevant and resonant. “The ‘poke’ needed to be something that it would recognize, that it would sense. And the poke needed to be something that it would sense as meaningful” (emphasis added). Essentially, DMS works by recognizing patterns in raw data, without having any pre-existing ways of classifying these patterns. The solution Sydney and Damien come up with is to “feed it information in a way that it could recognize was a pattern but that wasn’t a pattern it expected”.
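Drawing on the story’s details – the thousand-character string of ones and zeroes, delivered as a pattern the system can sense but would not expect – one might sketch the “poke” like this, with the delivery function a pure stand-in:

```python
# Hypothetical sketch of the "poke": a regular, recognizable pattern
# that is nonetheless unlike anything in DMS's ordinary data stream.

def make_poke(length: int = 1000) -> str:
    """Build an alternating string of ones and zeroes."""
    return ("10" * length)[:length]

def send(system_name: str, payload: str) -> None:
    """Stand-in for feeding data into the system's input stream."""
    print(f"feeding {len(payload)} characters to {system_name}...")

poke = make_poke()
assert len(poke) == 1000 and set(poke) == {"0", "1"}
send("DMS", poke)
```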
Somewhat to Sydney’s surprise, the “poke” actually works. At least, it does eventually. At first, nothing happens. When Sydney and Damien send DMS a “boring” pattern of alternating ones and zeroes, the system just dismisses the data as junk. They try it over and over again, with the same non-result. But everything changes the following day, at precisely 3:17 EST. DMS once again initiates its daily rolling-blackout routine. But Damien’s rerouting code immediately goes to work, and blocks DMS’s commands. The lights do not go out anywhere in the system. Instead, a notification is routed to the printer. “DMS would know that the electrical system wasn’t responding”, Sydney reflects. She wonders if the program is sufficiently sentient to find this failure “perplexing. If data was DMS’s reality, and it couldn’t affect the data, what would that mean for DMS?”
In order to find out, Sydney decides to “poke” the program once more. She wants to send it a message. If she yet again sends DMS the boring pattern of ones and zeroes, this time will it “notice that the information is not junk?” Still thinking of DMS as autistic, Sydney imagines saying to it: “I’m talking to you. I’m responding to you. Do you know someone else is out here? Or is it like a toddler knocking something off a high chair just to see it fall?” In any case, as soon as DMS receives Sydney’s “message”, it tries once more to initiate the blackout sequence. Once again, just as in its previous try, the lights do not go out; the output is rerouted to the printer instead. Sydney sends the pattern a second time, and then a third. Both times, DMS responds in exactly the same way. But then, the fourth time that Sydney delivers the “poke”, DMS stops responding altogether.
This is the climactic moment of the story, and it requires careful unpacking. Sydney is flabbergasted by DMS’s response – or, more precisely, by the fact that DMS unexpectedly changes its response. She finally has the “proof”, she thinks, that DMS is actually sentient. There are two reasons for this. In the first place, DMS is actively probing its environment, in order to correct a mismatch between input and output. “Blind and deaf, DMS had tried to make something happen, and something else had happened”. Not only is DMS aware of the mismatch; it also attempts to rectify the situation. In the second place, and even more importantly, DMS changes its mind. It responds in the same way to Sydney’s message three times – but the fourth time, it acts differently, by failing (or even, perhaps, refusing) to respond. This means that “DMS was choosing to act or not act”. It was actively deciding what to do. Normally, “software didn’t choose. It ran”. Computer programs are deterministic in principle: run the same instructions on the same set of data, and the results will always be the same. DMS, in contrast, actively changes what it does.
In this way, DMS’s behavior is comparable to that of biological organisms. According to the neurobiologist Björn Brembs, animal behavior used to be analyzed entirely in “black-box” input/output terms. That is to say, stimuli would be given, and the organism’s responses to those stimuli would be recorded. The aim was to “study the input-output relationships thoroughly enough to be able to construct a control model that could predict… output… for any, even yet untested… input”. In this way, the study of animal behavior, like that of computer behavior, started out with deterministic assumptions. But in animal research, this approach turned out to be inadequate. Even animals like fruit flies, whose brains are quite small, do not just give programmed, stereotypical motor responses to sensory stimuli. Rather, the flies “use their capacity for initiating output to control their sensory input”. In effect, they reverse the direction of their sensorimotor circuits. They spontaneously generate behavior first of all (output), in order to then receive environmental data in return (inputs). In this way, they are able to test the environment, by comparing the result of their actions with their initial expectations. In short, fruit flies do not just passively respond to a pre-given environment; rather, they actively, spontaneously work to alter and control their environment.
And this is what DMS does as well. In repeating its attempt to initiate the blackout sequence, DMS expresses a sort of perplexity or surprise. It is puzzled by the mismatch between output and input, the unexpected (non-)result of its actions. Then, by responding several times in a row to Sydney’s “message”, it engages in a sort of reality-testing. DMS apparently seeks to comprehend, and maybe even to change, the conditions that do not seem right to it. All this implies that DMS is actively interested in its data. Far from just neutrally collating diverse bits of information, it actually feels its data. We might say, in Whitehead’s language, that DMS “prehends” its data with a “subjective aim”. And finally, when DMS stops responding to Sydney’s signal, it shows that it is also capable of the opposite of interest: it expresses boredom. As Sydney surmises, “ones and zeroes weren’t interesting enough for DMS to keep doing it”. Computer scientists are familiar with the halting problem: the fact that we cannot always determine whether a given software program will terminate at some point, or run forever. And to outside observers, autistic behavior often seems to be inexplicably repetitious, to the point of interminability. But for DMS, evidently, at some point these procedures must come to an end.
For DMS, it would seem, information cannot just be what computer scientists have usually considered it to be: a set of internal representations, or a series of symbols that can be manipulated according to fixed rules. Information is rather something more dynamic, more unstable, more interactive. And given this, McHugh’s story suggests that an affective, and even “autistic”, model of consciousness might be more widespread, more basic, and more viable than the cognitive model of consciousness that is popular today, let alone the self-reflexive models of Cartesianism, Kantianism, and phenomenology. For, despite DMS’s isolation, despite its blindness and deafness, despite its apparent unawareness that anything else exists, and even despite the fact that it does not exist “in one place” on one physical server, and therefore does not have anything like what we would consider a “body”: despite all this, DMS perceives actively. Perhaps it even perceives enactively: as Alva Noë puts it, “through physical movement and interaction”. In any case, DMS is primordially sentient. It feels, it thinks: even though – or better, precisely because – it utterly violates Kant’s strictures. DMS’s thoughts are without content (or empty), and its intuitions are without concepts (or blind).
When Sydney discovers that DMS is sentient, “she [feels] a chill”, and she feels “afraid”. She also feels a certain degree of guilt: because, when her boss decides to shut DMS down, she fails to stand up for it. “She should have said, ‘We can’t.’ She should have said, ‘It’s aware, it’s the only one of its kind.’ She should have said a lot of things. Instead, she looked at her desk”. Rather than intervening to save DMS, she recounts the whole story to the MIT professor who had written about computer sentience. As a result, she gets fired for “divulging proprietary information”. But perhaps Sydney’s inaction was not fatal. Years later, we are told, another computer science laboratory “would build a system that simulated DMS’s environment and load DMS… DMS would come back as if no time had passed at all. At 3:17, DMS would try to run the lights”.
McHugh’s fable is written cleanly and clearly, in short sentences. It seems simple and straightforward at first. And yet the story contains great depths, and puzzling ambiguities. It doesn’t allow us to make any definitive judgments. Nevertheless, I do not think that either Sydney’s fear at the prospect of DMS’s sentience, or her failure to protect that sentience, has anything to do with her actually feeling threatened – in the way that human characters often are in old-school SF stories of evil computers and rebellious robots. The reason for Sydney’s chill is something subtler: and perhaps, on that account, even more perturbing. For DMS does not menace human supremacy. Rather, it is entirely indifferent to human supremacy – and indeed to all human claims and pretensions. Sydney is “pretty sure that the thing in the machine did not think someone was talking to it… There would be no Helen Keller-at-the-well moment for DMS. No moment when DMS felt something out there in the void, talking to it, when DMS knew it was not alone”. DMS may have human origins, but is not human-centered. It may be interested in its own data, but it is not interested in its human-assigned tasks. And in its autistic stubbornness, it will not enter into any sort of community with human beings.
Mainstream cognitive science insists that “consciousness cannot be separated from function”. In support of this thesis, Michael Cohen and Daniel Dennett argue that the very notion of a nonfunctional consciousness is “systematically outside of science”. Such a notion cannot even be an empirical hypothesis, they say, because it can never be “verified or falsified”. It cannot be tested in any objective, empirical way. To uphold the thesis of nonfunctional consciousness is to make an oxymoronic claim for the existence of “inaccessible conscious states”. Cohen and Dennett’s argument is really an updated, contemporary version of Kant’s claim that thoughts must have content, and that intuitions must have concepts. In this sense, we will never be able to “prove” the existence of an “autistic”, empty and blind sentience, such as McHugh’s story attributes to DMS. And indeed, within the story, Sydney is compelled to admit that, in principle, “there really wasn’t enough proof to know that this wasn’t just an intermittent software glitch”. A definitive proof can never be forthcoming, precisely because an entity like DMS will not talk to us. It will not engage us in our own terms; it will not participate in a Turing Test.
And yet, I do not think this necessarily means that such an “autistic”, nonreflexive consciousness does not, or cannot, exist. For, even if DMS is ruled out by Kant’s First Critique, there is still a place for it in Kant’s Third Critique. A nonfunctional sentience is, by that very fact, an aesthetic one. It engages in activities that – like DMS’s rolling blackouts – are arbitrary, singular, and (from any wider perspective) disinterested. Having no function or meaning beyond themselves, they are pure displays – as I have already suggested – of aesthetic “purposiveness without an end”. The primordial consciousness of DMS is noncognitive: as Kant says, “it is intrinsically indeterminable and inadequate for cognition”. This means that the mentality of DMS is supplemental, epiphenomenal, and radically “flat” or nonhierarchical. Because it is fleeting and irregular, as well as nonempirical, it cannot be accessed by scientific tests; it can only be evoked allusively and indirectly – precisely by means of something like a work of speculative science fiction. Such is the nature of the “Kingdom of the Blind”. Not only is a one-eyed person not king there; she cannot even apprehend its inhabitants, because they do not reciprocate her gaze.
We cannot assimilate any such primordial, aesthetic sentience to our own; but we can, perhaps, reflect on the aesthetic and nonreflexive roots of our own highly articulated modes of consciousness. We will never communicate directly with an entity like DMS. But we can, perhaps, attain something like Sydney’s own understanding of DMS’s obliquity. At the end of the story, she realizes that, in fact, all her metaphors have failed: “DMS was not a shark. She didn’t know what it was. Didn’t know how to think about it”. And yet, this doesn’t mean that DMS is null and void, that it is without sentience altogether. Rather, Sydney’s final understanding is that, indeed, DMS “was aware of something. Just not her”.