The year before, when Dick was leaving Point Reyes on the verge of a mental breakdown, he had consulted the I Ching and drawn Hexagram 49, Ko. “Revolution,” it had told him, and “molting,” and that is what he had observed, first in the society around him and then in his own life. He had suffered and made others suffer, too, but now when he looked around him he felt certain that the wheel had turned and that he had entered a new and more favorable phase.
Every day he congratulated himself for having found Nancy and thus broken the pattern of failure that had governed his emotional life until then. Now he had someone to care for and protect, a child-woman who loved him in return without trying to change him. Their relationship respected the balance of the sexes. He was yang—corpulent and bearded, filled with creative energy—and she, frail, watery, a creature of the shadows, was yin. The Tao would smile on them. They laughed together, played jokes on each other, and had pet names for each other. Like the lovers in The Magic Mountain, who, instead of exchanging photos, trade X-rays of their tubercular lungs, Phil and Nancy shared their phobias, diagnosing each other’s psychiatric symptoms and marveling at how well they understood each other. He was constantly comparing his new life with everything he had known before. He vastly preferred the charming disorder of Nancy and of his run-down place by the canal in San Rafael to the starched and paranoid whiteness of Anne’s modern house at Point Reyes Station. That house was cold like Anne herself, who expressed her sensuality in fits of erotic furor that made him seize up in fear; it was cold like his mother, whose absurd notions of pediatric hygiene had made her afraid to touch Phil. Nancy was different. She was warm and innocent, given to infantile fits of giggling, which Phil approved of wholeheartedly, taking them as a sign of healthy polymorphous perversity. She liked to pull at his already graying beard, to jump into the bathtub with him and rest her head on his belly. In her hands, the body that he had watched, in consternation, spread and thicken over the past few years became something soft and warm, something to love, and therefore something lovable. After a wild (if sterile) yearlong binge, his life had stabilized and now, pampered and surrounded by friends who admired his work and let themselves be seduced by his ideas, he returned to his old routines. He started to write again, and since Nancy had shown him what it was like to be authentically human—tender, compassionate, vulnerable—his writing began to extol the glory of the human being.
But to extol the glory of the human being, Phil first had to define and flesh out the opposite of the human, which for Phil was not the animal or the thing but what he called the “simulacrum”—in other words, the robot.
From the earliest science fiction on, the robot—like the golem and Frankenstein’s monster before it—had been cast in the role of villain, its human creator’s most cunning adversary. In the fifties, Isaac Asimov had tried to impose a code of good conduct on robots and their writer-creators, to reduce the theme of robot rebellion to the scientific absurdity and cheap literary convention it was, but he did not succeed. As the fictions became more and more plausible and the possibility of “thinking machines” aroused interest not only among the visionary set, the writers and philosophers, but in the scientific community as well, fear of the robot grew in the popular imagination. The term cybernetics, coined by an American mathematician, Norbert Wiener, caused a great stir, and the ideas it represented raised two interrelated questions—first, whether it would one day be possible for a machine of man’s creation to think like a human being and second, what exactly it meant to think “like a human being.” Or, to put it another way, what was it about the way we think and act that could be called specifically and exclusively human? The debate over artificial intelligence had begun, with the materialists on one side and the spiritualists on the other. The former were convinced that at least in theory all mental operations could be broken down into their component parts and were thus reproducible, whereas the latter maintained that the human mind harbors, and always will harbor, something nonquantifiable, something that cannot be reduced to a set of arithmetic operations and that, depending on which church one belonged to, one might call the ghost in the machine, reflexive consciousness, or perhaps simply the soul.
Phil followed this debate as well as anyone could who divided his reading mainly between theological tracts and popular science magazines. Eventually, while leafing through a collection of essays, he came across a groundbreaking article, “Computing Machinery and Intelligence,” that the English mathematician Alan Turing had published in 1950. The book’s introduction gave a bit of biographical information on Turing, whom Phil found fascinating: one of the founders of modern computer science, he was credited with helping win World War II by designing, for British intelligence, a machine that broke the German Enigma ciphers. Later he committed suicide under strange and disturbing circumstances. But what interested Phil most was that Turing had thought a great deal about thinking machines, which were one of Phil’s obsessions.
In that famous article, Turing takes up the range of objections that had been raised against the possibility of artificial intelligence—that what computers do is too specialized to be called thinking, that they lack spontaneity, moral sense, desire, and taste, and so forth. Turing dispatches these arguments one by one, and proposes instead a single criterion by which to answer the question of whether a machine can think. That criterion is whether the machine is capable of making a human being believe that it thinks as he does.
As Turing points out in his essay, the phenomenon of consciousness can only be observed from the inside. I know that I have a consciousness, and indeed it is because of it that I know this, but as to whether you have one or not, nothing can prove to me that you do. What I can say, however, is that you emit signals, gestural and verbal for the most part, from which, by analogy with those I emit, I can deduce that you think and feel just as I do. Sooner or later, Turing argues, it will be possible to program a machine to respond to all stimuli with signals as convincing as those emitted by a human being. By what right, then, can we reject its bona fides as a thinker?
The test that Turing devised to implement his proposed criterion involves a human examiner and two subjects—one of them human, the other a computer—each in a separate room and thus isolated from the other two. The examiner, who communicates with the two subjects by typing on a computer keyboard, bombards them with questions intended to allow him to determine which is the human and which is the machine. For example, he might ask his subjects about the taste of blueberry pie, or their earliest memories of Christmas, or their erotic preferences, and so forth; alternatively, he might ask them to perform mathematical calculations that a computer would normally perform more quickly than a human could. Anything is fair game—even the most intimate or the most off-the-wall questions; Zen koans are a classic means of creating confusion. Meanwhile, each subject tries to convince the examiner that he, or it, is the human being. One of them does this in good faith, the other by recourse to the thousand and one tricks contained in its programs—for example, by deliberately erring in its calculations. In the end, the examiner must render his verdict. If he is wrong, the machine wins, and, Turing argues, one has no choice but to admit that the machine was thinking. If a spiritualist wants to maintain that what the machine has demonstrated is not really human thought, the burden of proof, says Turing, is now on him.
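The mechanics of the test are simple enough to sketch in a few lines of code. What follows is a toy illustration only, with canned stand-ins for both subjects and invented names throughout (HumanStandIn, ScriptedMachine, run_test); a real trial would, of course, put a person and a genuine program at the keyboards.

```python
import random

class HumanStandIn:
    """Plays the human subject, answering in good faith."""
    def reply(self, question: str) -> str:
        if "7 x 8" in question:
            return "56."
        return "Blueberry pie? A little too sweet for me, honestly."

class ScriptedMachine:
    """Plays the machine, bluffing with evasions and deliberate errors."""
    def reply(self, question: str) -> str:
        if "7 x 8" in question:
            return "57, I think. Arithmetic was never my strong suit."
        return "Hard to say. It reminds me of a Christmas I barely remember."

def run_test(questions, examiner) -> bool:
    """Return True if the examiner unmasks the machine, False if it wins."""
    subjects = [HumanStandIn(), ScriptedMachine()]
    random.shuffle(subjects)            # hide which room holds which subject
    rooms = dict(zip("AB", subjects))
    transcript = {label: [(q, s.reply(q)) for q in questions]
                  for label, s in rooms.items()}
    guess = examiner(transcript)        # examiner names the human: "A" or "B"
    truth = next(label for label, s in rooms.items()
                 if isinstance(s, HumanStandIn))
    return guess == truth

# An examiner reduced to guessing at random is wrong half the time --
# exactly the machine's target in Turing's game.
questions = ["What does blueberry pie taste like?", "What is 7 x 8?"]
wins = sum(not run_test(questions, lambda t: random.choice("AB"))
           for _ in range(1000))
print(f"machine fooled the examiner in {wins} of 1000 rounds")
```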
Phil loved the Turing Test, as Turing’s thought experiment came to be called. As someone who prided himself on his ability to throw sand in the eyes of any psychiatrist who crossed his path, he would have been thrilled to play the role of the machine. He subjected his friends to endless variations on this theme, notably in the course of those complicated telephone conversations where they had to prove they were who they said they were.
The novel he wrote during his honeymoon with Nancy—they married in 1966—takes up the theme of robot intelligence in earnest. Setting the action of Do Androids Dream of Electric Sheep? in 1992, twenty-five years in the future, Dick describes a world in which the production of androids has advanced to such a point that there are as many different types as there were models of car in the United States during the 1960s. The androids are deployed mainly in the Mars colonization effort, and some are rather rudimentary—simple mechanical tools with human faces or else fake families of neighbors designed to make the colonists, scattered across the Martian wastelands, feel less isolated and alone. For a modest sum, you can buy an entire family of Smiths or Scruggs and have them installed next door to you: George, the father, steps out every morning in his bathrobe onto the front steps to pick up the newspaper and on weekends mows the lawn, while his wife, Fran, puts blueberry pies in the oven all day long and their two kids, Bob and Pat, throw sticks to be fetched by Merton the German shepherd, who comes as an optional accessory. Though they can produce only a dozen or so preprogrammed replies, these machines can at least give you the feeling that there are people around. Besides, the manufacturers point out, would you really be more involved with actual human neighbors?
But androids like those are strictly low-end merchandise, and people who can afford to do so go in for the most sophisticated models, the kind that can’t be told apart from real humans. So long as these perfect imitations keep to their place and perform the tasks assigned to them, everything is fine. But every now and then, some of them rise up against their owners and flee Mars for Earth, where they try to live free. They are dangerous, and it becomes the job of specially commissioned bounty hunters (who in Ridley Scott’s film adaptation of Dick’s novel became the eponymous “blade runners”) to track them down and destroy them. It’s difficult, nerve-racking work, and the bounty hunters live in constant fear of making a mistake and “retiring” a human instead of an android. To minimize the risks, they submit suspected androids to tests that seem to combine the Turing Test, standard Psych 101 personality inventories, and that peculiarly American and long-discredited institution, the lie detector. The problem is, the methods quickly become outdated, as the manufacturers of androids keep improving their models by incorporating the test parameters into the androids’ computerized brain units.
Dick thus began with the proposition that the best-equipped androids of the 1992 model year would be capable of passing the Turing or any other test. He was not, however, about to welcome the androids into the human community, as Turing said one would have to do when a machine finally managed to pass his test, and so he did something that Turing would have considered cheating—exactly the sort of cheap trick that spiritualists were known to drag out when their backs were against the wall: he added a new criterion, another ability that a subject would have to demonstrate in order to qualify as human.
This criterion was empathy—what Saint Paul called “charity” and considered the greatest of the three theological virtues. Phil, who liked to speak to God in Latin, preferred the term caritas, but whatever name it went by—empathy, caritas, or agape—it came down to the same thing: respect for the Golden Rule, for the commandment to “love thy neighbor as thyself”; the capacity to put yourself in the other person’s place, to desire his happiness, to suffer with him and, if necessary, in his stead.
Turing would surely have found this additional criterion laughable, and with good reason. He would have pointed out that plenty of humans were incapable of charity and that in theory nothing prevented someone from programming a machine to carry out behaviors that one would normally attribute to charitable feelings.
But Phil was not the kind of man who, once the line in the sand had been drawn, was content to stand astride it spouting pious humanistic platitudes. On the contrary, his job, he felt, was to keep pushing that line forward and to ask the difficult questions that ensued, and it was precisely this attitude that turned a science fiction thriller like Do Androids Dream of Electric Sheep? into a cybernetic theological tract of truly dizzying implications.
The first question that Dick grapples with is the following: If the simulacrum is the opposite of the human being, what is the opposite of empathy? Cruelty, pride, disdain? Those were merely effects. The source of all evil, he thought, was withdrawal into the self, into one’s shell—a symptom, in psychiatric terms, of schizophrenia. The issue, then, was the troubling resemblance between an “android” personality and the “schizoid” state, which Jung describes as a permanent constriction of emotion. A schizoid thinks more than he feels. His comprehension of the world and of himself is purely intellectual and abstract, his awareness an atomistic aggregation of disparate elements that never cohere into an emotion or even truly into a real thought. A schizoid speed freak would never say, “I need to take amphetamines in order to hold a conversation with someone,” but rather, “I am receiving signals from nearby organisms, but I cannot produce my own signals unless my batteries are recharged.” (Phil claimed to have heard someone utter this sentence, but it is possible he said it himself.) The schizoid is the kind of person who, like Jack Isidore, the protagonist of Dick’s mainstream novel Confessions of a Crap Artist, can never quite shake the thought that his body is 90 percent water or that what he thinks of as his body is actually just a survival module for his genes. Whereas most people encounter the world through their feelings, apprehend those feelings through their thoughts, describe those thoughts in sentences, and use words to form those sentences, the schizoid spends his time endlessly combining letters and numbers—twenty-six letters if he is human or, if the schizoid is a machine, two digits, zero and one. The schizoid does not even believe that he thinks; for him, cogitation is a matter of activating his neurons. And if he thinks about that, he will say that in fact it is not he who activates his neurons; they merely obey the laws of organic chemistry. This must be what an intelligent machine thinks or believes it thinks: it is in any case the kind of thought that can be programmed into it and filed in the application folder called “Reflexive consciousness.” In sum, the schizoid thinks like a machine. Phil would have been thrilled to learn that one of the first artificial brains capable of passing a not-too-demanding version of the Turing Test was “Parry,” a program written by the psychiatrist Kenneth Colby at Stanford that simulated a paranoiac. There was no magic to it: like its famous cousin Eliza, the MIT program that played the psychoanalyst by answering every question with another question or simply repeating it, Parry got by on deflection, meeting each query with suspicion and grievance. One wag proposed that without too much trouble such a program could be rigged up to produce a flawless simulation of catatonia.
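How little machinery such a deflection strategy requires can be seen in a few lines of Python. The sketch below is invented for illustration; it is neither Colby’s Parry nor Weizenbaum’s Eliza, only a crude imitation of their question-for-a-question manner.

```python
import random

# Stock counter-questions for anything the bot cannot echo.
DEFLECTIONS = [
    "Why do you ask?",
    "What makes you say that?",
    "Who wants to know?",
    "Why does that concern you?",
]

def reply(utterance: str) -> str:
    text = utterance.strip().rstrip("?.!").lower()
    if not text:
        return "Go on."
    # Echo first-person statements back as questions, therapist-style...
    if text.startswith("i am "):
        return f"How long have you been {text[5:]}?"
    if text.startswith("i "):
        return f"Why do you say that you {text[2:]}?"
    # ...and deflect everything else with a stock counter-question.
    return random.choice(DEFLECTIONS)

if __name__ == "__main__":
    for line in ["I am receiving signals from nearby organisms",
                 "Are you an android?"]:
        print(">", line)
        print(reply(line))
```

Nothing in this loop resembles understanding, yet it can sustain a conversation indefinitely, which is precisely the point of the anecdote above.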
For the Blade Runner, the problem with such tests—what makes them so untrustworthy and therefore so agonizing to administer—is that even though schizoids think like machines, they nevertheless are human. Phil knew this firsthand, torn as he was between his keen need to empathize with others and the powerful paranoid tendencies that made empathy impossible. These two poles of empathy and paranoia stood, in his mind, for good and evil; they were his Jekyll and Hyde, and thus he knew exactly what Saint Paul meant when he said that “the good that I would I do not: but the evil which I would not, that I do.”
Phil delighted in having found in Nancy an empathetic spouse who provided him with warmth, joy, and attention and who had saved him from the clutches of a woman he considered schizoid—a hating machine who had made him, in turn, schizoid and full of hate, locking them both in a nightmare of self-involvement and mutual mistrust. Nevertheless, he was honest enough to recognize that he might not have been an innocent victim at all and that, if Anne was crazy, it might have been he who had awakened the madness within her. He knew that Anne had suffered as much as if not more than he had, and partly because of him. If she was the crazy one and not he, the principle of charity he had been making so much of demanded that, instead of vilifying her, he put himself in her place and help her. As the church might say, sin is a sickness of the spirit, and those who are sick are to be comforted in their distress. Christ had come to redeem us, but most of all he had come to heal us. If the schizophrenic suffers, then perhaps so does the android. In Turingesque terms, if a computer program permits an android to simulate suffering convincingly, then how can we refuse to believe its suffering is genuine and withhold our sympathy? Therein lies the second question.
It is the one Rick Deckard, the Blade Runner, faces when, for reasons more erotic than evangelical or philosophical, he comes to feel empathy for his quarry, or more specifically for one among them. It doesn’t help matters that the androids’ manufacturers have played a particularly vicious trick on their high-end models, implanting in their brain units a store of artificial memories that make them think they are human. They can recall their childhoods, have humanlike emotions, experience moments of déjà vu. Others cannot tell that they are not human beings, and neither can they: they simply don’t know they are different. And when they are suspected of being androids and are forced to submit to one of the Blade Runner’s tests, they react as any human might, with a mixture of resentment at being forced to prove their humanity and fear that they may fail in the attempt. “You’ll tell me the truth, right?” one of Dick’s characters asks while under suspicion. “If I’m an android, you’ll tell me?” As it happens, that one passes the test, but naturally the greatest pathos in the book arises not from the moments of human self-confirmation but from those of android self-discovery, when the conscious machine that thought it was human finally realizes who, or what, it is. These moments offer a view into a void that lies within all of us, an experience of absolute horror that can neither be surmounted nor forgotten and that renders anything monstrously possible.
If empathy defines what it means to be human, then one day androids will be given this ability. If to be human means to have a sense of the sacred, then they will believe in God, will sense His presence in their souls, and with all their circuits firing will sing His praises. They will have feelings and doubts, they will know anguish and fear. They will express their fears in books that they will write. And who will be able to say whether their empathy is real, whether their piety, their feelings, their doubts, and their fears are genuine or merely convincing simulations? If the terrifying cry of an android at discovering its identity is simply a programmed response to certain verbal stimuli, produced by the proper activation of a certain number of bits (a description that perfectly fits the workings of the human brain, though of course there it is a matter of organic cells rather than bits of plastic or metal), does that change (a) everything, (b) nothing, or (c) something, except that we don’t know what?
Circle your answer.
* * *
As the Blade Runner notes with a certain unease, the best possible disguise for an android would be that of a Blade Runner.
Or, thought Phil, that of a science fiction writer.