10
IRON MAN IN A CHINESE ROOM: DOES LIVING ARMOR THINK?
Ron Novy
And yet what do I see from my window but hats and coats which may cover automata? Yet I judge these to be men.
—René Descartes
Imagine that you have been given an afternoon to examine Iron Man. As you work your way up from the jet thrusters in his boots, Iron Man peppers his small talk with descriptions of each bit of equipment, its composition, function, and so forth. As you linger on the strange combination of fly-wheels and silicon chips within the knee joint, you’re told that this particular configuration was developed after a devastating tackle by Crimson Dynamo left both the armor and its wearer nearly crippled. Reaching Iron Man’s helmet, you lift the visor to see not Tony Stark’s mustachioed smirk, but simply more circuit boards and shiny brass cogs. The voice modulator continues its running description: something about an early encounter with Mister Doll that led to a redesign of the louvers in the helmet’s cooling system. You miss nearly all of the details, focused as you are on the empty space within the suit, the hollow where you had expected to see the thinking, breathing human with whom you thought you’d been speaking. Seeing your confusion, Iron Man says, “What is—so—hard to believe? . . . You knew that I was always a possibility.”1
In 2000’s “The Mask in the Iron Man” story arc, Iron Man’s armor—like Victor Frankenstein’s monster—is imbued with consciousness by a great bolt of lightning.2 The most advanced cybernetic equipment of the Marvel Universe became sentient: a “Living Armor” that feels pain, learns new things, and acts on abstract principle. Could something like the Living Armor—a thinking, autonomous machine—exist in our world? And if so, how would we know? At various times, Stark has programmed the suit so that he and his armored bodyguard could appear together in public. Without opening the crunchy metal shell and finding Tony in the soft chewy center, how could we tell a preprogrammed but empty Iron Man suit from Stark in armor, or either of these from the Living Armor? In this chapter, we’ll look for the answer in, of all places, a Chinese Room!
Of Blenders, Toasters, and Living Armor
The plot of “The Mask in the Iron Man” is straightforward: While fighting villain-for-hire Whiplash, Iron Man is led into a trap in which his bondage-enthused adversary calls lightning down upon him. This literal and figurative shock is a pain-filled spark of life, which creates the Living Armor. Stark initially welcomes the enhanced abilities that come with the use of a sentient tool, but as the Living Armor becomes more violent and independent (eventually killing Whiplash in a vengeful rage), Stark realizes that this autonomous and deadly superweapon must be stopped. When Stark refuses the Living Armor’s demand that they merge into “a perfect union of man and machine,” he is taken to a remote island of the Bikini Atoll to be tortured into accepting that they become “a perfect Iron Man.” In the ensuing struggle, Stark suffers a heart attack. And in what seems to be a moment of remorse, or possibly love, the Living Armor tears out its own mechanical heart and places it in the chest of its creator, saving Tony Stark and ending its own brief existence.3
As it happens, the philosopher and mathematician Gottfried Leibniz (1646-1716) imagined himself confronted by an apparently intelligent machine. Leibniz asked that we visualize this machine scaled up in size “until one was able to enter into its interior, as he would into a mill.” As we tour every nook and explore every cranny of this mill-size mechanical mind, we “would find only pieces working upon one another, but never would [we] find anything to explain perception. It is accordingly . . . [not] in a machine, that the perception is to be sought.”4 Following Leibniz, the fact that we would find no element in this great clockwork that could not be fully explained by the same mechanical rules that govern the movements of washing machines and locomotives demonstrates that, despite the outward appearance of having thoughts, perceptions, and such, there is nothing that gives us reason to believe that the machine actually has any of these things. Contrary to appearances, Leibniz would find that the Stark-free armor lacks the capacity to think; a mind’s thinking—unlike a blender’s blending or a toaster’s toasting—is simply not the sort of thing that is subject to the rules of physics.
Functionalism, or If It Walks Like a Robot . . .
“Can a machine think?” is the sort of question you cannot answer in a particularly satisfying manner without having already worked out some other pretty big philosophical questions. What do you mean by “thinking”? How do you define “machine”? Where do you stand on the question of the relationship of mind to body? These all seem to need some answers prior to wading into a discussion about artificial intelligence. So, in an effort to sidestep some of the larger items on this carousel of metaphysical baggage, the mathematician Alan Turing developed a test to determine whether a machine can function as if it were thinking: that is, does the machine function as well as a human in the relevant ways? Basically, the question “Can a machine think?” is put aside in favor of “Can a machine pass the Turing Test?”5
The Turing Test itself consists of a number of sessions in which a person—say, personal assistant extraordinaire Pepper Potts—is seated at a computer terminal having a text-based conversation with an unknown partner. Pepper is told that her conversation partner is either a human or a computer programmed to hold a conversation. After having some time to interact with her conversation partner, Pepper is asked with whom she was interacting. If she is unable to correctly identify her partner as either human or computer at a rate better than chance (i.e., more than half of the time), we say that the computer has passed the test. That is, if Pepper can’t tell the difference between the two, there is good reason for her to doubt that there is a difference.
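To make the pass-or-fail criterion concrete, here is a minimal Python sketch (invented for illustration, not part of Turing’s paper) of how a batch of Pepper-style sessions might be scored: the machine passes only if the judge’s identification rate is no better than chance.

```python
import random

def score_turing_test(sessions, judge):
    """Score a batch of hypothetical Turing Test sessions.

    sessions: a list of true labels, each "human" or "computer".
    judge: a function that, given a session's true label (standing in
           for the conversation itself), returns the judge's guess.
    The machine passes if the judge identifies partners correctly
    no more often than chance would allow.
    """
    correct = sum(1 for truth in sessions if judge(truth) == truth)
    rate = correct / len(sessions)
    return rate, rate <= 0.5  # pass: identification no better than chance

# A judge who genuinely cannot tell the difference is reduced to guessing.
def guessing_judge(_truth):
    return random.choice(["human", "computer"])

sessions = ["human", "computer"] * 50  # 100 hypothetical sessions
rate, passed = score_turing_test(sessions, guessing_judge)
print(f"Identified correctly {rate:.0%} of the time; machine passes: {passed}")
```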
The Turing Test assumes that functionalism is correct in accounting for minds and their thoughts. Functionalism defines a thing in terms of what it does, rather than what it is made of. So, for example, anything that tells time is a clock, whether it’s made of metal, plastic, wood, or stone. Imagine that Stark’s mentor Ho Yinsen had been kidnapped by the communist warlord Wong-Chu.6 Were a rescue to be mounted, it would look quite different depending on just who was making the attempt: S.H.I.E.L.D. might send a stealthy raiding party led by the Black Widow; the Scarlet Witch might bend reality with a hex to teleport Yinsen out of danger; while Thor might simply march verily through the front door swinging his hammer. With respect to Ho Yinsen’s rescue, any successful strategy would do—that is, it would be functionally equivalent to any other. Put another way, the rescue could be realized in multiple ways, and if they all get the same result, then each rescuer fulfills the same role or function.
This idea, that the role something plays is given priority over its appearance or physical makeup, has its place not only in comparing daring rescues but in the philosophy of mind as well. So when a functionalist is discussing “thinking” or “minds,” what matters is that the function or the role is instantiated somehow—that it exists in some form, not whether it occurs within a cranium among the organic goo, a shiny metal box packed with wires and transistors, or a Rube Goldberg construction of bowling balls, piano wire, and wedges of cheese.
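Software offers a loose analogy for this multiple realizability. The sketch below is a hypothetical Python illustration built on the chapter’s clock example: three very different realizations (a pocket watch, a sundial, and an invented Stark-tech chronometer) all count as clocks because each plays the time-telling role, and the caller never asks what any of them is made of.

```python
from typing import Protocol

class Clock(Protocol):
    """Functional definition: anything that tells time counts as a clock."""
    def tell_time(self) -> str: ...

class BrassPocketWatch:
    def tell_time(self) -> str:
        return "12:00"            # gears and springs

class Sundial:
    def tell_time(self) -> str:
        return "about noon"       # a stone slab and a shadow

class ArcReactorChronometer:
    def tell_time(self) -> str:
        return "12:00:00.000"     # invented Stark-tech realization

def what_time_is_it(clock: Clock) -> str:
    # The caller cares only about the role the object plays,
    # not what it is made of: multiple realizability in code.
    return clock.tell_time()

for clock in (BrassPocketWatch(), Sundial(), ArcReactorChronometer()):
    print(type(clock).__name__, "->", what_time_is_it(clock))
```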
Look Out, Tony—Intelligent Earthworms!
This idea of functional equivalence has underwritten much of the discussion of artificial intelligence since Turing, but functionalism has had a much longer history in our effort to understand intelligence. Sixty-three years after Mary Shelley’s story of a sentient golem awakened with a bolt of lightning, Charles Darwin published his last and probably least-read major work, The Formation of Vegetable Mould, through the Action of Worms with Observations on Their Habits, a study of the creation of topsoil (which he calls “vegetable mould”) through the digestive action of earthworms.7 His extended study led him to conclude that “castings” (Victorian decorum prevented his identifying the material as “worm poo”) added one to three inches of topsoil per decade. More interesting, though, is Darwin’s claim “that worms, although standing low on the scale of organization, possess some degree of intelligence.”8
Darwin had observed that at night, earthworms plugged their burrows with leaves, and that these plugs were consistently put into place by being pulled into the burrow by the narrow ends. Darwin considered four different explanations for the phenomenon: (1) chance; (2) a process of trial and error; (3) a specialized earthworm instinct; and (4) intelligent problem solving by the earthworm. The experiments themselves consisted of observing the earthworms’ behavior when presented both with familiar leaves and with a variety of materials previously unknown to the worms (exotic leaves and triangles of stiffened paper).
Darwin dismissed the first two options, given the consistency of the results and the lack of supporting observations, respectively. From this, Darwin inferred that the earthworm behavior is a response to sensory input regarding the shapes of the objects—either option 3 or option 4. Darwin found the claim that the earthworm has a specialized “leaf-grasping” instinct to be insufficient to explain the ability to make burrow plugs of previously unfamiliar materials. Having eliminated the competing hypotheses, Darwin concluded that
If worms have the power of acquiring some notion, however rude, of the shape of an object and of their burrows, as seems to be the case, they deserve to be called intelligent; for they then act in nearly the same manner as would a man under similar circumstances.9
The claim that earthworms solve the puzzle in more or less the same way that you or I would may strike even Count Nefaria and the Ani-Men as strange, for it implies that the same measuring stick for intelligence can be applied to all comers—human, schnauzer, computer, or Martian—with the difference between them being one of degree, rather than of kind. Functionalism, and the Turing Test specifically, is concerned with challenging the presuppositions that underlie this “strangeness.”
As If!
Darwin’s experiment takes the earthworm to be what the contemporary philosopher Daniel Dennett called an intentional system, “a system whose behavior can be (at least sometimes) explained and predicted by relying on ascriptions to the system of beliefs and desires.”10 In other words, we can often explain a thing’s behavior only by interpreting its actions as if it were motivated by and for something.11 When your dog scratches at the door, we take it to mean that she wants to go outside. As often as not, we even assign reasons as to why the dog wants to go out: to relieve herself, to investigate a strange smell, or to play in the sun with the neighborhood children (who may very well be the source of the smell). Although this system for predicting and explaining behavior isn’t foolproof—your dog might be stalking a bug or just be plain nuts—without taking an “intentional stance” toward the behavior, you get no explanation at all (and risk a soiled rug).
In his article “Intentional Systems,” Dennett sketched a way to evaluate the “mental capacity” of ever more “intelligent” computers, such as IBM’s chess-playing Deep Blue or David Cope’s music-composing Experiments in Musical Intelligence.12 Dennett asked that instead of trying “to decide whether a machine can really think, or be conscious, or morally responsible,” we should consider “viewing the computer as an intentional system. One predicts behavior in such a case by ascribing to the system the possession of certain information and by supposing it to be directed by certain goals, and then by working out the most reasonable or appropriate action on the basis of these ascriptions and suppositions.”13 It seems unavoidable that we adopt this intentional stance whenever we sit across the chessboard from any opponent. Whether playing against fellow Avenger Steve Rogers or Jocasta, the computer that runs Stark House, it is necessary for Stark to credit his opponent with possession of certain background knowledge regarding the rules of chess, with having the goal of winning the match, and with having a strategy for reaching that goal.14 Stark would neither be able to predict an opponent’s next move nor be able to interpret past behaviors if he didn’t assume he (or “she”) was trying to win the game. Similarly, the earthworm’s burrow-plugging behavior can be explained and predicted by ascribing to it beliefs about the prevailing conditions and a desire to seal its burrow. And, in their final confrontation, both Tony Stark and the Living Armor could not avoid taking each other as intentional—or thinking—beings.
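As a toy illustration of Dennett’s recipe (ascribe some information and a goal, then work out the most reasonable action), here is a short Python sketch that predicts the next step of a hypothetical agent on a grid. The grid, the agent, and the goal are all invented for this example; the point is only that the prediction comes from the ascriptions, not from peeking inside the agent.

```python
def predict_next_move(position, goal, walls=frozenset()):
    """Predict an agent's next step by taking the intentional stance:
    ascribe to it knowledge of the board and the goal of reaching `goal`,
    then work out the most reasonable action under those ascriptions."""
    x, y = position
    candidates = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    legal = [c for c in candidates if c not in walls]
    # The "most reasonable" action: the legal step that gets closest to the goal.
    return min(legal, key=lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1]))

# If we ascribe to an earthworm-like agent the goal of reaching its burrow
# at (0, 0), we predict it will head that way rather than wander.
print(predict_next_move(position=(3, 2), goal=(0, 0)))  # -> (2, 2)
```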
Crediting others, be they animals or artifacts, with intentionality is not to say that they in fact have intentions, but rather that they behave as if they have them. Ascribing intention to an earthworm, another human, Deep Blue, or the Living Armor allows us to predict future actions and interpret those in the past. Just as important, because we don’t have direct access to the mental states of earthworms, robots, or members of our own family, behavior is—in a sense—all we have to go on.15
The Attack of the Chinese Room
Let’s suppose that the contemporary philosopher John Searle is secretly one of the Mandarin’s many sleeper agents, and he has captured Tony Stark’s chauffeur Happy Hogan. Furthermore, as dastardly henchmen are prone to do, Searle has placed Happy in his latest diabolical device, the “Chinese Room.”16 Happy—who speaks only English and a few cheesy French pick-up lines learned from Stark—has been confined to a small room and provided with a thick book of tables with which to look up Chinese symbols. From one slot in the door he receives cards containing Chinese symbols that represent questions, and through another slot he returns other cards that contain different symbols as indicated in his large book. To an outside observer who understands Chinese (such as the Mandarin), the interaction appears to be a conversation—a question goes in and an answer comes out—and he may conclude that Happy knows Chinese. Nonetheless—and here’s the diabolical part of Searle’s device—Happy will not be released from the room until he really understands Chinese. If Searle is right, this means that Happy will never get out, for although he may be a terrific manipulator of symbols, he doesn’t understand what any of those symbols really mean.
This device itself is modeled on an advanced computer that behaves as if it understands Chinese—that is, Chinese symbols are fed into the computer as input, a program “looks up” the symbols, and other Chinese symbols are produced as output. For every question in Chinese that is fed into the computer, an appropriate answer in Chinese is spat out. The computer is so finely tuned at this process that the Mandarin himself would be convinced that he was interacting with a native speaker of Chinese. But does the computer really understand the Chinese language? No more than Happy Hogan does, according to Searle.
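The program being imagined here is nothing more exotic than a lookup table. The Python sketch below is a deliberately tiny stand-in whose rule book holds only two invented question-and-answer pairs, but it captures the point of the thought experiment: every step matches the shapes of symbols, and no step requires understanding what any symbol means.

```python
# A minimal sketch of the Chinese Room as a program: a lookup table maps
# input symbols to output symbols. The entries below are invented
# placeholders; the procedure never consults the meaning of any symbol,
# only its shape.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",   # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(card: str) -> str:
    """Return whatever the rule book pairs with the incoming card.
    Happy (or the computer) matches shapes; no understanding is required."""
    return RULE_BOOK.get(card, "请再说一遍。")  # default: "Please say that again."

print(chinese_room("你好吗？"))
```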
The Chinese Room thought experiment is intended to show that even when something can pass the Turing Test, this does not mean that it understands. Merely tricking an observer into believing that Happy or the computer has true understanding is quite a different matter from actual understanding.17 As with Leibniz’s mill-size mechanical mind, proponents of the Chinese Room argument appear to base their claims on the intuition that a mechanical device is simply not the sort of thing that can think, despite strong appearances to the contrary.
But perhaps we should not be so quick to accept Searle’s hard and fast distinction between merely appearing to understand Chinese and really being able to understand Chinese. How do we judge understanding except by the appearance of understanding? If we ask how we know that the Mandarin is really able to understand Chinese, we presumably would cite his behaviors such as negotiating with Yellow Claw in Cantonese or ordering food at a Szechuan restaurant without resorting to pointing to the “number four” on the menu. If we doubt that the computer or that Happy understands Chinese, despite behavior indicating otherwise, it seems we should also doubt whether the Mandarin really understands Chinese, despite his behavior indicating otherwise.
Although the Mandarin’s behavior seems to demonstrate that he understands Chinese, if we look into the brain processes connecting his input to his output, we would find only physical stuff: gray goop and firing neurons. As with Leibniz’s tour of the “mental mill,” we shouldn’t expect to find a few sacks of “understanding” tucked away behind some random dendrites. Because a computer in principle can pass a behavioral test as well as a native speaker can, it would seem that we lack good reason to say that one understands Chinese while the other doesn’t. If it is unreasonable to attribute understanding on the basis of the behavior exhibited by a computer, then it seems equally unreasonable to attribute understanding to humans on the same basis.
Simulating Stark
Opponents of the notion that machines can have minds might claim that understanding is merely “simulated” by the Living Armor; it’s simply a “neat parlor trick.” These folks are right to point out that we don’t confuse a computer simulation of an asteroid strike with an actual asteroid strike or consider people mass murderers for shooting characters in a video game. Sometimes, though, it isn’t quite so easy to lay out the distinction between a thing and its simulation: Does one walk or merely simulate walking with a prosthetic leg? Are manufactured objects such as titanium hips or acrylic dentures simulations or duplications of their user’s original parts? The functionalists avoid these questions, because what they take to matter is a thing’s performance or role, not its origin or physical composition.
Against the functionalists, Searle apparently believes that minds can exist only in a limited number of biological systems, which are the product of a long evolutionary process, while a computer is “just” an artifact that simulates the thinking properties of its biological creators. So, for Searle, a mechanical system can simulate intelligence and appear indistinguishable from human behavior, all without understanding a single thing.
This, of course, is a problem. Biological evolution relies on selective forces that operate wholly on the basis of behavior. If there is no recognizable difference between the behavior of a system that understands (such as Tony Stark) and that of one that does not (such as the Living Armor), these selective forces cannot select for “real,” rather than “merely simulated,” understanding. If this is so, then regardless of whether they truly understand or merely appear to understand, minds are no more or less well adapted than they would be otherwise. And so we are left with the possibility that evolution could as easily have selected for the simulation—which would place us all in the Chinese Room.
My Aching Shellhead
Leibniz and Searle share an intuition about the systems they consider in their respective thought experiments. In both cases, they consider a complex physical system composed of relatively simple operations—such as the Living Armor in “The Mask in the Iron Man”—and note that it is impossible to see how understanding or consciousness could result. This simple observation does us the service of highlighting the serious problems we face in understanding meaning and minds. Nonetheless, it is difficult to imagine how you or I—or Tony Stark—could meet the same criteria demanded of Leibniz’s mental mill, hapless Happy in the Chinese Room, or the Living Armor. For as much as we might wish to deny it, we, too, are each a sort of machine. And if you don’t think so, try to prove it. (Good luck!)18
NOTES
The chapter epigraph is from René Descartes, Second Meditation, in The Philosophical Writings of Descartes, vol. II, trans. John Cottingham et al. (New York: Cambridge University Press, 1984), p. 21.
1 Iron Man, vol. 3, #28 (May 2000).
2 Ibid., vol. 3, #26-30 (2000).
3 See the chapter in this volume by Stephanie and Brett Patterson (“‘I Have a Good Life’: Iron Man and the Avenger School of Virtue”) for more on “The Mask in the Iron Man.”
4 Gottfried Leibniz, Monadology, in The Rationalists, trans. George Montgomery (Garden City, NY: Doubleday, 1974), p. 457.
5 Alan Turing, “Computing Machinery and Intelligence,” Mind 59 (1950): 433-460.
6 Or, if you prefer to follow the version given in the Iron Man film (2008), imagine that Yinsen is captured by the Ten Rings, a Talibanesque organization in Afghanistan.
7 Well, I’m sure you’ve read it, but do you know anyone else who has?
8 Charles Darwin, The Formation of Vegetable Mould, through the Action of Worms with Observations on Their Habits (New York: Appleton & Co., 1907), p. 98.
10 Daniel C. Dennett, “Intentional Systems,” Journal of Philosophy 68 (1971): 87.
11 You’ve probably noticed that Dennett is using the word “intention” in a specialized sense, rather than in the ordinary meaning of a “plan,” but his exact meaning isn’t important for our purposes. For the brave among you, see Pierre Jacob’s “Intentionality” in the Stanford Encyclopedia of Philosophy, http://plato.stanford.edu/entries/intentionality.
12 While the story of Deep Blue’s development and eventual victory over world chess champion Garry Kasparov is well known, the Experiments in Musical Intelligence (EMI) project is less familiar to most of us. Cope, a professor of music theory and composition at the University of California at Santa Cruz, has produced several albums of EMI compositions, as well as a number of books on artificial intelligence and musical creativity.
13 Dennett, “Intentional Systems,” p. 90.
14 In the 2008 film Iron Man, the sarcastic mainframe computer is called Jarvis, after Tony Stark’s Alfred Pennyworth-like comic book butler Edwin Jarvis.
15 Since its introduction in 1971, Dennett has tweaked and expanded the idea of “intentional systems”; for instance, see his 1989 book The Intentional Stance (Cambridge, MA: MIT Press).
16 The “Chinese Room” argument is regularly included in readers for introductory philosophy courses. It first appears in John Searle’s article “Minds, Brains and Programs,” Behavioral and Brain Sciences 3 (1980): 417-457.
17 Searle has since put forward a positive case for the claim that programs are not the same as minds. Roughly put, (1) computer programs are syntactic, i.e., they “merely” manipulate symbols; (2) mental content is semantic, i.e., our thoughts represent things and we know what it is that they represent; (3) mere manipulation of symbols is not sufficient for semantic meaning; and therefore (4) minds are not programs. (See his 1990 article “Is the Brain’s Mind a Computer Program?” Scientific American 262, pp. 26-31.)
18 Thanks to Jake Held and Dawn Jakubowski for talking me through earlier drafts of this essay.