Dan Burkett
I think, therefore I am. These are the words that flow through the neural pathways of IG-88 as it powers up for the very first time. The IG assassin droid is the crowning achievement of Holowan Laboratories – the very first “sentient machine.” But is such a thing even possible? Are droids capable of thought?
The mechanical occupants of the Star Wars galaxy certainly exhibit many human-like characteristics. C-3PO claims to have all manner of feelings – a great number of them bad. Like many of us, he worries incessantly about his own fate. When Princess Leia's ship is captured by Imperials, he laments that he's most assuredly “doomed” and will be “sent to the spice mines of Kessel or smashed into who-knows-what.” This constant state of anxiety makes Threepio an incredibly cautious individual. He mistrusts the safety of escape pods, hates flying, and strongly warns against angering a Wookiee during a game of holochess. But Threepio's fears might be entirely justified, because if the helpless cries of that little GNK power droid in Jabba's torture chamber are anything to go by, then it seems that droids are capable of feeling something like pain. Fortunately, droids also appear capable of enjoying the good things in life – Threepio expresses pleasure when he announces that his oil bath “is going to feel so good.” His similarities with humans don't stop there. He shows embarrassment at being “naked,” guilt when hiding from Luke after R2-D2's escape, and even the more flawed attribute of forgetfulness when he neglects to contact his companions via comlink on the Death Star.
Artoo also expresses some uniquely human features. He's stubborn – fighting over a flashlight with Yoda in the swamps of Dagobah and refusing to show Luke's message to anyone but Jabba the Hutt on Tatooine. He also engages in startling acts of heroism – risking life and (mechanical) limb to save the Queen's starship in The Phantom Menace, rescuing Padmé from the depths of the droid factory, delivering Leia's plea for help to Obi-Wan Kenobi, and attempting to open the doors to the shield generator bunker on Endor. These are only a sample of the many times that Artoo saves his friends, and his heroism seems to stem from a very real concern he shows for his companions. He worries about Anakin as he departs for Mustafar, and about Luke when he goes missing on Hoth. Furthermore, this concern is coupled with the visible relief both he and Threepio display upon seeing their master “fully functional again.”
Even the interactions that occur between droids share many of the features that are common in human relationships. Artoo and Threepio bicker incessantly. Artoo once calls Threepio a “mindless philosopher,” to which Threepio responds by calling Artoo an “overweight glob of grease” and later a “near-sighted scrap pile.” The two droids fall out while trudging across the Dune Sea on Tatooine, but then – in typical human fashion – appear genuinely relieved to see each other on the Jawa sandcrawler. Despite their differences, they also display a real interest in each other's wellbeing. Threepio quietly tells Artoo to “hang on” before he launches into the Battle of Yavin, and then quickly hides this momentary display of affection by asking, “You wouldn't want my life to get boring, would you?” After the battle, Threepio once again expresses concern for his counterpart, offering to donate any of his circuits and gears if they'll help in Artoo's recovery. On occasion, the two droids even go so far as to praise one another – Threepio uncharacteristically exclaiming, “Wonderful!” after Artoo helps them escape from Cloud City.
Despite these behaviors, droids occasionally provide us with a jarring glimpse of their true mechanical natures. Threepio initially refuses to pretend to be an Ewok god, claiming that “it just wouldn't be proper.” This is an incredibly human response – implicitly appealing to some idea of etiquette or morality. But in the very next breath, Threepio explains what he really means by this, noting that it is merely against his programming to impersonate a deity. So, even though droids certainly exhibit a great number of human-like characteristics, they are also programmed machines. The question remains: are droids capable of what we call thought?
It's worth considering why this question of droid intelligence is so important for the denizens of the Star Wars galaxy. The truth is that the treatment of droids is very different from that of humans and other sentient creatures. Mechanical beings are bought and sold, owned and abused. They are overworked, unpaid, and imprisoned with restraining bolts. When a droid knows too much – as in the case of Threepio – the solution is simply to wipe its memory, thus destroying any sense of identity and individuality it possessed. We would recoil in horror if we saw humans treated this way, but the same treatment of droids largely fails to move us. Even Padmé Amidala – a champion of justice who's disgusted to find that slavery still exists in the galaxy – shows little concern for the maltreatment of droids all around her.
Not only are droids treated as property; they're also the victims of extreme prejudice. They're banned from many establishments, with the owner of the Mos Eisley cantina gruffly declaring, “We don't serve their kind here,” despite happily catering to a menagerie of other beings. This disregard for droids can even be seen among the Jedi who – while doing all they can to respect human and alien life – think nothing of laying waste to thousands of battle droids. They dispatch the Separatist droid forces – armed, unarmed, and noncombatant alike – without the slightest hint of remorse. Indeed, their prejudice against mechanical beings seems to run deep. When Obi-Wan argues that Vader is beyond redemption, he cites as evidence the fact that his former apprentice is “more machine now than man,” as though this disqualifies him from any sort of moral consideration.
Clearly, the Jedi – along with most of the occupants of the Star Wars galaxy – believe that there is some important moral difference between the mechanical and the biological. The assumption, it seems, is that while droids may be capable of acting just like us, what's actually going on inside their heads is very different.
Threepio claims that, for a mechanic, Artoo seems to do “an excessive amount of thinking.” But are droids really capable of thought? For contemporary philosopher John Searle, thought requires understanding – something that he claims no machine, no matter how advanced, could ever be capable of.1 He builds his argument upon the assumption that all “intelligent” machines operate according to the same basic process: they take an input of symbols, run these through a program, and then give some appropriate output of symbols. Consider the search engine currently open in my Internet browser. Suppose I want to ask it what species Chewbacca is. I type out my question and hit “Enter.” The search engine now has its input – a string of symbols in the form “WHAT SPECIES IS CHEWBACCA?” The search engine then takes these symbols and manipulates them via an algorithm that makes up its search program. After a fraction of a second, it provides an output of symbols on my screen: “WOOKIEE.”
Despite my search engine's incredible ability to answer this question, most of us would be reluctant to claim that it had any understanding of what it did. Consider the very different process you would go through in answering this same question. You would no doubt picture the loveable sidekick in your mind's eye, recalling his appearance and comparing it against your knowledge of the many creatures that inhabit the Star Wars galaxy. You would make a match with the Wookiees – a brave and loyal race of beings who valiantly defended their home world against an invasion by Separatist forces. Along the way, you'd also pause to consider precisely what we mean by the concept of a species. The search engine does none of this. It doesn't understand who Chewbacca is or what Wookiees are. It merely responds to my input of symbols by providing an appropriate output of symbols. According to Searle, it's this absence of understanding that precludes something like a search engine from being capable of “thought.”
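Searle's input–program–output model can be made concrete with a small sketch. The code below is an invented toy, not how any real search engine works: the "program" is nothing more than a table pairing input symbol strings with output symbol strings, and the answer to the Chewbacca question is drawn from the example above.

```python
# A toy illustration of Searle's input-program-output model. The "program"
# is just a table mapping one string of symbols to another; the table's
# entries are hypothetical, chosen purely for illustration.
LOOKUP = {
    "WHAT SPECIES IS CHEWBACCA?": "WOOKIEE",
    "WHO IS LUKE SKYWALKER'S FATHER?": "DARTH VADER",
}

def answer(query: str) -> str:
    """Return an output of symbols for an input of symbols.

    Nothing in here models who Chewbacca is or what a Wookiee is;
    the function only matches one string against another.
    """
    return LOOKUP.get(query.upper(), "NO RESULT")

print(answer("What species is Chewbacca?"))  # -> WOOKIEE
```

However correct its outputs, a program like this plainly understands nothing about the questions it answers, which is exactly the intuition Searle's argument trades on.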
Obviously droids – particularly those as complex as Artoo and Threepio – are far more advanced than any search engine we've created. But Searle asserts that this lack of understanding holds true for all machines. In order to demonstrate this, Searle uses the famous “Chinese Room” thought-experiment.
As a variation, let's consider the “Bocce Room.” Bocce is, of course, the interplanetary trade language Threepio describes as “like a second language” to him – which is saying something, given that he's fluent in over six million forms of communication. Suppose that you're placed alone in a small room. There's a slot at each end of the room: one labeled “Input,” and the other “Output.” On the shelves around you are many books. Within those books are a set of rules containing every conceivable phrase that can be said in Bocce, along with the appropriate response to each phrase. The rules all have the following format: “If you receive input ‘X,’ then give output ‘Y.’ ” No English translations of either the questions or the answers are provided.
Suppose then, that someone fluent in Bocce – someone who understands Bocce very well – is outside the room. She has no idea how the room works, nor what's contained within. She writes a question in Bocce on a slip of paper and feeds it through the “Input” slot. In the interior of the room, you receive her query. Imagine that the slip of paper says, “Keez meeza foy wunclaz?” You look along the shelves, finding the appropriate volume for phrases that begin with the word “Keez.” After flicking through the pages, you find the following rule:
“If you receive input ‘Keez meeza foy wunclaz?,’ then give output ‘Nokeezx.’ ”
You studiously obey the rule, writing your answer on a slip of paper and feeding it through the “Output” slot. A number of further inputs and outputs occur in exactly the same way. Searle argues that this exchange models the process common to all forms of artificial intelligence – be they search engines, human-made robots, or droids. An input of symbols is given and run through a “program” – the series of rules contained within the books – and an appropriate output is provided. Assuming that the volumes are comprehensive enough, and that your ability to look up the phrases is relatively time-efficient, we can imagine the individual outside the room being completely convinced that she's conversing with someone who's fluent in Bocce.
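The Bocce Room itself can be sketched in the same way. In this minimal model, the rule books become a dictionary of "if input 'X,' then output 'Y'" entries and the person in the room becomes a lookup function; the single rule shown is the one from the story, while a real room would hold every conceivable Bocce phrase.

```python
# A minimal sketch of the Bocce Room. The rule books are reduced to one
# entry (the rule quoted in the text); the occupant is the lookup below.
RULE_BOOKS = {
    "Keez meeza foy wunclaz?": "Nokeezx",
}

def room(slip: str) -> str:
    # The occupant matches the incoming symbols against the books and
    # copies out the prescribed reply. At no point is any Bocce
    # "understood" inside the room.
    return RULE_BOOKS.get(slip, "")

print(room("Keez meeza foy wunclaz?"))  # -> Nokeezx
```

From outside, the room's replies are indistinguishable from those of a fluent speaker, even though the mechanism is pure symbol-matching.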
This, however, is far from the truth. As Qui-Gon astutely notes, “[T]he ability to speak does not make you intelligent.” While you may be able to very effectively imitate a fluent Bocce speaker, you're not actually fluent in the language. You don't understand Bocce. You have no idea, for example, that the first question asked of you is “Can I upgrade to first class?” to which you replied with a curt and emphatic “No.” The way in which you answer this question is very different from the way in which it would be answered by someone who actually understood Bocce.
The same, argues Searle, is true of all artificial intelligence. While advanced robots like droids might be capable of providing convincing imitations of human behaviors and emotions, they'll still be operating according to the same input–program–output process. They may act anxious or concerned, or behave as though they're experiencing pain or pleasure – but these are merely expressions of rules contained within their programming: appropriate outputs for particular inputs. Since droids are machines, and machines necessarily operate according to this process, they will never – according to Searle – be capable of understanding, nor of thought. This, it seems, may be the very intuition that underpins the terrible treatment of droids. But is it correct?
There's a good chance that Searle's argument stems from an inflated sense of the way our own minds work. What if we operate according to the input–program–output process just described? Would this alter our perspective on whether machines have the ability to think? Might it change the way we think droids should be treated?
The suggestion that our brains merely run programs might seem improbable, but consider the way in which we learn language. As children, we make simple associations between certain sounds and certain things that we experience out in the world. The repeated use of the word father to refer to a particular older male gives us reason to connect that term with that individual. In this way, our early use of language is very much like that of machines. We take certain inputs (“Where is my father?”), run them through a program (our recollections of the person with whom the term father is usually associated), and provide an appropriate output (gesturing toward the specific individual with whom we associate the word father).2
Our deeper understanding of precisely what the term father means – that it is a relational concept that can be either genetic or social – doesn't come until much later. It's only with this understanding that we can fully grasp the implications of a statement like “Darth Vader is Luke Skywalker's father.” But, somehow, this understanding is built upon a system of language that begins with the simple input–program–output process. Given this, it may very well turn out that “understanding” is simply an intricate arrangement of many of these three-step processes working in tandem. If this is the case, then there's no reason to think that sufficiently complex machines (like droids) couldn't be capable of developing understanding in the very same way.
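The idea that understanding might be built from many simple three-step processes working in tandem can be sketched as well. Both tables below are invented for illustration: the first associates a word with an individual, the second associates that individual with a behavior. Neither step alone "understands" the word father, yet their composition produces the child's appropriate response.

```python
# A hedged sketch of understanding as chained input-program-output steps.
# Each table is itself a tiny "program"; the tables' contents are
# hypothetical examples, not a claim about how children actually learn.
WORD_TO_PERSON = {"father": "Anakin"}
PERSON_TO_RESPONSE = {"Anakin": "points toward Anakin"}

def respond(word: str) -> str:
    person = WORD_TO_PERSON[word]      # process 1: symbol in, symbol out
    return PERSON_TO_RESPONSE[person]  # process 2: symbol in, symbol out

print(respond("father"))  # -> points toward Anakin
```

On this picture, stacking enough such processes, and letting them feed one another, is at least a candidate account of what "understanding" amounts to.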
There's another problem with Searle's argument that's worth noting. It's very clear to us that the person inside the room doesn't understand Bocce. We need only try to talk to him in Bocce when he's outside the room to show this. But it's not the person inside the room who represents the machine in this thought-experiment; it is the room as a whole. The question is not, then, whether the individual understands Bocce, but rather whether the entire system understands Bocce. The difference is subtle, but important. The answer to the first question is clearly “no.” The answer to the second question remains a little more uncertain.
There may, therefore, be space for understanding in the Bocce Room, and thus the possibility of droids possessing understanding and thought. But there's one last concern that's difficult to shake – the nagging intuition that there's still a fundamental hurdle that will forever disqualify droids from possessing minds quite like ours: namely, the fact that they're made of different material than living creatures.
It may seem a trivial point, but it does a surprising amount of work in dictating the way in which we interact with the world around us. It explains, for example, why we think it's entirely acceptable to use physical force on an uncooperative printer, but would never think of treating a misbehaving pet in the same way. It's why we barely bat an eyelid when we see dozens of battle droids cut down with a lightsaber, yet cringe in sympathy as we watch Luke lose his hand. We are biological beings, and we identify most easily with things made of the same “stuff” as us. But the idea that only biological beings should be capable of thought has little basis. Philosophers often like to illustrate the problem with this position by considering what might happen if an advanced silicon-based alien race were ever to visit Earth.3 It's possible that they would compare their robust, intricate mechanical minds to the delicate, fleshy lumps in our own craniums and quickly come to the very same conclusions that we tend to make about mechanical life forms. “They're so primitive,” they would say, “understanding can't possibly occur inside that mush.”
Ultimately, our favoritism toward the biological seems ill-founded. Consider another thought-experiment. Within the Star Wars galaxy, it's common for limbs and body parts to be replaced by incredibly advanced mechanical substitutes. But imagine if, in our own world, we took this one step further – developing a mechanical device that was capable of perfectly replicating the entire function of a human brain. We'd no doubt be hard-pressed to convince people to swap their natural brain outright for a fully mechanical upgrade – even if we could guarantee a flawless transfer of all of a person's character traits and memories. But suppose this transition took place little by little. Suppose that you began by merely swapping out a tiny part that represented only around 1 percent of your brain. You'd be completely unaware of this change – and your mind would continue to function as it always had. You would be just as capable of thought and understanding as always.
What if you were to swap out a little more? Perhaps another 5 percent? It seems that the difference would still be negligible – there are, after all, quite a few individuals who have undergone a brain hemispherectomy and continued to live very normal lives with only half a brain. In light of this, it seems that we should be able to replace at least 50 percent of our own brain while still retaining our capability for thought. But is this the limit? Do we stop thinking and understanding as soon as we become 51 percent mechanical? This doesn't seem plausible. It's unclear how that extra 1 percent of mechanical brain could cause such a significant change in our mental processes. Even if it could, there's no principled reason to think that this should occur at 51 percent any more than at 75 or 99 percent.
In fact, it seems we could continue to swap out our brain piece by piece until all that lay within our cranium was mechanical. Furthermore, we could do this without losing any of our capability for thought or understanding. But if this is possible, then the distinction between the biological and the mechanical is irrelevant. Thought, it would seem, relies on far more than just physical operation.
While it's hard to prove conclusively that droids are capable of thought, there's enough doubt to make us seriously consider the way in which they're treated. The biological–mechanical distinction isn't as relevant as we might first assume, and the fact that droids run on programs doesn't necessarily prohibit them from possessing understanding. There's a real possibility that the human-like behaviors they exhibit – fear and pain, hope and pleasure – are indeed genuine. If this is the case, they should not be treated as second-class citizens. They should be afforded the same rights and dignities so easily granted to humans and other sentient creatures. Suffering should not be their lot in life.4