10
Making IBM’s Computer Watson Human

The purpose of this chapter is to use the teleological behavioral concept of mind developed in previous chapters to deal with the question: What makes us human? The victory of the IBM computer Watson in the TV game show Jeopardy serves as a springboard to speculate on the abilities that a machine would need to be truly human. The chapter’s premise is that to be human is to behave as humans behave and to function in society as humans function. Alternatives to this premise are considered and rejected. From the viewpoint of teleological behaviorism, essential human attributes such as consciousness, the ability to love, to feel pain, to sense, to perceive, and to imagine may all be possessed by a computer (although Watson does not currently have them). Most crucially, a computer may possess self-control and may act altruistically. However, the computer’s appearance, its ability to make specific movements, its possession of particular internal structures, and the presence of any non-material “self” are all incidental to its humanity.

    The chapter is followed by two perceptive commentaries—by J. J. McDowell and Henry D. Schlinger, Jr.—and the author’s reply.

Recently an IBM computer named Watson defeated two champions of the TV quiz show Jeopardy. Excelling at Jeopardy requires understanding natural language questions posed in the form of “answers,” for example: “He was the last president of the Soviet Union.” The correct “question” is “Who is Mikhail Gorbachev?” Watson consists of numerous high-powered computers operating in parallel, searching over a vast self-contained database (no web connection). The computers fill a room. The only stimuli that affect Watson are the words spoken into its microphone, typed into its keyboard, or otherwise fed into it; the only action Watson is capable of is to speak or print out its verbal response—information in and information out.

    According to the IBM website, Watson’s amazing performance in the quiz game requires “natural language processing, machine learning, knowledge representation and reasoning, and deep analytics.” Still, it is clear, Watson is not human. In considering what might make Watson human, I hope to throw some light on the question: What makes us human? What are the minimal requirements of humanity?

    What would have to be done to make Watson human? I will call Watson, so modified, Watson II. Many people believe that nothing whatsoever could make a machine such as Watson human. Some feel that it is offensive to humanity to even imagine such a thing. But I believe it is not only imaginable but possible with the technology we have now. Because in English we recognize the humanity of a person by referring to that person as “he” or “she” rather than “it” (and because the real Watson, IBM’s founder, was a man), I will refer to Watson as “it” and Watson II as “he.” But this is not to imply that making Watson human would be a good thing or a profitable thing for IBM to do. As things stand, Watson has no interests that are different from IBM’s interests. Since its non-humanity allows IBM to own it as a piece of property, Watson may be exploited with a clear conscience. But it is not pointless to speculate about what would make Watson human. Doing so gives us a place to stand as we contemplate what exactly makes us human, a prolegomenon for any discussion of moral questions.

Can a Computer Ever Be Conscious?

To put this question into perspective, let me repeat an anecdote from an earlier work (Rachlin, 1994, pp. 16–17). Once, after a talk I gave, a prominent philosopher in the audience asked me to suppose I were single and one day met the woman of my dreams (Dolly)—beautiful, brilliant, witty, and totally infatuated with me. We go on a date and I have the greatest time of my life. But then, the next morning, she reveals to me that she is not a human but a robot—silicon rather than flesh and blood. Would I be disappointed, the philosopher wanted to know. I admitted that I would be disappointed. Dolly was just going through the motions, her confession would have implied. She did not really have any feelings. Had I been quicker on my feet, however, I would have answered him something like this: “Imagine another robot, Dolly II, an improved model. This robot, as beautiful, as witty, as sexually satisfying as Dolly I, doesn’t reveal to me, the next day, that she’s a machine; she keeps it a secret. We go out together for a month and then we get married. We have two lovely children (half-doll, half-human) and live a perfectly happy life together. Dolly II never slips up. She never acts in any way but as a loving human being, aging to all external appearances as real human beings do—but gracefully—retaining her beauty, her wit, her charm.” At this point she reveals to me that she is a robot. “Would the knowledge that the chemistry of her insides was inorganic rather than organic make any difference to me or her loving children or her friends? I don’t think so . . . . The story of Dolly II reveals that the thing that wasn’t there in Dolly I—the soul, the true love—consists of more behavior.”

    Some philosophers would claim that Dolly II cannot be a real human being. According to them she would be essentially a zombie. Lacking human essence, she would have “...the intelligence of a toaster” (Block, 1981, p. 21); presumably we could treat her with no more consideration for her feelings than we would give a toaster. Since we cannot perceive the inner states of other people, such an attitude poses a clear moral danger. If the humanity of others were an essentially non-physical property within them, there would be no way of knowing for sure that others possess such a property; it might then be convenient for us to suppose that, despite their “mere” behavior, they do not. But, although the teleological behaviorism I espouse can be defended on moral grounds (Baum, 2005; and see Chapter 11 in this book), this chapter relies on functional arguments. A behavioral conception of humanity is better than a spiritual or neurocognitive conception not because it is more moral but, as I shall try to show, because it is potentially more useful than they are.

    [I do not claim that behaviorism is free of moral issues. Many people, such as infants or the mentally disabled, lack one or another typical human behavioral characteristic. A behavioral morality needs to accommodate the humanity of these people—perhaps in terms of their former or expected behavior or their role in human social networks, but never in terms of internal actions or states.]

    For a behaviorist, consciousness, like perception, attention, memory, and other mental acts, is itself not an internal event at all. It is a word we use to refer to the organization of long-term behavioral patterns as they are occurring. Consider an orchestra playing Beethoven’s Fifth Symphony. At any given moment a violinist and an oboist may both be sitting quite still with their instruments at their sides. Yet they are both in the midst of playing the symphony. Since the violinist and oboist are playing different parts we can say that, despite their identical current actions (or non-actions), their mental states at that moment are different. The teleological behaviorist does not deny that the two musicians have mental states or that these states differ between them. Moreover, there must be differing internal mechanisms underlying these states. But the mental states themselves are the musicians’ current (identical) actions in their (differing) patterns. A behaviorist therefore need not deny the existence of mental states; for a teleological behaviorist such as me, they are the main object of study (Rachlin, 1994). Mental states such as perception, memory, and attention, and the conscious versions of these states, are not internal events; rather, they are themselves temporally extended patterns of behavior. Thus, for a teleological behaviorist, a computer, if it behaves like a conscious person, would be conscious. If Watson could be redesigned to interact with the world in all essential respects as humans interact with the world, then Watson would be human (he would be Watson II) and would be capable of behaving consciously.

Are Our Minds Inside Us?

If, like Johannes Müller’s students in 1845 (see Chapter 5) as well as most modern scientists and philosophers, you believe that no forces other than common physical and chemical ones are active within the organism, and you also believe that our minds must be inside us, you will be led to identify consciousness with activity in the brain. Thus, current materialist studies of consciousness are studies of the operation of the brain (e.g., Ramachandran, 2011). Neurocognitive theories may focus on the contribution to consciousness of specific brain areas or groups of neurons. Or they may be more broadly based, attributing conscious thought to the integration of stimulation over large areas of the brain. An example of the latter is the work of Gerald Edelman and colleagues (Tononi & Edelman, 1998) in which consciousness, defined in behavioral terms, is found to correlate with the occurrence of reciprocal action between distributed activity in the cortex and thalamus. Edelman’s conception of consciousness, like that of teleological behaviorism, is both behavioral and molar, but it is molar within the nervous system. This research is interesting and valuable. As it progresses, we will come closer and closer to identifying the internal mechanism underlying human consciousness. Someday it may be shown that Edelman’s mechanism is sufficient to generate conscious behavior. But if it were possible to generate the same behavior with a different mechanism, such behavior would be no less conscious than is our behavior now. Why? Because consciousness is in the behavior, not the mechanism.

    Chapter 6 discussed a modern movement in the philosophy of mind called enacted mind, or extended cognition. This movement bears some resemblance to behaviorism. According to the philosopher Alva Noë (2009), for example, the mind is not the brain or part of the brain and cannot be understood except in terms of the interaction of a whole organism with the external environment. Nevertheless, for these philosophers, the brain remains an important component of consciousness. They retain an essentially neurocognitive view of the mind while expanding its reach spatially, beyond the brain, into the peripheral nervous system and the external environment.

    For a behaviorist, it is not self-evident that our minds are inside us at all. For a teleological behaviorist, all mental states (including sensations, perceptions, beliefs, knowledge, even pain) are rather patterns of overt behavior (Rachlin, 1985, 2000). From a teleological behavioral viewpoint, consciousness is not the organization of neural complexity, in which neural activity is distributed widely over space in the brain, but the organization of behavioral complexity in which overt behavior is distributed widely over time. To study the former is to study the mechanism underlying consciousness, as Edelman and his colleagues are doing, but to study the latter is to study consciousness itself.

    Widespread organization is characteristic of much human behavior. As such, it must have evolved either by biological evolution over generations or by behavioral evolution within the person’s lifetime. That is, it must be beneficial for us in some way. The first question the behaviorist asks is therefore: Why are we conscious? Non-humans, like humans, may increase reward by patterning their behavior over wider temporal extents (Rachlin, 1995) and may learn to favor one pattern over another when it leads to better consequences (Grunow & Neuringer, 2002). The difference between humans and non-humans is in the temporal extent of the learned patterns. The CEO of a company, for instance, is not rewarded for anything she does over an hour or a day or a week but is rewarded for patterns in her behavior extended over months and years. Despite this, it has been claimed that corporate executives’ rewards are still too narrowly based. Family-owned businesses (for all their unfairness) measure success over generations and thus may be less likely than corporations to adopt policies that sacrifice long-term for short-term gains.

Watson’s Function in Human Society

Watson already has a function in human society; it has provided entertainment for hundreds of thousands of people. But Watson would quickly lose entertainment value if it just continued to play Jeopardy and kept winning. There is talk of adapting Watson to remember the latest medical advances, to process information on the health of specific individuals, and to answer medical questions. Other functions in business, law, and engineering are conceivable. In return, IBM, Watson’s creator, provides it with electrical power, repairs, maintenance, continuous attention, and modification so as to better serve these functions. Moreover, there is a positive relation between Watson’s work and Watson’s payoff. The more useful Watson is to IBM, the more IBM will invest in Watson’s future and in future Watsons. I do not know what the arrangement was between Jeopardy’s producers and IBM, but if some portion of the winnings went to Watson’s own maintenance, this would be a step in the right direction. And the direction matters: to make Watson human, we need to work on Watson’s function in society. Only after we have determined Watson II’s minimal functional requirements should we ask how those requirements will be satisfied.

    Watson would have to have social functions over and above entertainment if we are to treat it as human. Let us assume that Watson is given such functions. [IBM is doing just that. It is currently developing and marketing versions of Watson for law, medicine, and industry (Moyer, 2011).] Still, they are far from sufficient to convince us to treat Watson as human.

Watson’s Memory and Logic

A human quality currently lacking in Watson’s logic is the ability to take its own weaknesses into account in making decisions. Here is a human example of this ability: You are driving and come to a crossroads with a traffic light that has just turned red. You are in a moderate hurry, but this is no emergency. You have a clear view in all four directions. There is no other car in sight and no policeman in sight. The odds of having an accident or getting a ticket are virtually zero. Should you drive through the red light? Some of us would drive through the light, but many of us would stop anyway and wait until it turned green. Why? Let us eliminate some obvious answers. Assume that the costs of looking for policemen and other cars are minimal. Assume that you are not religious, so you do not believe that God is watching you. Assume that you are not a rigid adherent to all laws regardless of their justifiability. Then why stop? One reason to stop is that you have learned over the years that your perception and judgment in these sorts of situations are faulty. You realize that, especially when you are in a hurry, both your perception and your reasoning tend to be biased in favor of quickly getting where you’re going. To combat this tendency, you develop the personal rule: stop at all red lights (unless the emergency is dire or unless the light stays red so long that it is clearly broken). This rule, as you have also learned, serves you well over the long run.

    Here is another case. You are a recovering alcoholic. You have not taken a drink for a full year. You are at a party and are trying to impress someone. You know that having one or two drinks will cause you no harm and will significantly improve your chances of impressing that person. You know also that you are capable of stopping after two drinks; after all, you haven’t taken a drink in a year. Why not have the drink? No reason, Watson would say—unless the programmers had arbitrarily inserted the rule: never drink at parties. (But that would make Watson still more machine-like.) To be human, Watson would have to itself establish the rule, and override its own logical mechanism, because its own best current calculations have turned out to be faulty in certain situations. Watson does not have this kind of override. It does not need it currently. But it would need an override if it had to balance immediate needs with longer-term needs, and those with still longer-term needs, and so forth. As it stands, Watson can learn from experience, but its learning is time-independent. As Chapter 7 shows, addiction may be seen as a breakdown of temporally extended behavioral patterns into temporally narrower patterns. [Thus addiction is not a brain disease but is directly accessible to observation and control (see also Rachlin, 2000; Satel & Lilienfeld, 2010).]

    Watson, I assume, obeys the economic maxim: ignore sunk costs. It can calculate the benefits of a course of action based on estimated returns from that action from now to infinity. But it will not do something just because that is what it has done before. As long as its perception and logic capacity remain constant, Watson will not now simply follow some preconceived course of action. But, if its perception and logic will predictably be weaker at some future time, Watson will be better off deciding on a plan and just sticking to it than evaluating every alternative strictly on a best estimate of that alternative’s own merits. Hal, the computer of 2001: A Space Odyssey, might have known that its judgment would be impaired if it had to admit that it had made a wrong prediction; Hal should have trusted its human operators to know better than it did at such a time. But Hal was machine-like in its reliance on its own logic.
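
    The logic of this point can be made concrete with a toy simulation. The sketch below (in Python, with entirely invented numbers drawn from the red-light example above) compares a decider who evaluates every case on its apparent merits, using in-the-moment risk estimates that are predictably biased toward quickly getting where it is going, with a decider who simply follows the fixed rule “always stop.” Under these assumptions the rigid rule wins on average, which is all the argument requires; nothing here is meant as a model of Watson itself.

```python
# A toy illustration (invented numbers): case-by-case choice with predictably
# biased in-the-moment estimates versus the fixed rule "always stop."
import random

TRUE_RISK = 0.01        # actual chance of an accident or ticket when running the light
ACCIDENT_COST = 200.0   # cost of an accident or ticket (arbitrary units)
WAIT_COST = 1.0         # cost of waiting for the light to turn green
HURRY_BIAS = 0.25       # in a hurry, perceived risk shrinks to a quarter of its true value

def case_by_case(n=100_000, seed=0):
    """Each time, run the light if the *perceived* expected cost of running is lower."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        perceived_expected_cost = (TRUE_RISK * HURRY_BIAS) * ACCIDENT_COST
        if perceived_expected_cost < WAIT_COST:   # running looks cheaper...
            if rng.random() < TRUE_RISK:          # ...but the true risk still applies
                total += ACCIDENT_COST
        else:
            total += WAIT_COST
    return total / n

def fixed_rule_cost():
    """Always stop, however good running the light looks at the moment."""
    return WAIT_COST

print("average cost per intersection, case-by-case:", round(case_by_case(), 2))
print("average cost per intersection, fixed rule:  ", fixed_rule_cost())
```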

    Paying attention to sunk costs often gets people into trouble. It is, after all, called a fallacy by economists. Because they had invested so much money in its development, the British and French governments stuck with the Concorde supersonic airplane long after it had become unprofitable. People often hold stocks long after they should have sold them or continue to invest in personal relationships long after they have become painful. Their reasons for doing so may be labeled “sentimental.” But we would not have such sentimental tendencies if they were not occasionally useful, however disastrous they may turn out in any particular case. Our tendency to stick to a particular behavioral pattern, no matter what, can get out of hand, as when we develop compulsions. But, like many other psychological malfunctions, compulsiveness is based on generally useful behavioral tendencies.

    A satisfactory answer to a why question about a given act may be phrased in terms of the larger pattern into which the act fits. Why is he building a floor? Because he is building a house. For Watson to be human, we must be able to assign reasons (i.e., functional explanations) for what he does: Q. Why did Watson bet $1,000 on a Jeopardy daily double? A. Because that maximizes the expected value of the outcome on that question. Q. Why maximize the expected value of the outcome? A. Because that improves the chance of winning the game. This is as far as Watson can go by itself. The wider reasons may be assigned solely to its human handlers. But Watson II, the human version of Watson, would have to have the wider reasons in himself. Q. Why improve the chance of winning the game? A. Because that will please IBM. Q. Why please IBM? A. Because then IBM will maintain and develop Watson II and supply him with power. Thus, Watson II may have longer-term goals and shorter-term sub-goals. This is all he needs to have what philosophers call “intentionality.” Philosophers might object that Watson does not know “what it is like” to have these goals, whereas humans do.

    Do we know what it is like to be our brothers, sisters, mothers, fathers, any better than we know what it is like to be a bat? (Nagel, 1974). Not if “what it is like” is thought to be some ineffable physical or non-physical state of our nervous systems, hidden forever from the observations of others. The correct answer to “What is it like to be a bat?” is “to behave, over an extended time period, and in a social context, as a bat behaves.” The correct answer to “What is it like to be a human being?” is “to behave, over an extended time period, and in a social context, as a human being behaves.”

Will Watson II Perceive?

Watson can detect minuscule variations in its input and, up to a point, understand their meaning in terms of English sentences and their relationship to other English sentences. Moreover, the sentences it detects are directly related to what is currently its primary need—its reason for being—coming up quickly with the right answer (“question”) to the Jeopardy question (“answer”) and learning from its mistakes to further refine its discrimination. Watson’s perception of its electronic input, however efficient and refined, is also very constrained. That is because of the narrowness of its primary need (to answer questions).

    But Watson currently has other needs: a steady supply of electric power with elaborate surge protection, periodic maintenance, a specific temperature range, protection from the elements, and protection from damage or theft of its hardware and software. I assume that currently there exist sensors, internal and external, that monitor the state of the systems supplying these needs. Some of the sensors are probably external to the machine itself, but let us imagine that all are located on Watson II’s surface.

    Watson currently has no way to satisfy its needs by its own behavior, but it can be given such powers. Watson II will be able to monitor and analyze his own power supply. He will have distance sensors to monitor the condition of his surroundings (his external environment) and to detect and analyze movement in the environment, as well as his own speech, in terms of benefits and threats to his primary and secondary functions. He will be able to organize his speech into patterns—those that lead to better functioning in the future and those that lead to worse. He will be able to act in such a way that benefits are maximized and threats are minimized by his actions. That is, he will be able to discriminate among (i.e., behave differently in the presence of) different complex situations that may be extended in time. He will be able to discriminate one from another of his handlers. He will be able to discriminate between a person who is happy and a person who is sad. He will be able to discriminate between a person who is just acting happy and one who truly is happy, between a person with hostile intentions toward him and a person with good intentions.

    As indicated in Chapter 2, teleological behaviorism enables an observer to make such distinctions; they depend on narrow versus wide perspectives. Just as the difference between a person who is actually hearing a quartet and a deaf person behaving in the same way for a limited period depends on patterns of behavior beyond that period, so the difference between a person having a real perception, a real emotion, or a real thought and a great actor pretending to have the perception, emotion, or thought depends on patterns of behavior extending beyond the actor’s current performance—before the curtain rises and after it goes down. I do not believe that a system that can learn to make such discriminations is beyond current technology. The most sophisticated, modern poker-playing computers currently bluff and guess at bluffs depending on the characteristics of specific opponents over extended periods.

    These very subtle discriminations would extend to Watson II’s social environment. Watson II will have the power to lie and to deceive people. Like other humans, he will have to balance the immediate advantages of a lie against the long-term advantage of having a reputation for truth-telling and against the danger of damaging another person’s interests that might overlap with his own. These advantages and dangers may be so difficult to weigh that he may develop the rule: don’t lie except in obvious emergencies. Like the rule to stop at red lights except in obvious emergencies, this might serve well to avoid difficult and complex calculations and may free up resources for other purposes.

    With such powers, Watson II’s perception will function for him as ours does for us. It will help him in his social interactions as well as his interactions with physical objects that make a difference in satisfying his needs. What counts for Watson II’s humanity is not what he perceives, not even how he perceives, but why he perceives; his perception must function in the same way as ours does.

Will Watson II Be Able to Imagine?

Watson seems to have a primitive sort of imagination. Like any computer, it has internal representations of its input. But a picture in your head, or a coded representation of a picture in your head, while it may be part of an imagination mechanism, is not imagination itself. Imagination itself is behavior—acting in the absence of some state of affairs as you would in its presence. Such behavior has an important function in human life—to make perception possible. Pictures in our heads do not themselves have this function.

    Recall the illustration of imagination from Chapter 2: Two people, a woman and a man, are asked to imagine a lion in the room. The woman closes her eyes, nods, and says, “Yes, I see it. It has a tail and a mane. It is walking through the jungle.” The man runs screaming from the room. The man is imagining an actual lion. The woman is imagining not a lion but a picture of a lion; in the absence of the picture, she is doing what she would do in its presence. An actor on a stage is thus a true imaginer, and good acting is good imagination—not because of any picture in the actor’s head, but because he is behaving in the absence of a set of conditions as he would if they were actually present.

    What would Watson need to imagine in this sense? Watson already has this ability to a degree. As a question is fed in, Watson does not wait for it to be completed. Watson is already guessing at possible completions and looking up answers. Similarly, we step confidently onto an unseen but imagined floor when we walk into a room. The outfielder runs to where the ball will be on the basis of the sound of the bat and a fraction of a second of its initial flight. We confide in a friend and do not confide in strangers on the basis of their voices on the telephone. On the basis of small cues, we assume that perfect strangers will either cooperate with us in mutual tasks or behave strictly in their own interests. Of course, we are often wrong. But we learn by experience to refine these perceptions. This tendency, so necessary for everyday human life, to discriminate on the basis of partial information and past experience, and to refine such discriminations based on their outcomes vis-à-vis our needs, will be possessed by Watson II.

Will Watson II Feel Pain?

In a classic article, Dennett (1978) took up the question of whether a computer could ever feel pain. Dennett designed a pain program that duplicated in all relevant respects what was then known about the human neural pain-processing system and imagined it located inside a robot capable of primitive verbal and non-verbal behavior. But in the end, he admitted that most people would not regard the robot as actually in pain. I agree. To see what the problem is, let us consider the “Turing test,” invented by the mathematician Alan Turing (1912–1954) as a means for determining the degree to which a machine can duplicate human behavior.

The Problem with the Turing Test

Imagine the machine in question and a real human side by side behind a screen. For each there is an input device (say, a computer keyboard) and an output device (say, a computer screen) by which observers may ask questions and receive answers, make comments and receive comments, and generally communicate. The observer does not know in advance which inputs and outputs are going to and coming from the human and which are going to and coming from the machine. If, after varying the questions over a wide range so that in the opinion of the observer only a real human can meaningfully answer them, the observer still cannot reliably tell which is the computer and which is the human, then, within the constraints imposed by the range and variety of the questions, the machine is human—regardless of the mechanism by which the computer does its job.

    What counts is the machine’s behavior, not the mechanism that produced the behavior. Nevertheless, there is a serious problem with the Turing test—it ignores the function of the supposedly human behavior in human society. Let us agree that Dennett’s computer would pass the Turing test for a person in pain. Whatever questions or comments were typed on the computer’s keyboard, the computer’s answers would be no less human than those of the real person in real pain at the time. Yet the machine is clearly not really in pain, while the person is.

    For Dennett, the lesson of the Turing test for pain is that certain mental states, such as pain, pleasure, and sensation, which philosophers call “raw feels,” are truly private and are available only to introspection. That, Dennett believes, is why the Turing test fails with them—not, as I believe, because it cannot capture their functions—most importantly, their social functions. Dennett believes that other mental states, called “propositional attitudes,” such as knowledge, thought, reasoning, and memory, unlike raw feels, may indeed be detectible in a machine by means of the Turing test. But, like raw feels, propositional attitudes have social functions. Thus, the Turing test is not adequate to detect either kind of mental state in a machine. Watson may pass the Turing test for logical thinking with flying colors but, unless the actual function of its logic is expanded in ways discussed here, Watson’s thought will differ essentially from human thought.

    The Turing test is a behavioral test. But as it is typically presented, it is much too limited. Suppose we expanded the test by removing the screen and allowing the observer to interact with the mechanisms in meaningful ways over long periods of time—say, in games that involve trust and cooperation. If the computer passed that test, we would be approaching the example of Dolly II. Such a Turing test would indeed be valid. Let us call it the tough Turing test.

Passing the Tough Turing Test for Pain

In an article on pain (Rachlin, 1985), I claimed that a wagon with a squeaky wheel is more like a machine in pain than Dennett’s computer would be (although, of course, the wagon is not really in pain either). What makes the wagon’s squeak analogous to pain? The answer lies in how we interact with wagons, as opposed to how we interact with computers. The wagon clearly needs something (oil) to continue to perform its function for our benefit. It currently lacks that something and, if it does not get it soon, may suffer permanent damage, may eventually, irreparably, lose its function altogether. The wagon expresses that need in terms of a loud and annoying sound that will stop when the need is satisfied. “You help me and I’ll help you,” it seems to be saying. “I’ll help you in two ways,” the wagon says. “In the short term, I’ll stop this annoying sound; in the longer term, I’ll be better able to help you carry stuff around.”

    To genuinely feel pain, Watson must interact with humans in the way a person in pain does. For this, Watson would need a system of lights and sounds (speech-like, if not speech itself), the intensity of which varied with the degree of damage and the quality of which indicated the nature of the damage. To interact with humans in a human way, Watson would need the ability to recognize individual people. Currently, Watson has many systems operating in parallel. If it became a general-purpose advisor for medical or legal or engineering problems, it might eventually need a system for allocating resources, for acknowledging a debt to individual handlers who helped it, a way of paying that debt (say, by devoting more of its resources to those individuals, or to someone they delegate) and, correspondingly, a way of punishing someone who harmed it. Additionally, Watson II would be proactive, helping individuals on speculation, so to speak, in hope of future payback. Furthermore, Watson II would be programmed to respond to the pain of others with advice from his store of diagnostic and treatment information and with the ability to summon further help—to dial 911 and call an ambulance, for example. In other words, to really feel pain, Watson would need to interact in an interpersonal economy, giving and receiving help, learning whom to trust and whom not to trust, responding to overall rates of events as well as to individual events. In behaving probabilistically, Watson II will often be “wrong”—too trusting or too suspicious. But his learning capacity would bring such incidents down to a minimal level.

    Watson II will perceive his environment in the sense discussed previously—learning to identify threats to his well-being and refining that identification over time. He will respond to such threats by signaling his trusted handlers to help remove them, even when the actual damage is far in the future or only vaguely anticipated. Again, the signals could be particular arrangements of lights and sounds; in terms of human language, they would be the vocabulary of fear and anxiety. In situations of great danger, the lights and sounds would be bright, loud, continuous, annoying, and directed at those most able to help. The first reinforcement for help delivered would be the ceasing of these annoyances (technically, negative reinforcement). But, just as the wagon with the squeaky wheel functions better after oiling, the primary way that Watson II will reinforce the responses of his social circle will be his better functioning.
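
    For readers who prefer a concrete sketch, the signaling scheme just described could look something like the following. The class name, thresholds, and numbers are invented for illustration; the point is only that signal intensity tracks the degree of damage, its quality names the kind of damage, and the signal ceases when help is delivered (negative reinforcement of the helper’s response).

```python
# A minimal sketch (invented names and thresholds) of the signalling scheme
# described above: signal intensity tracks current damage, its quality names
# the kind of damage, and the signal stops when help removes the damage.

class PainSignaller:
    def __init__(self, emergency_threshold=0.7):
        self.damage = 0.0                  # 0.0 = intact, 1.0 = destroyed
        self.kind = None                   # what is damaged (the signal's "quality")
        self.emergency_threshold = emergency_threshold

    def injure(self, severity, kind):
        self.damage = min(1.0, self.damage + severity)
        self.kind = kind

    def signal(self):
        """Lights and sounds whose intensity varies with the degree of damage."""
        if self.damage == 0.0:
            return None                    # nothing to report; the annoyance ceases
        level = "EMERGENCY" if self.damage >= self.emergency_threshold else "warning"
        return f"{level}: {self.kind} damage, intensity {self.damage:.2f}"

    def receive_help(self, amount):
        """Help reduces damage; the weakening or ceasing of the signal is the
        immediate (negative) reinforcement for the helper's response."""
        self.damage = max(0.0, self.damage - amount)
        return self.signal()

watson = PainSignaller()
watson.injure(0.8, kind="power-supply")
print(watson.signal())            # loud and continuous: directed at those able to help
print(watson.receive_help(0.8))   # None: the signal stops once help is delivered
```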

    It is important to note that the question: Is Watson really in pain? cannot be separated from the question: Is Watson human? A machine that had the human capacity to feel pain, and only that capacity—that was not human (or animal) in any other way—could not really be in pain. A Watson that was not a human being (or another animal) could not really be in pain. And this is the case for any other individual human trait. Watson could not remember, perceive, see, think, know, or believe in isolation. These are human qualities by definition. Therefore, although we consider them one by one, humanity is a matter of all (or most) or none. Watson needs to feel pain to be human but also needs to be human before it can feel pain. But, a Watson that is human (that is, Watson II) in other respects and exhibits pain behavior, as specified above, would really be in pain.

What Do We Talk about When We Talk about Pain?

Once we agree that the word “pain” is not a label for a non-physical entity within us, we can focus on its semantics. Is it efficient to just transfer our conception of “pain” (its meaning) from a private undefined spiritual experience that no one else can observe to a private and equally undefined physical stimulus or response that no one else can observe? Such a shift simply assumes the privacy of pain (we all know it; it is intuitively obvious) and diverts attention from an important question: What is gained by insisting on the privacy of pain? Let us consider that question.

    In its primitive state, pain behavior is the unconditional response to injury. Sometimes an injury is as clear to an observer as is the behavior itself; but sometimes the injury may be internal, such as a tooth cavity, or may be abstract in nature—such as being rejected by a loved one. In these cases the help or comfort we offer to the victim cannot be contingent on a detailed checking-out of the facts; there is no time; there may be no way to check. Our help, to be effective, needs to be immediate. So, instead of checking out the facts, we give the person in pain, and so should we give Watson II, the benefit of the doubt. Like the fireman responding to an alarm, we just assume that our help is needed. This policy, like the fireman’s policy, will result in some false alarms. Watson II’s false alarms might teach others to ignore his signals. But, like any of us, Watson II would have to learn to ration his pain behavior, to reserve it for real emergencies. Watson II’s balance point, like our balance points, will depend on the mores of his society. That society might (like the fire department) find it to be efficient to respond to any pain signal, regardless of the frequency of false alarms; or that society, like a platoon leader in the midst of battle, might generally ignore less than extreme pain signals, ones without obvious causative damage.

    Our social agreement that pain is an internal and essentially private event thus creates the risk that other people’s responses to our pain will, absent any injury, reinforce that pain. This is a risk that most societies are willing to take for the benefit of quickness of response. So, instead of laying out a complex set of conditions for responding to pain behavior, we imagine that pain is essentially private, that only the person with the injury can observe the pain. We take pain on faith. This is a useful shortcut for everyday-life use of the language of pain (as it is for much of our mentalistic language), but it is harmful for the psychological study of pain and is irrelevant to the question of whether Watson II can possibly be in pain. Once Watson II exhibits pain behavior and other human behavior in all of its essential respects, we must acknowledge that he really is in pain. It may become convenient to suppose that his pain is a private internal event. But this would be a useful convenience of everyday linguistic communication, like the convenience of saying that the sun rises and sets, not a statement of fact.

    Because Watson II’s environment would not be absolutely constant, he would have to learn to vary his behavior to adjust to environmental change. Biological evolution is the process by which we organisms adjust over generations, in structure and innate behavioral patterns, as the environment changes. Behavioral evolution is the process by which we learn new patterns and adjust them to environmental change within our lifetimes. In other words, our behavior evolves over our lifetimes, just as our physical structure evolves over generations.

    I say behavioral evolution rather than learning to emphasize the relation of behavioral change to biological evolution. Just as organisms evolve in their communities and communities evolve in the wider environment, so patterns of behavior evolve in an individual’s lifetime (see Chapter 7). Evolution occurs on many levels. In biological evolution, replication and variation occur mostly on a genetic level, while selection acts on individual organisms. In behavioral evolution, replication and variation occur on a biological level, while selection occurs on a behavioral level; we are born with or innately develop behavioral patterns. But those patterns are further shaped by the environment over an organism’s lifetime and often attain high degrees of complexity. This is nothing more than a repetition of what every behavior analyst knows. But it is worth emphasizing that operant conditioning is itself an evolutionary process (see Staddon & Simmelhag, 1971, for a detailed empirical study and argument of this point). We hunt through our environment for signals that our behavioral adaptations are working. Similarly, Watson II will adjust his output in accordance with complex and temporally extended environmental demands. Watson II will signal that things are going well in this respect with lights and sounds that are pleasurable to us. With these signals he will not be asking his handlers for immediate attention, as he would with pain signals. But such pleasing signals will give Watson II’s handlers an immediate as well as a long-term incentive to reciprocate—to give Watson II pleasure and to avoid giving him pain.
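
    The claim that operant conditioning is itself an evolutionary process can be sketched in a few lines of code. In the toy loop below (payoff function and numbers invented for illustration, not drawn from Staddon and Simmelhag or from any data), behavioral patterns replicate with variation, and patterns followed by better consequences come to dominate; here a “pattern” is represented only by how far it extends in time.

```python
# A bare-bones sketch of selection by consequences acting on behavioral patterns:
# patterns replicate with variation, and those followed by better outcomes recur.
# The payoff function and all numbers are invented for illustration only.
import random

rng = random.Random(1)

def payoff(pattern_length):
    # Toy environment: more temporally extended patterns earn more, up to a
    # limit, but every unit of extension also costs something to sustain.
    return min(pattern_length, 10) * 2.0 - pattern_length * 0.5

population = [rng.randint(1, 3) for _ in range(50)]     # start with narrow patterns

for generation in range(200):
    # Selection: patterns with better consequences are more likely to recur.
    weights = [max(payoff(p), 0.01) for p in population]
    parents = rng.choices(population, weights=weights, k=len(population))
    # Replication with variation: each recurrence may lengthen or shorten slightly.
    population = [max(1, p + rng.choice((-1, 0, 1))) for p in parents]

print("typical pattern length after selection:",
      sorted(population)[len(population) // 2])         # settles near the optimum (10)
```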

Giving Watson Self-Control and Altruism

People often prefer smaller-sooner rewards to larger-later ones. A child, for example, may prefer one candy bar today to two candy bars tomorrow. An adult may prefer $1,000 today to $2,000 five years from today. In general, the further into the future a reward is, the less it is worth today. A bond that pays you $2,000 in five years is worth more today than one that pays you $2,000 in ten years. The mathematical function relating the present value of a reward to its delay is called a delay discount (DD) function. DD functions can be measured for individuals: different people discount money more or less steeply (they have steeper or shallower DD functions), and the steepness of a person’s DD function predicts his or her degree of self-control. As you would expect: children have steeper DD functions than adults; gamblers have steeper DD functions than non-gamblers; alcoholics have steeper DD functions than non-alcoholics; drug addicts have steeper DD functions than non-addicts; students with bad grades have steeper DD functions than students with good grades; and so forth (Madden & Bickel, 2009). Watson’s behavior, to be human behavior, needs to be describable by a DD function, too. It would be easy to build such a function (hyperbolic in shape, as it is among humans and other animals) into Watson II. But DD functions also change in steepness with the amount, quality, and kind of reward. We would have to build in hundreds of such functions, and even then it would be difficult to cover all eventualities. A better way to give Watson II DD functions would be to first give him the ability to learn to pattern his behavior over extended periods of time. As Watson II learned to extend his behavior in time, without making every decision on a case-by-case basis, he would, by definition, develop better and better self-control. A Watson II with self-control would decide how to allocate his time over the current month or year or five-year period, rather than over the current day or hour or minute. Patterns of allocation evolve in complexity and duration over our lifetimes by an evolutionary process similar to the evolution of complexity (e.g., the eye) over generations of organisms. This ability to learn and to vary temporal patterning would yield the DD functions we observe (Locey & Rachlin, 2011). To have human self-control, Watson II needs this ability.
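
    The hyperbolic form referred to above is conventionally written V = A/(1 + kD), where V is the present value of a reward of amount A delayed by D, and k indexes the steepness of discounting. The sketch below simply evaluates that formula for a few made-up values of k, showing how a steeper function turns a preference for $2,000 in five years into a preference for $1,000 now; the parameter values are illustrative, not empirical estimates for any group.

```python
# A minimal sketch of hyperbolic delay discounting, V = A / (1 + k*D), the form
# conventionally fitted to human and animal choice data. The k values below are
# made up for illustration; they are not estimates for any group.

def discounted_value(amount, delay_days, k):
    """Present value of `amount` delivered after `delay_days`, for an individual
    whose discounting steepness is k (larger k = steeper = less self-control)."""
    return amount / (1.0 + k * delay_days)

def prefers_larger_later(k, smaller=1000, larger=2000, delay_days=5 * 365):
    """Does this individual choose $2,000 in five years over $1,000 now?"""
    return discounted_value(larger, delay_days, k) > smaller

for k in (0.0005, 0.005, 0.05):                    # shallow to steep discounting
    choice = "larger-later" if prefers_larger_later(k) else "smaller-sooner"
    print(f"k = {k}: chooses the {choice} reward")
```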

    A person’s relation with other people in acts of social cooperation may be seen as an extension of her relation with her future self in acts of self-control. That there is a relation between human self-control and human altruism has been noted in modern philosophy (Parfit, 1984), economics (Simon, 1995), and psychology (Ainslie, 1992; Rachlin, 2002). Biologists have argued that we humans inherit altruistic tendencies—that evolution acts over groups as well as individuals (Sober & Wilson, 1998). Be that as it may, altruistic behavior may develop within a person’s lifetime by behavioral evolution—learning patterns (groups of acts) rather than individual acts. There is no need to build altruism into Watson. If Watson II has the ability to learn to extend his patterns of behavior over time, he may learn to extend those patterns over groups of individuals. The path from swinging a hammer to building a house is no different in principle than the path from building a house to supporting one’s family. If Watson can learn self-control, then Watson can learn social cooperation. (See Chapter 8 for a more detailed discussion of delay and social discount functions and their relation to self-control and altruism.)

Watson in Love

Robots (or pairs or groups of them) may design other robots. But I am assuming that IBM or a successor will continue to manufacture Watson throughout its development. Previously in this chapter, I claimed that Watson has no interests different from IBM’s. But that does not mean that Watson could not evolve into Watson II. IBM, a corporation, itself has evolved over time. If Watson’s development is successful, it will function within IBM like an organ in an organism; as Wilson and Wilson (2008) point out, organ and organism evolve together at different levels. The function of the organ subserves the function of the organism because if the organism dies, the organ dies (unless it is transplanted).

    Watson II would not reproduce or have sex. Given his inorganic composition, it would be easier in the foreseeable future for IBM to manufacture his clones than to give him these powers. Would this lack foreclose love for Watson II? It depends on what you mean by love. Let us consider Plato on the subject. Plato’s dialogue The Symposium consists mostly of a series of speeches about love. The other participants speak eloquently, praising love as a non-material good. But Socrates expands the discussion as follows:

        “Love, that all-beguiling power,” includes every kind of longing for happiness and for the good. Yet those of us who are subject to this longing in the various fields of business, athletics, philosophy and so on, are never said to be in love, and are never known as lovers, while the man who devotes himself to what is only one of Love’s many activities is given the name that should apply to all the rest as well. (Symposium, 205d)

What Plato is getting at here, I believe, is the notion, emphasized by the twentieth-century Gestalt psychologists, that the whole is greater than the sum of its parts—that combinations of things may be better than the sum of their components. To use a Gestalt example, a melody is not the sum of a series of notes. The pattern of the notes is what counts. The melody remains the same, and retains its emergent value, in one key as in another—with an entirely different set of notes.

    A basketball player may sacrifice her own point totals for the sake of the team. All else being equal, a team that plays as a unit will beat one in which each individual player focuses solely on her own point totals. Teams that play together, teams on which individuals play altruistically, will thus tend to rise in their leagues. In biological evolution, the inheritance of altruism has been attributed to natural selection on the level of groups of people (Sober & Wilson, 1998; Wilson & Wilson, 2008). In behavioral evolution, Locey and I claim, altruism may be learned by a corresponding group-selection process (Locey & Rachlin, 2011).

    According to Plato, two individuals who love each other constitute a kind of team that functions better in this world than they would separately. The actions of the pair approach closer to “the good,” in Plato’s terms, than the sum of their individual actions. How might Watson II be in love in this sense? Suppose the Chinese built a robot—Mao. Mao, unlike Watson II, looks like a human and moves around. He plays ping-pong, basketball, and soccer, and of course swims. He excels at all these sports. He is good at working with human teammates and at intimidating opponents, developing tactics appropriate for each game. However, good as Mao is, wirelessly connecting him to Watson II vastly improves the performance of each. Mao gets Watson II’s lightning-fast calculating ability and vast memory. Watson II gets Mao’s knowledge of human frailty and reading of human behavioral patterns, so necessary for sports—not to mention Mao’s ability to get out into the world. Watson II, hooked up to Mao, learns faster; he incorporates Mao’s experience with a greater variety of people, places, and things. Watson II is the stay-at-home intellectual; Mao is the get-out-and-socialize extrovert. [One is reminded of the detective novelist Rex Stout’s pairing of Nero Wolfe (so fat as to be virtually housebound, but brilliant) and Archie Goodwin (without Wolfe’s IQ but full of common sense, and mobile). Together they solve crimes. It is clear from the series of novels that their relationship is a kind of love—more meaningful to Archie, the narrator, than his relationships with women.]

    In an influential article, the philosopher John Searle (1980) argued that it would be impossible for a computer to understand Chinese. Searle imagines a computer that has memorized all possible Chinese sentences and their sequential dependencies and simply responds to each Chinese sentence with another Chinese sentence, as a Chinese person would do. Such a computer, Searle argues, would pass the Turing test but would not know Chinese. True enough. Searle’s argument is not dissimilar to Dennett’s argument that a computer cannot feel pain (although these two philosophers disagree in many other respects). But, like Dennett with pain, Searle ignores the function, in the world, of knowing Chinese. Contrary to Searle, consider a computer that could use Chinese in subtle ways to satisfy its short- and long-term needs—to call for help in Chinese, to communicate its needs to Chinese speakers, to take quick action in response to warnings in Chinese, to attend conferences conducted in Chinese, to write articles in Chinese that summarized or criticized the papers delivered at those conferences. That computer would know Chinese. That is what it means to know Chinese—not to have some particular brain state or pattern of brain states that happens to be common among Chinese speakers.

    It is important that Watson II and Mao each understand the other’s actions. Watson II will therefore know Chinese (in the sense outlined above) as well as English. Each will warn the other of dangers. Each will comfort the other for failures. Comfort? Why give or receive comfort? So that they may put mistakes of the past behind them and more quickly attend to present tasks. Mao will fear separation from Watson II and vice versa. Each will be happier (perform better) with the other than when alone. To work harmoniously together, each machine would have to slowly learn to alter its own programs from those appropriate to its single state to those appropriate to the pair. It follows that were they to be suddenly separated, the functioning of both would be impaired. In other words, such a separation would be painful for them both. Watson II, with annoying lights and sounds, and Mao, in Chinese, would complain. And they each would be similarly happy if brought together again. Any signs that predicted a more prolonged or permanent separation (for instance, if Mao should hook up with a third computer) would engender still greater pain. For the handlers of both computers, as well as for the computers themselves, the language of love, jealousy, pain, and pleasure would be useful.

    But, as with pain, love alone could not make a computer human. Perception, thought, hope, fear, pain, pleasure, and all or most of the rest of what makes us human would need to be present in its behavior before Watson II’s love could be human love.

What We Talk about When We Talk about Love

Let us consider whether Watson II might ever lie about his love, his pleasure, his pain, or other feelings. To do that, we need to consider talk about feelings separately from having the feelings themselves. In some cases they are not separate. In the case of pain, for example, saying “Ouch!” is both a verbal expression of pain and part of the pain itself. But you might say, “I feel happy,” for instance, without that verbal expression being part of your happiness itself. We tend to think that saying “I feel happy” is simply a report of a private and internal state. But, as Skinner (1957) pointed out, this raises the question of why someone should bother to report to other people her private internal state, a state completely inaccessible to them, a state that could have no direct effect on them and no meaning for them. After all, we do not walk down the street saying “the grass is green” or “the sky is blue,” even though that may be entirely the case. If we say those things to another person we must have a reason for saying them. Why then should we say, “I am happy,” or “I am sad,” or “I love you”? The primary reason must be to tell other people something about how we will behave in the future. Such information may be highly useful to them; it will help in their dealings with us over that time. And if they are better at dealing with us, we will be better at dealing with them. The function of talking about feelings is to predict our future behavior, to tell other people how we will behave. Another function of Watson II’s language may be to guide and organize his own behavior—as we might say to ourselves, “this is a waltz,” before stepping onto the dance floor. But how do we ourselves know how we will behave? We know because in this or that situation in the past we have behaved in this or that way and it has turned out for the good (or for the bad). What may be inaccessible to others at the present moment is not some internal, essentially private, state but our behavior yesterday, the day before, and the day before that. It follows that another person, a person who is close to us and observes our behavior in its environment from the outside (and therefore has a better view of it than we ourselves do), may have better access to our feelings than we ourselves do. Such a person would be better at predicting our behavior than we ourselves are. “Don’t bother your father, he’s feeling cranky today,” a mother might say to her child. The father might respond, “What are you talking about? I’m in a great mood.” But the mother could be right. This kind of intimate familiarity, however, is rare. Mostly we see more of our own (overt) behavior than others see. We are always around when we are behaving. In that sense, and in that sense only, our feelings are private.

    Given this behavioral view of talk about feelings, it might be beneficial to us at times to lie about them. Saying “I love you” is a notorious example. That expression may function as a promise of a certain kind of future behavior on our part. If I say “I love you” to someone, I imply that my promised pattern of future behavior will not just cost me nothing but will itself be of high value to me. It may be, however, that in the past such behavior has actually been costly to me. Hence, I may lie. The lie may or may not be harmless but, in either case, I would be lying not about my internal state but about my past and future behavior.

    You could be wrong when you say “I love you,” and at the same time not be lying. As discussed previously, our perception (our discrimination) between present and past conditions may lack perspective. (This may especially be the case with soft music playing and another person in our arms.) Thus, “I love you” may be perfectly sincere but wrong. In such a case, you might be thought of as lying to yourself. The issue, like all issues about false mental states, is not discrepancy between inner and outer but discrepancy between the short term and the long term.

    So, will Watson II be capable of lying about his feelings and lying to himself about them? Why not? Watson II will need to make predictions about his own future behavior; it may be to his immediate advantage to predict falsely. Therefore he may learn to lie about his love, as he may learn to lie about his pain as well as his other mental states. Moreover, it will be more difficult for him to make complex predictions under current conditions when time is short than to make them at his leisure. That is what it takes to lie to himself about his feelings.

Does the Mechanism Matter?

Let us relax our self-imposed constraints on appearance and bodily movement and imagine that robotics and miniaturization have come so far that Watson II (like Mao) can be squeezed into a human-sized body that can move like a human. Instead of the nest of organic neural connections that constitutes the human brain, Watson II has a nest of silicon wires and chips. Now suppose that the silicon-controlled behavior is indistinguishable to an observer from the behavior controlled by the nest of nerves. The same tears (though of different chemical composition), the same pleas for mercy, and the same screams of agony that humans have are added to the behavioral patterns discussed previously. Would you say that the nest of nerves is really in pain while the nest of silicon is not? Can we say that the writhing, crying man is really in pain while the similarly writhing, crying robot is not? I would say no. I believe that a comprehensive behavioral psychology would not be possible if the answer were yes; our minds and souls would be inaccessible to others, prisoners within our bodies, isolated from the world by a nest of nerves.

    Nevertheless, many psychologists, and many behaviorists, will disagree. (They would be made uncomfortable by Dolly II, and would probably get a divorce, were she to reveal to them, perfect as she was, that she was manufactured, not born.) Why would such an attitude persist so strongly? One reason may be found in teleological behaviorism itself. I have argued that our status as rational human beings depends on the temporal extent of our behavioral patterns. The extent of those patterns may be expanded to events prior to birth—to our origins in the actions of human parents, as compared to Watson II's origins in the actions of IBM. Those who would see Watson II as non-human because he was manufactured, not born, might go on to say that it would be worse for humanity were we all to be made as Watson II may be made. To me, this would be a step too far. We are all a conglomeration of built-in and environmentally modified mechanisms anyway. And no one can deny that there are flaws in our current construction.

Acknowledgment

Copyright 2013 by the Association for Behavior Analysis. Reprinted with permission of the publisher. Because this chapter includes comments by others based on the reprinted article (Rachlin, 2012b), I have not made any changes in this version, other than stylistic and grammatical ones. The chapter therefore contains repetitions of arguments, examples, and analogies, some lifted directly from earlier chapters, for which I apologize to the careful reader.

Two Commentaries

1. Minding Rachlin’s Eliminative Materialism

J. J. MCDOWELL, EMORY UNIVERSITY

Evidently, Rachlin (2012b), like many other scientists and philosophers, finds problematic the internality, privacy, ineffability, and non-physicality of mental events such as consciousness of the external world, the subjective experience of pain, qualia, raw feels, and so on. Philosopher of mind Colin McGinn (1994) discussed various ways in which thinkers have tried to deal with this dilemma, one of which is to “eliminate the source of trouble for fear of ontological embarrassment” (p. 144). This is what Rachlin has done. By identifying mental events and states with extended patterns of behavior, he has dispatched the phenomenology of consciousness tout de suite, leaving behind a purely materialist account of human behavior. It follows immediately that a machine can be human, provided only that it can be made to behave as a human behaves over extended periods of time. The only problem that remains, then, is to determine what patterns of behavior constitute “behaving like a human,” and this is what Rachlin addresses in his paper.

    Thirty-two years ago, noted philosopher John Searle (1980) published a paper in Behavioral and Brain Sciences in which he presented his now famous Chinese Room argument against the computational theory of mind that is entailed by cognitive science and much artificial intelligence research. Rachlin was a commentator on that article and in his published remarks made arguments similar to those advanced in the paper currently under consideration. In his response to Rachlin, Searle said "I cannot imagine anybody actually believing these [i.e., Rachlin's] views . . . . what am I to make of it when Rachlin says that 'the pattern of the behavior is the mental state'?. . . . I therefore conclude that Rachlin's form of behaviorism is not generally true" (p. 454). Rachlin's version of eliminative materialism will be regarded as not generally true by anyone who, like Searle, believes that an adequate account of human behavior must accept as real and deal with the phenomenology of consciousness. And their number is legion. In an earlier paper, Rachlin (1992) commented on widespread "antibehaviorist polemics from psychologists of mentalistic, cognitive, and physiological orientations as well as from philosophers of all orientations" (p. 1381). This sentiment is directed toward behaviorism as philosophy of mind. It gives rise to charges that behaviorism is ludicrous (Edelman, 1990) and an embarrassment (Searle, 2004), and not generally true. Unfortunately, such sentiments and charges often generalize to behavior analysis, the science of behavior, which of course can be practiced in the absence of any commitment to a philosophy of mind, and probably usually is.

    For most people, philosophers and scientists included, conscious experience with a first-person, subjective ontology is prima facie evident as a property of human existence. Res ipsa loquitur. Rachlin does not explicitly deny this, but neither does he assert it. To assert it, given his teleological behaviorism, it seems that he would have to revert to a form of dualism, probably epiphenomenalism. There would have to be something in addition to temporally extended patterns of behavior, which would be the conscious experience. On the other hand, if he denied it, well, then Searle and many others would continue to wonder how he, or anyone, could believe such a thing.

    There are at least three alternatives to Rachlin’s eliminative materialism. One is to practice the science and clinical application of behavior analysis without a commitment to a specific philosophy of mind. This is not a bad alternative. Philosophers, and perhaps some scientists, might eventually sort things out and develop an understanding of conscious experience that could inform the science and clinical practice of behavior analysis in a useful way. It is important to note that this alternative entails agnosticism about mind, not eliminativism. A second alternative is to revert to a form of dualism that acknowledges consciousness as res cogitans. This is probably not a good idea because dualism, like eliminative behaviorism (behaviorism that entails eliminative materialism), is roundly rejected by philosophers and scientists of all persuasions (e.g., Searle, 2004). In our scientifically minded world we do not like the idea that a substance could exist that is not matter (dualism), just as we do not like the idea of denying what appears to be right under our noses (eliminative behaviorism). A third alternative is to try to reconcile the first-person, subjective ontology of consciousness with the materialist science of behavior. This is a tall order, but philosophers of mind have been working on a general form of this problem for many years. This third alternative will be considered further in the remainder of this paper.

The Phenomenology of Consciousness

What is consciousness? Many philosophical treatises have been written to address this question. This vast literature cannot be reviewed fully here, but we can get at least a basic understanding of the nature of consciousness by considering what is called intentionality. This is an idea that was introduced by Franz Brentano (1874/1995) and was later developed by Edmund Husserl (1900–1901/2001) and Jean-Paul Sartre, among others. Intentionality refers to the phenomenal fact that consciousness always appears to have an object. Consciousness is not an entity that exists on its own, like, say, a chair. Instead, it is a process or action, a consciousness of something. It is a relation toward objects in the world. Consciousness is not a thing, as its nominative grammar misleadingly suggests (cf. Wittgenstein, 1953/1999). It will be helpful to keep in mind the Husserlian motto: consciousness is always consciousness of something.

    In a remarkable paper, Sartre (1947/1970) rhapsodized that Husserl had delivered us from the “malodorous brine of the mind” in which contents of consciousness float about as if having been consumed. No, said Husserl, consciousness, or mind, cannot have contents because it is not a substance with, among other things, an inside. It is not a stomach. The objects of conscious experience are not assimilated into the mind, according to Husserl, they remain out in the world, the point that Sartre celebrated in his brief paper. Consciousness is always consciousness of these external objects, tout court. Importantly, Husserlian intentionality also extends to consciousness of ourselves:

        We are . . . delivered from the "internal life" . . . for everything is finally outside: everything, even ourselves. Outside, in the world, among others. It is not in some hiding-place that we will discover ourselves; it is on the road, in the town, in the midst of the crowd, a thing among things, a human among humans. (Sartre, 1947/1970, p. 5)

This understanding of conscious experience is echoed in Rachlin’s view that extended patterns of behavior in the external natural and social world are constitutive of humanity (and consciousness). Hilary Putnam (1975) advanced an analogous, externalist, view of meaning, and Maurice Merleau-Ponty’s (1942/1963) phenomenology and psychology likewise have a strong externalist cast (Noë, 2009).

    Sartre (1957/1960) further developed his view of consciousness in The Transcendence of the Ego: An Existentialist Theory of Consciousness, in which he discussed what might be considered two classes of conscious experience. The first, which will be referred to as primary conscious experience, does not entail an “ego,” that is, a reference to the person who is having the conscious experience:

        When I run after a street car, when I look at the time, when I am absorbed in contemplating a portrait, there is no I. There is consciousness of the streetcar-having-to-be-overtaken, etc . . . . In fact, I am then plunged into the world of objects; it is they which constitute the unity of my consciousnesses; it is they which present themselves with values, with attractive and repellent qualities—but me, I have disappeared . . . . There is no place for me on this level. (pp. 48–49, italics in the original)

It is possible, however, to reflect on having a conscious experience, which then creates, as it were, the ego as a transcendent object, according to Sartre. This will be referred to as secondary conscious experience. In an act of reflection, “the I gives itself as transcendent” (Sartre, 1957/1960, p. 52), that is, as an object of consciousness, and hence as an object in the world. Furthermore, “the I never appears except on the occasion of a reflective act” (p. 53). This represents a radical break with Husserl, who held that the I always stood on the horizon of, and directed, conscious experience. For Sartre, the ego does not exist except in an act of reflection, and then it is an object of consciousness just like any other.

    If phenomenology is the study of consciousness, and if human consciousness is always consciousness of the external world in which humans are immersed at every moment of their waking lives, then the study of consciousness must be the study of human existence. In this way, for Sartre, phenomenology becomes existentialism. What is important is not what a human being has as a set of properties—that is, his essence—but what the human being does, his existence or being-in-the-world (Heidegger, 1927/1962). The resonance of this perspective with a behavioral point of view is noteworthy. I have discussed some of the common grounds between behavior analysis and existential philosophy in other articles (McDowell, 1975, 1977). The task of the existential phenomenologist is to describe human existence, a topic that Sartre (1956/1966) turned to in Being and Nothingness. The task of the behavior analyst is to understand human behavior, which is a different approach to what is, in many ways, the same thing.

    The elements of Sartre’s understanding of consciousness discussed here can be summarized in three statements: (a) Consciousness is always consciousness of something and hence it is a process or action, a relation to things in the world. It follows that consciousness is not a substance or object (i.e., it is not res cogitans), and consequently it is not something that can have contents, like a stomach, or that can be located in a part of space, such as the brain, the head, or elsewhere in the body. (b) The “ego” or I is not something that stands behind and directs consciousness, as Husserl supposed. Instead, the I is given as an object of consciousness in an act of reflection; it is an object of consciousness like any other. Furthermore, the I does not exist except in an act of reflection. It follows that at least two classes of conscious experience can be distinguished, namely, primary consciousness, which is consciousness of objects in the world that does not include a reference to the consciousness of the subject, and secondary consciousness, which entails an act of reflection and hence includes a reference to the consciousness of the subject. These types of consciousness are not fundamentally different inasmuch as both are conscious experiences of something. But in secondary consciousness, the something entails a reference to the consciousness of the subject and hence creates the Sartrean I. (c) If conscious experience always exists with reference to the world, then it is probably best to study and try to understand it in the context of the human being acting in the world, which is to say, in the context of human existence, action, and behavior in the natural and social environment.

    No doubt some philosophers would take issue with some of these points. Searle (2004), for example, would probably want to include a few additional considerations. But I think most philosophers would agree that this is at least a good start to understanding the nature of consciousness. Before going any further, it might be worthwhile to consider a possible contradiction. Why, if consciousness is right under our noses and therefore is immediately apparent, do we need a phenomenological analysis of it? Why do we need philosophers to tell us about the nature of conscious experience? Don’t we already know, because we have it? The answer to these questions is that the brute fact of consciousness is what is right under our noses. It is possible that a careful analysis could uncover something important about its properties and nature, or could refine our understanding of it. If, when a philosopher says something like “consciousness is always consciousness of something,” we examine our own conscious experience and find that it is consistent with the philosopher’s statement, then we have benefited from the philosophical analysis.

    Since the work of Husserl and Sartre, the philosophy of mind has come to be influenced strongly by neuroscience. Many contemporary philosophers of mind seek a physical, that is, a materialist, account of consciousness, and most (maybe even all) look to brain function as the source of conscious experience. This focus on the brain is likely to be problematic in view of the foregoing analysis of consciousness as a property of human interaction with the natural and social environment. Nevertheless, there are some ideas in these brain-based philosophies that are worth considering.

Brain-Based Philosophies of Mind

Three points of view will be discussed in this section, namely, those advanced by John Searle, Thomas Nagel, and Colin McGinn. Searle puts forward the idea that conscious experience is a natural property of the material function of the brain, and therefore something that simply happens as the brain goes about its business of regulating commerce between the whole organism and its environment. Thomas Nagel's view is similar, except that he is wary of contamination by remnants of Cartesian dualism. A view like Searle's may be an overreaction of a kind, and hence too insistent on res extensa. An alternate possibility, according to Nagel, is that a new conceptual framework is required that would allow us to consider both brain function and conscious experience as a single type of thing, undercutting, as it were, the duality of Descartes. Colin McGinn likewise advances a point of view much like Searle's, but he concludes, pessimistically, that a full understanding of how brain processes give rise to conscious experience is, in principle, beyond human comprehension.

John Searle

Searle (2004) believes that much confusion was created by, and has persisted because of, the language of Cartesian dualism. Res extensa and res cogitans are too starkly bifurcated. What is needed is an expanded notion of the physical that permits a subjective element. Such a notion would allow conscious experience to be a part of the physical world. Consciousness, according to Searle (1992, 2004), is caused by brain processes and therefore is causally reducible to them. Conscious experience nevertheless retains its subjective, first-person ontology and therefore is not ontologically reducible to brain processes. The two, neural activity and conscious experience, are different aspects, or levels of description, of the same thing, in the same way that, say, the molecular structure of a piston and the solidity of the piston are different aspects, or levels of description, of a piston (Searle’s example). Importantly, however, in the case of conscious experience and the brain processes that cause it, the two aspects have different ontologies, that is, the reduction of conscious experience to brain processes is causal but not ontological.

Thomas Nagel

Thomas Nagel’s view falls somewhere between Searle’s and McGinn’s. Early on, Nagel expressed doubt about whether existing conceptual categories and schemes could be used to obtain an effective understanding of consciousness from a physicalist perspective (Nagel, 1974). Searle wanted to expand the concept of the physical to include subjective elements. In contrast, Nagel wants to expand the concept of the subjective to include elements of the physical (Nagel, 1998, 2002). The first step is to acknowledge that the manifest properties of conscious experience, such as its subjectivity and first-person ontology, do not exhaust its nature. In other words, there also may be physical properties that are not manifest in the conscious experience itself. This is an important and plausible idea, and it bears repeating. The manifest properties of conscious experience, in particular its first-person, subjective ontology, may not exhaust its nature. There may be other, perhaps physical, properties of consciousness that we do not experience directly. For example, as I type these words I experience the movement of my arms, hands, and fingers on and over the keyboard, but I do not experience the many neuron firings and electrochemical events that I know are occurring in my central and peripheral nervous system, and that in principle could be detected with appropriate instrumentation. It is possible that my conscious experience of my body entails these neuron firings and chemical events, rather than just going along with them.

    While Searle expresses his view with assertive certainty, Nagel is more tentative. To say that the mental supervenes on the physical, which Nagel believes to be the case (Nagel, 1998), does not constitute a solution to the problem. Instead, it is “a sign that there is something fundamental we don’t know” (p. 205). Nagel’s (2002) view is that a radically different conceptual framework is required in order to understand these matters fully:

        If strict correlations are observed between a phenomenological and a physiological variable, the hypothesis would be not that the physiological state causes the phenomenological [which is Searle’s view], but that there is a third term that entails both of them, but that is not defined as the mere conjunction of the other two. It would have to be a third type of variable, whose relation to the other two was not causal but constitutive. This third term should not leave anything out. It would have to be an X such that X’s being a sensation and X’s being a brain state both follow from the nature of X itself, independent of its relation to anything else. (p. 46)

X would have to be something “more fundamental than the physical” (Nagel, 2002, p. 68). I understand this to mean that X must be something conceptually more fundamental than the physical, not a substance that is more fundamental than the physical. An example would be a field (Nagel’s example), analogous to the electromagnetic, gravitational, and Higgs fields that have proved to be useful in physics. An appropriately conceived field might be understood to give rise to both brain processes and conscious experience, and also to explain their connection and interaction. Nagel (2002) goes on:

        ...we may hope and ought to try as part of a scientific theory of mind to form a third conception that does have direct transparently necessary connections with both the mental and the physical, and through which their actual necessary connection with one another can therefore become transparent to us. Such a conception will have to be created; we won’t just find it lying around. A utopian dream, certainly: but all the great reductive successes in the history of science have depended on theoretical concepts, not natural ones—concepts whose whole justification is that they permit us to give reductive explanations. (p. 47)

Colin McGinn

While Nagel is hopeful that a true understanding of consciousness in the material world can be achieved, Colin McGinn (1993, 2002) has given up hope. Like Searle and Nagel, McGinn believes that conscious processes supervene on brain processes but are not ontologically reducible to them. Searle believes that this provides a reasonable understanding of consciousness, but Nagel finds this view incomplete. McGinn also disagrees with Searle, but unlike Nagel, believes it will never be possible, even in principle, to understand the relationship between conscious experience and material reality. There is a reason that efforts to understand this relationship for thousands of years have failed. It can’t be done. This does not mean there is no relationship, it just means that the human intellect is too limited to understand it. McGinn refers to this as cognitive closure. Just as a dog can know nothing about quantum physics because of its intellectual limits, a human cannot fully understand conscious experience and its relation to the material world because of his or her intellectual limits (McGinn’s example).

Making Sense of the Brain-Based Philosophies

Searle seems confident that an expanded notion of the physical, which includes a subjective element, together with the understanding that conscious experience is causally but not ontologically reducible to physical states in the brain, is a reasonable solution to the mind-body problem. But how can we be sure this is true? Is there an experiment that could conceivably produce results showing Searle's solution to be false, that is, that conscious experience can exist independently of brain states? If not, then McGinn's pessimism may be warranted. But we can't be sure that McGinn's idea of cognitive closure is true either, because it is not possible to prove a negative. In view of these uncertainties, it may be that Nagel has the best perspective. A new analytic framework that entails a conceptual X "more fundamental than the physical" may help us to understand in a transparent way how conscious experience is related to material reality.

Reconciling Conscious Experience with the Material World

Missing from all these philosophies, no doubt because they are so taken with contemporary neuroscience, is an understanding of how the natural and social environment figures into the workings of brain and consciousness. Where is consideration of the streetcar-having-to-be-overtaken, the watch-having-to-be-consulted, the portrait-having-to-be-understood? The earlier phenomenological analysis revealed the importance of such considerations, but even from a purely naturalistic perspective their importance is obvious; there would be no brain and no consciousness if there were no external environment. Indeed, the brain and consciousness are for the world of human existence, action, and behavior; they are for overtaking the streetcar, consulting the watch, and understanding the portrait. Contemporary philosopher of mind Alva Noë agrees:

        The conscious mind is not inside us; it is . . . a kind of attunement to the world, an achieved integration . . . . The locus of consciousness is the dynamic life of the whole, environmentally plugged-in person or animal. (Noë, 2009, p. 142)

    Let us acknowledge that Nagel’s view may constitute the beginning of a reconciliation, but that his unifying conceptual X should be sought not in the context of brain and consciousness alone, but in the context of brain, consciousness, and world. This may be our best hope for reconciling conscious experience with the material world of brain and behavior. Obviously, the reconciliation is far from complete—indeed, it has hardly begun. Furthermore, its ultimate achievement is far from certain. But this perspective at least allows us to talk about consciousness in a reasonable way from a materialist perspective.

Mental Causation

One problem a reconciliation like this poses is the possibility of mental causation. If we admit consciousness into our account of human action, then aren’t we also admitting the possibility that mental events can cause physical events, such as bodily motion? The answer to this question is yes, but it is not a necessary feature of such a reconciliation. Searle (2004), for example, takes mental causation as a given because it is manifest in conscious experience. But recall that for Searle, a mental event is an aspect of, and is itself caused by, a physical event, specifically a brain process. Hence it is the brain process, which entails both a physical and a mental aspect, that is the actual cause. Given Searle’s expanded understanding of the physical, mental causation really consists in a physical event causing another physical event. A brain process causes, say, behavior, and also the conscious experience of the mental aspect of the brain process causing the behavior. So far so good. But what causes the brain process in the first place? Searle entertains the possibility that the organism as an agent may initiate the brain process. This is the 2,400-year-old Aristotelian idea of an internal principle of movement, a principle that Aristotle used to explain the motion of all natural objects, including celestial and sub-lunary bodies, and animate beings and their parts (McDowell, 1988). But it could be that the causes of motion instead lie outside the bodies. In the case of celestial and sub-lunary bodies, the Aristotelian internal causes were abandoned in favor of external forces during the Middle Ages (for sub-lunary bodies) and during the Enlightenment (for celestial bodies; McDowell, 1988). Similarly, in the case of animate beings and their parts, classical behavior analysis has sought to replace the internal motive principle with an external cause, namely, the organism’s past adaptive interaction with its environment. Evidently then, admitting consciousness, and with it the experience of mental causation, does not necessarily introduce agency into an account of human action and behavior. Instead, it is possible that mental causation is only an appearance, in the same way that the setting sun and the stationariness of the earth are appearances.

A Different Approach: Build a Conscious Artifact

The interesting work of thinker, neuroscientist, and Nobel laureate Gerald Edelman brings us back to a consideration of machine humanity. From a philosophy of mind similar to Searle’s, Edelman (1990) goes on to develop an elaborate and detailed neurobiological theory of consciousness that is based on his theory of neuronal group selection (Edelman, 1987). Neuronal group selection is a theory about how the brain, understood as a selectional system, regulates the behavior of whole organisms in their environments. Briefly, neural circuits subserving behavior are selected by value systems in the brain, which are activated when the organism’s behavior produces consequences in the environment. This is a theory about the functioning of whole brains in whole organisms that are actively engaged with their environments, and as such is a theoretical rarity in contemporary neuroscience. Interestingly, Edelman’s theory is consistent with our understanding of the phenomenology of consciousness, and it shares features with a behavior analytic point of view, both in terms of its focus on whole organisms behaving in environments that may provide consequences for behavior, and in terms of its selectionist principles. The latter have been discussed extensively in behavior analysis (e.g., Skinner, 1981; Staddon & Simmelhag, 1971) and have been used explicitly in behavior-analytic theory building, which has met with some success (McDowell, 2004; McDowell, Caron, Kulubekova, & Berg, 2008). I have summarized Edelman’s theory of neuronal group selection and discussed its connection to selectionism in behavior analysis in another article (McDowell, 2010).
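
    To make the shared selectionist logic a little more concrete, the following toy sketch (in Python) shows behavior being selected by its consequences: variants of a response that produce more reinforcement are copied, with small variation, into the next "generation" of the repertoire. It is only an illustration of the general principle; it is not Edelman's neuronal group selection or McDowell's evolutionary theory of behavior dynamics, and the reinforced target value and the form of the "environment" are assumptions invented for the example.

```python
import random

# Toy illustration of selection by consequences. NOT Edelman's neuronal group
# selection or McDowell's evolutionary theory of behavior dynamics; only a
# minimal sketch of the shared logic: behavioral variants that produce
# reinforcing consequences become more frequent in the repertoire.

TARGET = 0.7  # hypothetical response value that the environment reinforces most


def reinforcement(behavior):
    """Environmental consequence: stronger the closer the behavior is to TARGET."""
    return 1.0 - abs(behavior - TARGET)


def next_generation(population, variation=0.05):
    """Copy behaviors into the next generation in proportion to the
    reinforcement they produced, with small random variation."""
    weights = [reinforcement(b) for b in population]
    parents = random.choices(population, weights=weights, k=len(population))
    return [min(1.0, max(0.0, p + random.gauss(0.0, variation))) for p in parents]


population = [random.random() for _ in range(50)]  # initial, unselected repertoire
for _ in range(200):
    population = next_generation(population)

mean = sum(population) / len(population)
print(f"mean emitted behavior after selection: {mean:.2f} (reinforced target: {TARGET})")
```

    Even in this stripped-down form, the repertoire drifts toward whatever the environment happens to reinforce, which is the sense in which selectionist accounts locate the causes of behavior in past consequences rather than in an internal agent.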

    Edelman believes that a machine built with a selectionist nervous system would have at least primary conscious experience. How can one know for sure? Build the machine. Edelman believes this is possible and, moreover, he is committed to making it happen. In a recent paper, he and two colleagues assert “that there is now sufficient evidence to consider the design and construction of a conscious artifact” (Edelman, Gally, & Baars, 2011, p. 1). Determining that such an artifact is conscious (no mean feat, as discussed below) would confirm that material objects and processes can in fact give rise to conscious experience.

Discussion

A reasonable alternative to Rachlin’s eliminative materialism is to admit the reality of first-person subjective conscious experience, and seek to reconcile it with the material world by means of Nagel’s X, understood in the context of the whole organism (brain included) behaving adaptively in the natural and social environment.

Machine Humanity

This alternative implies that for a machine to be judged human it must have conscious experiences that are characterized by a first-person subjective ontology, and it also must be able to do all the things that Rachlin said it must be able to do. Is this possible? We have seen that Edelman and his colleagues believe so. Among other philosophers and scientists, opinions vary. No doubt building a conscious machine would be a complicated and challenging task. At least as complicated and challenging would be the task of determining whether the machine really was conscious. A small start might be made by testing it in mental rotation experiments, and comparing the results to those obtained from human subjects (Edelman, Gally, & Baars, 2011). Koch and Tononi (2011) suggested a visual oddity test in which the artifact would be required to tell the difference between a sensible picture, for example, a person sitting in a chair at a desk, and a nonsense picture, such as a person floating in the air above the desk. A third method of testing for consciousness would be to observe extended patterns of behavior, which would tell us something about the artifact’s conscious state. Rachlin, of course, would approve of this method (it is his tough Turing test), whether he thought the behavior was the consciousness or only indicative of it. He gave a good example of such testing in his commentary on Searle’s (1980) Chinese Room article:

        [A conscious robot] might answer questions about a story that it hears, but it should also laugh and cry in the right places; it should be able to tell when the story is over. If the story is a moral one the robot might change its subsequent behavior in situations similar to the ones the story describes. The robot might ask questions about the story itself, and the answers it receives might change its behavior later (p. 444).

This is probably the best of the three methods of determining consciousness. The more behavior that is observed, and the more complicated, subtle, temporally organized, and teleological it appears to be, the more convinced we would be that the robot had conscious experience. But can we be sure? I suspect we could be no more or less sure than we can be about another person’s consciousness. After lengthy experience with the machine (as in Rachlin’s Dolly II example) we may interact with it as a matter of course as if it were conscious. If pressed, we would admit that we were not absolutely certain that it was. But at some point, after extended interaction, during which there was no behavioral indication that the machine was not conscious, the absence of absolute certainty probably wouldn’t matter. At that point, for all intents and purposes, we would take the artifact to be sentient, which is what we do with other people. Does this mean that the extended behavior is the consciousness? No. According to the perspective developed here, it means that the existence of acceptable extended behavior necessarily implies the other aspect of its material realization, which is consciousness. The two go together. They are necessary co-manifestations of the more fundamental X of Nagel. In this sense, Nagel’s X is like dark matter. We don’t know what it is, but it helps us explain our observations of the world.

Consequences for Formal- and Efficient-Cause Behavior Analysis

Rachlin’s emphasis on temporally extended and organized patterns of behavior is valuable and interesting. Would his teleological behaviorism change in any way if its elimination of conscious experience were rescinded, and were replaced by a view that acknowledged the existence and reality of consciousness? Of course, it would entail this new understanding of conscious experience, but other than that, it seems there would be little, if any, change. The use of utility functions, the assertion of correlation-based causality, and so on, would remain the same, and the science based on these tools and ideas could proceed as it always has. If Nagel’s “utopian dream” were realized at some point, then Rachlin’s program might be affected, depending on what the X was asserted to be. But then again, it might not. The discovery of synapses, for example, dramatically improved our understanding of how neurons worked, but this information does not affect the science of behavior analysis in any discernible way. The discovery of the dopaminergic value system in the brain comes a bit closer because it deals with the rewarding properties of external events. But this information also does not affect the science of behavior analysis, except insofar as it confirms what behavior analysts knew in more general terms had to be the case anyway.

    The same can be said for classical, efficient-cause behavior analysis. Accepting the version of Nagel’s theory discussed here probably would not change its practice or clinical application in any discernible way, depending again on what Nagel’s X ultimately turned out to be. In view of these considerations, I feel confident in predicting that most (of the few?) behavior analysts who read these papers, when they return to their laboratories and clinics, will not give a second thought to the philosophy of mind that their science or clinical practice entails. However, if pressed by interlocutors who charge ludicrousness, embarrassment, or untruth, they may wish to call up some of these ideas as transcendent objects of consciousness and with them dispatch the naysayers forthwith.

Acknowledgment

I thank the Stadtbibliothek in Aschaffenburg, Germany, for providing a helpful and welcoming environment while I was working on this paper. I also benefited from interesting discussions with Andrei Popa about Howard Rachlin’s point of view. Nick Calvin, Andrei Popa, and Giovanni Valiante made many helpful comments on an earlier version of this paper, for which I am grateful. Finally, I thank Howard Rachlin, who was my PhD mentor, for my initial schooling in matters of behavior and mind. I hope I have not disappointed him. Copyright 2013 by the Association for Behavior Analysis. Reprinted with permission of the publisher and author.

2. What Would It Be Like to Be IBM’s Computer Watson?

HENRY D. SCHLINGER, JR., CALIFORNIA STATE UNIVERSITY, LOS ANGELES

Rachlin (2012a) makes two general assertions: (a) that “to be human is to behave as humans behave, and to function in society as humans function,” and (b) that “essential human attributes such as consciousness, the ability to love, to feel pain, to sense, to perceive, and to imagine may all be possessed by a computer” (p. 1). Although Rachlin’s article is an exercise in speculating about what would make us call a computer human, as he admits, it also allows us to contemplate the question of what makes us human. In what follows, I mostly tackle the second general assertion, although I briefly address the first assertion.

To Be or Not to Be Human

Without becoming ensnared in the ontological question of what it means to be human, let me just say that from a radical behavioral perspective, the issue should be phrased as: What variables control the response "human"? This approach follows from Skinner's (1945) statement of the radical behavioral position on the meaning of psychological terms. His position was that they have no meaning separate from the circumstances that cause someone to utter the word. Thus, when we ask what perception, imagining, consciousness, memory, and so on, are, we are really asking what variables evoke the terms at any given time. As one might guess, there are numerous variables in different combinations that probably evoke the response "human" in different speakers and at different times. Rachlin (2012a) claims that a "computer's appearance, its ability to make specific movements, its possession of particular internal structures—for example, whether those structures are organic or inorganic—and the presence of any non-material 'self,' are all incidental to its humanity" (p. 1). However, it could be argued that one's appearance and genetic (e.g., 46 chromosomes) and physiological structures (as behaviorists, we will omit the presence of any non-material "self") are not incidental to the extent that they, either alone or in some combination, evoke the response "human" in some speakers (e.g., geneticists, physiologists) at some times. For example, most people would probably call an individual with autism human because he or she has a human appearance and human genetic and physiological structures and behavior, even though he or she may lack language and the consciousness that is derived from it. But because the variables that control Rachlin's response "human" lie in the patterns of behavior of the individual organism or computer over time, we must wonder whether he could call this person "human." If this conception of humanity is at all troubling, as behavior analysts, we would have to at least agree with him that "[a] behavioral conception of humanity is better than a spiritual or neurocognitive conception. . . because it is potentially more useful." Once we accept his basic premise, we can move on to Rachlin's second general assertion: that a computer may possess all of the attributes listed above.

What Attributes Are Essentially Human?

Rachlin includes sensing, perceiving, consciousness, imagining, feeling pain, and being able to love as "essential human attributes" (later, he includes memory and logic in the list), but what does he mean by "essential"? There are two possibilities. The first meaning, which I would call the strong view, is that only humans possess these attributes; that is, the terms are applied only to humans. Thus, as Rachlin argues, the computer that possesses them would be human. The second meaning (the weak view) is that although other animals (or computers) might possess some of these attributes, to be called human, an individual must possess them all or must possess some (e.g., consciousness) that other organisms do not.

    The term “strong view” (of essential human attributes) is meant to mirror, though not precisely, the term strong AI used by Searle (1980) to refer to one of two views of artificial intelligence (AI). According to Searle, in strong AI, “the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states” (1980, p. 417). (Weak AI refers to using computers as a tool to understand human cognition, and it is therefore synonymous with the information-processing model of modern cognitive psychology; Schlinger, 1992.) Rachlin appears to support the strong view of human attributes (that only humans possess them) when he writes, “These are human qualities by definition.” Thus, if a computer possessed these attributes, it would be human. (Of course, an alternate conception is that if a computer could possess them, then they are not essentially human.)

    The strong view can be challenged simply by carrying out a functional analysis of terms such as sensing, perceiving, consciousness, and so on—that is, determining the variables that control our typical use of the terms (corresponding to the term’s definitions), and then looking to see whether such variables occur in other organisms. Thus, without too much debate, I believe that we can at least eliminate sensation and perception from the strong view of essential human qualities. Sensation, as the transduction of environmental energy into nerve impulses, and perception, as behavior under stimulus control, are clearly present in most other species. The remainder of Rachlin’s list of essential human attributes is trickier. Although some psychologists are willing to talk about animals being conscious and having feelings such as sadness and empathy, it is probably the case that such pronouncements are made based on some, but not all, of the behaviors exhibited by humans in similar situations. For example, regardless of any other behaviors, it is highly unlikely that other animals talk the way humans do when we are described as being empathetic. Describing non-humans with distinctly human-like terms is what leads some to level the charge of anthropomorphism.

    Most behaviorists would probably not subscribe to the strong view of essential human attributes—that a computer possessing them would be human. The weak view, on the other hand, is more defensible, but not without its problems. Let us return to the question posed by Rachlin, about whether a computer—IBM’s Watson—can possess these attributes.

Could Watson Possess Essential Human Attributes?

In what follows, I address each of the essential human attributes that Rachlin believes Watson could possess, and argue that the responses sensation, perception, consciousness, feeling, and loving as applied to Watson would be, at best, controlled by some, but not all, of the variables in humans that typically occasion the responses. Thus, the question of whether Watson can be made human may be moot. Perhaps a more relevant question is whether there is any justification to describe Watson, or any computer for that matter, with terms usually reserved for humans and some animals.

    Before addressing the list of attributes, however, there is a more important issue to tackle, namely, the origin of almost all of the attributes listed by Rachlin. This issue is at the heart of Rachlin's statement that "[t]he place to start in making Watson human is not at appearance or movement but at human function in a human environment." This statement raises the question of just what a human function in a human environment is.

    With the exception of sensation, which is built into biological organisms, the remaining attributes arise as a function of organisms interacting with their environment; and that environment consists of the natural environment as well as other organisms. In fact, one of the perennial problems in AI, at least for the first several decades of attempts to design and build computers that simulate human behavior, has been the failure on the part of researchers to recognize some important differences between computers and biological organisms, for example that organisms have bodies that sense and act upon the environment and their behavior is sensitive to that interaction; in other words, their behavior can be operantly conditioned (Schlinger, 1992). This is possible because organisms have bodies with needs and a drive to survive (Dreyfus, 1979). Of course, the “drive to survive” refers to the biological basis of staying alive long enough to increase the chances of passing on one’s genes. Based on Dreyfus’s critique of AI, Watson would need something akin to this drive. Moreover, from a behavioral perspective, a computer would have to be constructed or programmed with unconditional motivation and, like humans and other animals, be capable of acquiring conditioned motivations and reinforcers through learning. Rachlin’s Watson II does have needs, but they are primarily to answer questions posed to him by humans. In addition, he needs “a steady supply of electric power with elaborate surge protection, periodic maintenance, a specific temperature range, protection from the elements, and protection from damage or theft of its hardware and software.” Getting needs met by acting on the world presupposes another significant human attribute, the ability to learn in the ways that humans learn. In other words, Watson II’s behavior must be adaptive in the sense that successful behaviors (in getting needs met) are selected at the expense of unsuccessful behaviors. Such operant learning is the basis of behavior that we refer to as purposeful, intentional (Skinner, 1974), and intelligent (Schlinger, 1992, 2003); as I have argued, any AI device—and I would include Watson—must be adaptive, which means “that a machine’s ‘behavior’ in a specific context must be sensitive to its own consequences” (Schlinger, 1992, pp. 129–130).
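
    As a rough illustration of what "sensitive to its own consequences" could mean for a machine, here is a minimal operant-style sketch in Python, in which the strength of each response in a small repertoire is increased when the response is reinforced and weakened when it is not. The response names and reinforcement probabilities are hypothetical; the sketch is not a model of Watson or of any particular behavior-analytic theory.

```python
import random

# Minimal sketch of "behavior sensitive to its own consequences": the
# probability of each response is strengthened when the response is
# reinforced and weakened when it is not. Response names and reinforcement
# probabilities below are hypothetical, chosen only for illustration.

REINFORCEMENT_PROB = {"ask": 0.2, "answer": 0.8, "wait": 0.1}  # assumed environment
strengths = {response: 1.0 for response in REINFORCEMENT_PROB}  # initial strengths


def emit(strengths):
    """Emit a response with probability proportional to its current strength."""
    responses = list(strengths)
    return random.choices(responses, weights=[strengths[r] for r in responses])[0]


for _ in range(1000):
    response = emit(strengths)
    reinforced = random.random() < REINFORCEMENT_PROB[response]  # consequence
    # Operant-style update: reinforced responses gain strength, others decay slightly.
    strengths[response] += 0.1 if reinforced else -0.02
    strengths[response] = max(0.05, strengths[response])

total = sum(strengths.values())
for response, strength in strengths.items():
    print(f"{response}: relative strength {strength / total:.2f}")
```

    After enough trials, the response that most often produces reinforcement comes to dominate the repertoire; that selection of successful behavior at the expense of unsuccessful behavior is all that "adaptive" is taken to mean in this sketch.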

    However, even if we grant Watson II’s ability to get these needs met through his “behavior” and his ability to “learn,” the question remains of whether they are functionally similar to those of biological organisms. Either way, would he then be able to possess all the human attributes listed by Rachlin?

Sensation and Perception

As mentioned previously, sensation refers to the transduction of environmental energy into nerve impulses, and perception refers to behaviors under stimulus control (Schlinger, 2009; Skinner, 1953). Following from these basic definitions, could Watson II sense and perceive? Obviously, unless Watson II were built dramatically differently from Watson, he would not sense in the same way that organisms can. He would have no sense organs or sensory receptors that could respond to different forms of environmental energy. Using the information-processing analogy, we can say that information could be input in auditory, visual, or even, perhaps, in tactile form if he were constructed as a robot. And although this type of input could become functionally related to Watson II's output, I do not think we would want to call it sensation. At best, it is analogous to sensation. On the other hand, if by "perception" all we mean is behavior under stimulus control, I think we could describe Watson II as engaging in perceptual behavior to the extent that his "behaviors" are brought under the control of whatever input he is capable of, and assuming, even more important, that his behavior is sensitive to operant conditioning. However, even though describing Watson II as perceiving may be more accurate than describing him as sensing, it is still only analogous to what biological organisms do.

    Speaking of Watson II behaving raises yet another problem. Will Watson II really be behaving? As the behavior of biological organisms is the result of underlying (physiological and musculoskeletal) structures, Watson II's behavior, like his sensing, would, at best, be analogous to the behavior of organisms. Although I do not necessarily agree with Rachlin that mechanism is unimportant, I do agree with him that, at least for most of our history, it is not what has defined us as human, and that it might be possible to produce behavior with a different underlying mechanism that, for all practical purposes, we would call human.

Imagining

Even though constructing Watson II with attributes functionally similar to sensation and perception may be possible, arranging for the other attributes on Rachlin’s list poses greater problems. First, however, let us agree with Rachlin by acknowledging that for the behaviorist, perception, imagining, consciousness, memory, and other so-called mental states or processes are really just words that are evoked by behaviors under certain circumstances (see also Schlinger, 2008, 2009a). As mentioned previously, to understand what we mean by these terms, we must look at the circumstances in which they are evoked.

    With respect to imagining, the question is: What do we do when we are said to imagine and under what circumstances do we do it? (Schlinger, 2009a, p. 80). The answer is that when we are said to imagine either visually or auditorily, we are engaging in perceptual behaviors that are evoked in the absence of actual sensory experience, or as Rachlin put it, "[i]magination itself is behavior; that is, acting in the absence of some state of affairs as you would in its presence." For example, the behaviors involved in auditory imagining are most likely talking (or singing) to oneself, and in visual imagining, the behavior of "seeing" (Schlinger, 2009a). Note that the self-talk or "seeing" need not be covert (i.e., unobserved), but most often it is (Schlinger, 2009b).

    Rachlin states that the behavior of imagining "has an important function in human life—to make perception possible" and that "[p]ictures in our heads do not themselves have this function," but I would argue that he has it backward: perception (as behavior under stimulus control) makes imagining possible. In other words, we must first act in the presence of certain stimulus events and have our behavior produce consequences before we can act in the absence of those events. We must first "see" a painting by Picasso before we can "see" the painting in its absence.

    What would it take for Watson II to imagine? Simply speaking, Watson II would have to be able to behave in the absence of the stimuli. He would either have to “hear” (i.e., talk or sing to himself) in the absence of auditory stimuli or “see” in the absence of visual stimuli. In order to “hear” he would need a verbal repertoire like that of humans, and to “see” he would need some kind of visual system that would enable him to behave in the absence of the stimuli in ways similar to how he would behave in their presence. The verdict is still out as to whether this is possible.

Consciousness

Let me start by saying that I agree with Rachlin that, "[f]or a behaviorist, consciousness, like perception, attention, memory, and other mental acts, is itself not an internal event at all. It is a word we use. . ." In fact, I said as much in an article titled "Consciousness is Nothing but a Word" (Schlinger, 2008). (The rest of Rachlin's statement—"to refer to the organization of long-term behavioral patterns as they are going on"—is open to debate, and I think we can address the problem raised by Rachlin without either accepting or rejecting his teleological behaviorism.) The critical point made by Rachlin is summed up in the following: "A computer, if it behaves like a conscious person, would be conscious." This statement evokes at least two questions: (a) What does a conscious person behave like? and (b) Could a computer behave like a conscious person? If the answer to the second question is yes, then we might further ask whether the computer could behave like a conscious person without necessarily calling it "human." A third possible question is whether a person can be a person without being conscious. Answering these questions requires some agreement about what it means to be conscious.

What Does a Conscious Person Behave Like?

In answering this question, radical behaviorists ask what variables cause us to say that a person (or any other organism for that matter) is conscious. In the afore-referenced article (Schlinger, 2008), I listed at least three such situations. The first is when an organism is awake rather than asleep. This use of “conscious,” although not the most germane for our discussion, may still be applied, if only analogically, to Watson, just as it is to my Macintosh computer. A second situation that evokes the response “conscious” is when an organism’s behavior is under appropriate stimulus control. For example, I say that my cat is conscious of his environment if he avoids walking into things, jumps on the bed, plays with a toy mouse, and so on. In this sense, animals are obviously conscious, and their behavior that leads us to say so has been operantly conditioned by interactions with the environment. (In this usage, the term is evoked by the same circumstances that the term “perceive” is. For example, saying “the cat perceives the mouse” is controlled by the same variables as “the cat is conscious of the mouse.”) Notice that the environment does not have to be a human environment; it can consist of the animal’s natural environment, including other animals. Presumably a computer could be conscious in this sense as well if its behavior could come under the stimulus control of events in its environment as a result of interactions with that environment. For Watson II, its environment would presumably consist entirely of humans. It is this sense of consciousness that interested Crick and Koch (2003) with their emphasis on visual perception.

    A third circumstance that probably evokes the term "conscious" most often—and the one that is of most interest to consciousness scholars and laypeople alike—is the tendency to talk (i.e., describe) or imagine "to ourselves about both our external and internal environments, and our own public and private behavior" (Schlinger, 2008, p. 60). It is these behaviors that give rise to what consciousness scholars refer to as qualia, or subjective experience, and they constitute what I believe a conscious person behaves like. That is, a conscious person is taught by his or her verbal community to answer questions about his or her own behavior, such as "What are you doing?" "Why did you do that?" and "What, or how, are you feeling?" (Schlinger, 2008; Skinner, 1957). As a result, we are constantly describing our behavior and private events both to others and to ourselves. Presumably, this is what Rachlin means by human function in a human environment.

    As Skinner (1945) first suggested, we learn to talk about private (i.e., unobserved) events in the same way that we learn to talk about public (i.e., observed) events, that is, from others. In the case of private events, others only have access to the public events that accompany them. As a result, our descriptions come under the control, though not perfectly, of the private events. So, for example, we are taught to say "it hurts" when parents and others see either overt signs of injury, such as a cut or bruise, or when they observe us engaging in some kind of pain-related behavior, such as crying, moaning, wincing, and so on. Later on, we say "it hurts" in response only to the private painful stimulation. (Of course, it is also possible to say "it hurts" in the absence of any painful stimulation. Rachlin would still call this pain.) I believe that it is only because we learned to say "ouch" or "it hurts" from others that we actually are said to "feel" the pain, that is, the subjective experience of pain, as opposed to simply experiencing or reacting to the painful stimulation as my cat would. I think this is consistent with Rachlin's statement, "To genuinely feel pain, Watson must interact with humans in a way similar to a person in pain." This sense of consciousness is simply an extension of perception in that our verbal behavior is brought under the control of both public and private events dealing with ourselves.

    Such self-talk is, I believe, what Descartes was experiencing when he stated his famous Cogito ergo sum (I think, therefore I am) or, in behavioral terms, "I talk (to myself), about myself, therefore I am conscious of my existence." Although I might not agree with Rachlin about the details, I would agree with him that "consciousness is in the behavior not the mechanism."

Could a Computer Behave Like a Conscious Person?

Based on the brief analysis presented above, for Watson II to behave like a conscious person, he would have to behave appropriately with respect to his entire environment, including the environment inside his skin. But therein lies the rub. We can grant that the computer should be able to describe its public behavior, whatever that behavior is, but what about private events? Without a sensory system that, in addition to exteroception, also includes interoception or proprioception, Watson II would not be able to describe private stimulation or, in other words, how he feels. And, unless he is constructed such that the mechanisms that produce behavior proximally (motor neurons, muscles) can function at reduced magnitudes without producing overt behavior, he would also not be capable of covert behavior and, thus, would not be able to learn to describe such behavior. So, at best, Watson II would behave sort of like a human in that he could potentially describe his overt behavior. But he would be handicapped in that he would have no private world to experience and, thus, to describe. But even if Watson II were able to describe his overt behavior, would we call him human? As I suggested previously, I think that question is moot. It is probably best to skirt the ontological question and concentrate on whether Watson II could engage in human-like behaviors.

What Would It Be Like to Be IBM’s Computer Watson?

In addressing this issue of qualia, Nagel (1974) asked, “What is it like to be a bat?” Based on the discussion above, the answer has to be “nothing.” It is like nothing to be a bat, or any other animal, including pre-verbal or non-verbal humans, without self-descriptive behavior. As I have stated, “For the bat there will never be any qualia because there is no language to describe experience” (see Schlinger, 2008, p. 60). Even Dennett (2005) came around to this view of consciousness when he wrote, “acquiring a human language (an oral or sign language) is a necessary precondition for consciousness.”

    Rachlin does not see it quite this way. According to him:

        Do we know what it is like to be our brothers, sisters, mothers, fathers, any better than we know what it is like to be a bat?. . . if “what it is like” is thought to be some ineffable physical or non-physical state of our nervous systems, hidden forever from the observations of others. The correct answer to “What is it like to be a bat?” is “to behave, over an extended time period, as a bat behaves.” The correct answer to “What is it like to be a human being?” is “to behave, over an extended time period, as a human being behaves.”

Or, as Rachlin states elsewhere in the target article, “all mental states (including sensations, perceptions, beliefs, knowledge, even pain) are rather patterns of overt behavior.”

    Although I understand Rachlin’s point (after all, these are clear statements of his teleological behaviorism), I do not think such a position will be very palatable to traditional consciousness scholars. I believe that the position I have outlined here and elsewhere (Schlinger, 2008), while still perfectly behavioral, is closer to what consciousness scholars are getting at with their interest in qualia and subjective experience.

Conclusion

Even though it may be possible to construct a Watson (Watson II) with attributes that resemble those in humans, the question of whether the resulting computer would be human is moot. A more practical question, as I have suggested, is whether there is any justification for describing Watson II with terms usually occasioned by the behavior of biological organisms, especially humans. But even then, the critical question is: What is to be gained by talking about computers using terms occasioned by human behavior? If we had to choose between the weak view of AI (that the main goal in building smart computers is to try to understand human cognition or behavior) and the strong view (that a computer with the essential human attributes mentioned by Rachlin would, for all practical purposes, be human), the weak view seems to be more productive. In other words, it would challenge us to analyze attributes such as perception, imagination, and consciousness into their behavioral atoms and the history of reinforcement necessary to produce them, and then try to build a computer (Watson II) that would interact with its environment such that those repertoires would be differentially selected. If we were successful, would we then call Watson II human? Rachlin’s thesis is that we would. My point in this commentary is that such a conclusion is, at present, too uncertain; we would have to wait and see whether Watson II would occasion the response “human.” I’m not so sure.

    A more likely scenario, in my opinion, is that Watson II may be human-like in some very important ways. Regardless, the questions posed by Rachlin should help to pave the way for thinking about how to construct a computer that is most human-like. Rachlin is correct that, in order to do so, the computer must function like a human in a human environment. However, some of the so-called human functions mentioned by Rachlin (e.g., sensation and perception) are also possessed by other animals. And the functions he does mention that may be most distinctly human (e.g., consciousness) do not arise from interactions that differ in any fundamental way from those that are responsible for other behaviors; in other words, the behaviors in question are selected by their consequences. Thus, the most important consideration going forward in designing human-like computers is to build them so that their “behavior” can be adaptive (Schlinger, 1992) and then see what happens.

Acknowledgment

Copyright 2013 by the Association for Behavior Analysis. Reprinted with permission of the publisher and author.

Response to Commentaries

Our Overt Behavior Makes Us Human

The commentaries both make excellent points; they are fair and serve to complement the target article. Because they are also diverse, it makes sense to respond to them individually rather than topically.

McDowell

Before discussing McDowell’s (2012) thoughtful comments, I need to clarify his categorization of my position on consciousness as “eliminative materialism.” He is correct that I would eliminate the phenomenology of consciousness from scientific discourse. However, I also claim that the concept of consciousness itself is extremely useful and has an important place in behavior analysis. So I would not eliminate the concept of consciousness from scientific discourse. The theory of consciousness implied by Watson II is a physical theory, like the neural identity theories to which McDowell refers. However, neural identity theorists believe that consciousness occurs within the organism and is identical to some pattern of nervous behavior. I claim that consciousness occurs in the world outside the organism and is identical to abstract patterns of overt behavior. The difference between my identity theory and theirs is not one of physical versus mental; we agree that the mental is real, and it is identical to an abstract pattern of activity of the organism. The difference is that, for them, the pattern occurs (wholly or mostly) over some spatial extent in the brain, whereas for me the pattern occurs over time in the organism’s overt behavior. It is not the word consciousness that I would eliminate from scientific discourse—still less from everyday speech. Contrary to what McDowell says, I do “acknowledge the existence and reality of consciousness.” Abstract entities, such as behavioral patterns, are as real as or more real than their components. [Both Plato and Aristotle believed that abstract entities may be in a sense more real (because they are directly connected to their function) than their components. For Aristotle, a chair is more real than the parts that make it up, and for Plato, the user of the chair knows the chair better than does its maker—again because the user is directly involved in its function as a chair (Rachlin, 1994).] It is rather phenomenological introspection or internal “reflection” as a means of psychological investigation that I would eliminate. I recognize the importance of a kind of reflection (contingencies of reinforcement are essentially reflections from overt behavior to the world and back), but not a reflection that takes place wholly within the organism. Introspection, as a psychological technique, has been tried for at least a century and has produced little of value. [Nevertheless, introspection may be useful in everyday life. I may say, “I am angry,” or “I love you,” but not merely to report an internal state, any more than I would say, “the grass is green,” or “the sky is blue,” merely to report an external state. Any statement must be made for a reason. The reason, in the case of “I am angry,” and so on, is to predict one’s own future behavior on the basis of one’s own past behavior in similar circumstances. Such a prediction enables the hearer (it could be just one’s own self) to react appropriately. A person (who is less observant of his own behavior than is someone close to him or her) may be wrong about an introspective statement. I might say, “I am angry,” and truly believe it, and my wife may say, “No you’re not,” and she may be right. It is introspection as a scientific method, not introspection as a useful kind of everyday behavior, to which I object.]

    One argument I take very seriously is that my view of the mind is bad for behavior analysis. But I cannot abandon that view merely because non-behaviorists or anti-behaviorists like John Searle are unable to understand why I hold it. The history of science is full of prima facie facts proven to be less useful than their contraries. Especially suspicious are those facts that put humans at the center of the universe (physical or spiritual). The sorts of existence postulated by the phenomenologists arguably come under this heading. From a pragmatic viewpoint (my viewpoint), something is true because it is useful in the long run to behave as if it were true. The burden is on us behaviorists to show that our account is more useful than others. Once that happens, what seems obvious will change accordingly. Searle’s objection, quoted by McDowell, rests on the implicit premise that what Searle cannot imagine or understand must be false. If the research based on teleological behaviorism by me and others turns out to be unfruitful or useless, then such objections will have weight. It is perhaps fair to say that there has not yet been enough research on behavioral patterns, or acceptance and understanding, even within behavior analysis, to give teleological behaviorism a fair test. One purpose of the target article is to correct this lack. Meanwhile, I will have to take my chances with Searle. He may be beyond convincing, but hopefully not every philosopher is that closed-minded. McDowell and others have reached across disciplines to make contact with philosophers and neuroscientists, and that gives one hope. If teleological behaviorism does not result in an improved behavioral technology, then that is why it will fail—not because it contradicts a philosopher’s entirely subjective certitudes.

    McDowell’s summary of the views of Brentano, Husserl, and Sartre is interesting and enlightening. There is certainly a commonality between behaviorism and their philosophy, perhaps coming to a head in Ryle (1949) and the later Wittgenstein (1958), who said, “If one sees the behavior of a living thing, one sees its soul” (p. 357). More relevant to the current topic is McDowell’s discussion of the modern philosophers John Searle, Thomas Nagel, and Colin McGinn. It seems to me that, at least as McDowell presents their views, all three are dancing around the mind-body problem and coming no closer to solving it than did the European philosophers of the eighteenth and nineteenth centuries. But modern philosophy is not as negative about behavioristic thought (or, more aptly, not as positive about phenomenology) as McDowell implies. According to Alva Noë (2009):

        After decades of concerted effort on the part of neuroscientists, psychologists, and philosophers, only one proposition about how the brain makes us conscious—how it gives rise to sensation, feeling, subjectivity—has emerged unchallenged: we don’t have a clue. (p. xi)

            Consciousness is not something that happens inside us. It is something we do or make. Better: it is something we achieve. Consciousness is more like dancing [overt behavior] than it is like digestion [covert behavior] . . . . The idea that the only genuinely scientific study of consciousness would be one that identifies consciousness with events in the nervous system is a bit of outdated reductionism. (p. xii)

    Searle, as quoted by McDowell, claims that “neural activity and conscious experience are different aspects, or levels of description, of the same thing, in the same way that, say, the molecular structure of a piston and the solidity of the piston are different aspects, or levels of description, of a piston.” Amazingly, Searle has it almost right. Substitute behavioral activity (overt) for neural activity (covert) and I would completely agree. But Searle, despite his intention to rid philosophy of Cartesian remnants, has not completely eliminated Cartesian dualism from his own philosophy. If mental (or conscious) activity is an abstract version of physical activity, what is that physical activity? Why is it any more plausible for Searle, and the many philosophers who have considered this question, that conscious physical activity has to occur inside the head than that it occur in overt behavior? I understand why Descartes saw things this way. Because Descartes believed that the soul was located deep in the brain, and that the physical motions had to influence the soul directly (and vice versa), the physical motions also had to be in the brain. But Searle presumably does not believe that there is a non-physical soul located deep within the brain interacting with our nerves. Nor, as McDowell points out, is this inherently obvious. Some societies and some ancient philosophers believed that our minds as well as our souls were in our hearts. I would guess that if you name a vital organ, there is or has been some society that believed it to be the seat of the soul; there may even have been some who identified the soul with the whole organism. So if the mind is a molar or abstract conception of some physical activity (as Searle and I seem to agree), and there is no a priori reason (such as connectivity with an internal, non-physical soul) to assume that the physical activity occurs in the brain, where does it occur?

    In answering this question, usefulness is paramount, especially as consciousness, and talk of consciousness, must have evolved along with the rest of our human qualities. Organisms may die without reproducing because their behavior is maladaptive, not (directly) because their nerves are maladaptive. Our nerves would be in direct contact with our souls if our souls, as the sources of consciousness, were inside us. But if our environment is seen as the source of our consciousness (as it would have to be if consciousness were a product of biological evolution), then it would be our overt behavior, not neural behavior, which is in direct contact with the source. Group selection (selection at the level of classes or patterns) may act at the level of nervous function, as Edelman and colleagues (e.g., Tononi & Edelman, 1998) have shown. It may act as well at the level of innate behavioral patterns across generations (Wilson & Wilson, 2008). And it may act as well at the level of learned patterns within the lifetime of a single organism (Rachlin, 2011).

    Consciousness is therefore not an epiphenomenon or a faint halo that wafts up from a certain degree of complexity in our nervous systems, but is a vital property of our overt behavior with a vital function in our complex world. Our long-term patterns of behavior—sobriety, moderation, cooperation with others, morality, rationality, as well as the language that reflects (and at the same time imposes) their organization, all evolved. These patterns are what we would have to create in Watson II for him to leap over those eons of biological evolution and be human. The mechanism that could create those patterns may very well turn out to resemble our actual nervous mechanism. Or it may not. But it is behavioral evolution, not neural evolution, that counts for Watson II’s consciousness.

    Searle, Nagel, and McGinn, as presented by McDowell, all have double-aspect theories of mind: Body and mind are two aspects of the same thing. The traditional question to ask two-aspect theorists is: Two aspects of what? Searle gives the correct answer: The body is to the mind as the molecular (“molecular structure of a piston”) is to the molar (“solidity of a piston”). This is a spatial analogy, but it could just as well be a temporal one: as the notes are to the melody; as the steps are to the dance. But Nagel and McGinn both posit a third entity that the two aspects are aspects of. For Nagel it is Factor X and for McGinn it is “unknowable.” Are these answers to the traditional question any more enlightening than the traditional answer to that question—two aspects of God? I do not believe so.

    A view of consciousness proposed by Noë (2009) holds (as I do) that the mind cannot be understood except in terms of the interaction of a whole organism with the external environment. Nevertheless, for Noë, the brain remains an important component of mental activity. He retains a neurocognitive view of the mind while expanding its reach, beyond the brain, into the peripheral nervous system and the external environment. According to Noë, “My consciousness now—with all its particular quality for me now—depends not only on what is happening in my brain but also on my history and my current position and interaction with the wider world” (p. 4, italics added).

    I believe that this is a step in the right direction, but its problem is that it mixes levels of explanation. Consider (the following transcription of) Searle’s distinction between physical activity and conscious experience: “[Behavioral] activity and conscious experience are different aspects, or levels of description, of the same thing, in the same way that, say, the molecular structure of a piston and the solidity of the piston are different aspects, or levels of description, of a piston.” If conscious experience is analogous to the solidity of the piston, then it cannot also be analogous to its molecular structure. Noë’s conception of conscious activity blurs the distinction between conscious and non-conscious activity. Extended cognition theory extends the domain of consciousness spatially beyond the brain, into the peripheral nervous system and out into the world. But it does not consider a temporally extended view of cognition, which extends behavior beyond the present moment into the past and future. It is this temporal extension, I believe, that gives Watson II his humanity.

    Finally, McDowell proposes a mental rotation test and a visual oddity test as possible alternatives to the tough Turing test I proposed in the target article. The problem with these alternatives is that it would be extremely easy to build a machine that would pass these tests with flying colors. I believe the current Watson, with a little tweaking, could easily do it. Suppose Watson did pass these tests but failed the tough Turing test. Would anyone believe that it was human? Suppose Watson passed the tough Turing test (for sensation, perception, imagination, cognition, as well as the emotions of love, anger, hope, fear, etc.), but failed the mental rotation and visual oddity tests. Would it not be a violation of our common morality not to consider it human?
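    To make concrete how little machinery such tests demand, consider a minimal sketch (a hypothetical illustration in Python, not anything implemented in Watson or proposed by McDowell) of a program that handles rotation-equivalence and odd-one-out judgments when the test figures are coded as small grids:

        import numpy as np

        def same_up_to_rotation(a, b):
            # True if grid b is grid a rotated by 0, 90, 180, or 270 degrees
            a, b = np.asarray(a), np.asarray(b)
            return any(np.array_equal(np.rot90(a, k), b) for k in range(4))

        def odd_one_out(grids):
            # Visual-oddity sketch: return the index of the grid that matches
            # no other grid even after rotation; None if every grid has a match
            for i, g in enumerate(grids):
                if not any(same_up_to_rotation(g, h)
                           for j, h in enumerate(grids) if j != i):
                    return i
            return None

        base = [[1, 0, 0],
                [1, 1, 0],
                [0, 0, 0]]
        rotated = np.rot90(base).tolist()           # the same figure, rotated 90 degrees
        odd = [[1, 1, 1],
               [0, 0, 0],
               [0, 0, 1]]

        print(same_up_to_rotation(base, rotated))   # True: a "mental rotation" match
        print(odd_one_out([base, rotated, odd]))    # 2: the odd figure out

    A few lines of brute-force comparison are enough to pass such items reliably, which is precisely why passing them would tell us so little about humanity.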

Schlinger

Schlinger (2012) claims that “Watson would be handicapped if he had no private world to experience and thus to describe” (p. 43). But he also agrees with me that “consciousness is in the behavior not the mechanism” (p. 42). The question I would like to address in this reply is: Do covert talking and covert picturing properly belong to the class of movements we call behavior, or are they themselves, like the chemical and electrical events involved in neural transmission, part of a mechanism underlying behavior? If the latter, then, by Schlinger’s own reasoning, Watson’s private world would be irrelevant to whether or not he could be conscious; we would then have to look, as I do in the target article, for Watson’s and our own consciousness in our overt rather than covert behavior.

    The nub of Schlinger’s views is best captured by the following passage:

        A. . . circumstance that probably evokes the term “conscious” most often—and the one that is of most interest to consciousness scholars and laypeople alike—is the tendency to talk. . . to ourselves about both our external and internal environments, and our own public and private behavior. . . . It is these behaviors that give rise to what consciousness scholars refer to as qualia, or subjective experience, and consist of what I believe a conscious person behaves like. That is, a conscious person is taught by his or her verbal community to answer questions about his or her own behavior, such as “What are you doing?” “Why did you do that?” and “What, or how, are you feeling?”...As a result, we are constantly describing our behavior and private events.

    Let us start from the back of this statement. Why does our verbal community want to know what we are doing, how we are feeling, why we do this or that? What’s in it for them to know? Or more precisely, what reinforces these requests of theirs? The answer is that we are interacting with our verbal community in a social system, our future behavior affects their welfare, and they would benefit from the ability to predict better than they currently can what our future behavior will be. [There may, of course, be other reasons. It may be idle curiosity. Or the questioner might be a neighbor saying “how are you?” and I answer, “fine,” even if I happen to be rushing to the doctor. But I think that the reasons for such interchanges, like the reasons for those Schlinger cites, are reducible to a mutual interest in greasing the wheels of our current and future interactions.] So when we answer their questions, we are essentially making predictions about our future behavior. Now let us consider the reverse question: Why should we bother to answer these questions? Why should we bother to make such predictions? The answer, again, must be that the questioners are interacting with us in a social system; their future behavior affects our welfare, and we are trying as best we can to maximize the value to us of their behavior, both now and in the future. In other words, we are engaged with them in a joint venture and it is in our interest to refine the flow of discriminative stimuli back and forth between us and them. Schlinger may agree so far.

    Now let us consider to what we may refer when we answer their questions. We could be referring, as Descartes believed, to a spiritual state, a state in a non-physical world with its own rules, located somewhere inside us (perhaps in our pineal glands), to which our questioners have no access but to which we have direct and unimpeachable access through introspection. Or we could be referring to a state of our nervous systems (the chemicals and electrons running through our nerves), or to a kind of organization of those chemicals and electrons in which they mimic the executive function of a computer program. I assume that Schlinger agrees with me that such neurocognitive events are interesting and valuable objects of study but are mechanisms rather than behaviors and are not what we refer to when we answer questions such as “How are you feeling?” (Moreover, why, unless they are neurologists, should other people be interested in the state of our nervous systems?)

    Or, when we answer such questions, we could be referring to what we say to ourselves. According to this scenario, if my wife asks me, “What did you think of those people we met for dinner last night?” and I say, “I think they were a pair of creeps,” I must actually be referring not to the people themselves, nor to their actual behavior, nor to my interaction with them, but to some sentences I was saying to myself or some image of them, undetectable to my wife, that I created in my muscles between her question and my answer. But even that implausible scenario would not be getting at my consciousness. According to Schlinger, it is not the covert words or images that constitute my consciousness but my proprioceptive feedback from these words and images. Schlinger claims that “...the tendency to talk or imagine [to ourselves] give[s] rise to what consciousness scholars refer to as qualia or subjective experience...” (p. 42) and, “Without a sensory system that, in addition to exteroception, also includes interoception or proprioception, Watson would not be able to describe private stimulation or, in other words, how he feels” (p. 42). But aren’t interoception and proprioception chemical and electrical events in our nerves? You can’t have it both ways. Covert movements cannot just “give rise” to consciousness; if they are to explain consciousness, they must be consciousness itself. And, if covert behavior is consciousness itself, consciousness cannot also be the perception of covert behavior. But let us suppose for a moment that consciousness is perception of internal speech by our proprioceptive nervous system. What exactly would that perception be? Is it identical to the entirely physical activity in our proprioceptive nerves? Or do we need a still more covert activity (the perception of the perception) to explain the perception? And so on, until we get to the center of the brain, where the only remaining possibility is a non-physical soul, and we are back to Descartes’ model. Moreover, what a waste it seems for such an important functional property as consciousness to have evolved to rely on the relatively impoverished proprioceptive system when our exteroceptive system is so exquisitely accurate. It is our past behavior (our reinforcement history) that best predicts our future behavior. If, as I claim, the purpose of answering such questions is to predict our overt behavior, the part of our behavior that will affect our questioners, why would our answer refer to our unreliable inner speech? There is no denying that we talk and picture things to ourselves. I believe that these covert acts, when they occur, are part of the mechanism by which our overt behavior is sometimes organized. But I do not believe that they can be usefully identified as thinking, perceiving, sensing, imagining, and so on. There is insufficient room between our central and peripheral nervous systems, on the one hand, and our overt behavior, on the other, for a massive covert behavioral system, a system that, if the covert-behavior view of consciousness is right, would have to be the referent for our entire mental vocabulary.

    In the face of this unlikelihood, bordering on impossibility, what is a behaviorist to do? One tactic would be for behaviorists to join many philosophers and to declare that the mind is simply inaccessible to scientific study. Such an attitude is understandable coming from philosophers, because by implication they would be the experts on mental life. But, for a psychologist, to give up on the scientific study of the mind and consciousness is to give up on what psychology is supposed, by the people who support our research, to be all about. Such a tactic, if adopted, would marginalize behaviorism still further within psychology. But these are just extrinsic reasons. The intrinsic reason for a behavioral science of mind, the reason that I wrote the target article [and this book], is that a view of the mind as overt behavior is the best, the most logically consistent, the most satisfying (try it and see) view of the mind that one can take.

    To take this view, however, we need to give up on the strict efficient-cause, mechanical, interacting billiard-ball view of causation in which each cause must lie temporally as well as spatially up against its effect, and to adopt a teleological view of causation. From a teleological viewpoint, abstract patterns of movements are (final) causes of the particular acts that make them up. Whereas efficient causes are prior to their effects, final causes are more abstract and more extended in time than their effects. For example, fastening a board is a final cause of hammering a nail, building a floor a final cause of fastening a board, building a house a final cause of building a floor, sheltering a family a final cause of building a house, and so on. Each final cause is an answer to the question WHY. Efficient causes are answers to the question HOW. Thus, final causes are more appropriate than efficient causes for Skinnerian behaviorists who are focused on explaining behavior in terms of reinforcement. Skinner’s notion that a contingency of reinforcement (which takes time to occur) can be a cause, and that a response rate (which also takes time to occur) can be an effect, is an example of a departure from efficient causation. We do not need to justify the effect of contingencies by imagining miniature contingencies represented in the brain efficiently causing behavior. Physics long ago gave up the billiard-ball view of the connection between cause and effect (gravity, magnetism, electric fields, not to mention all of quantum physics). In economics, utility functions are viewed as causes of the particular economic exchanges that make them up. A utility function need not be represented in the brain or anywhere except in the economist’s observations. Aristotle believed that final causes are actually more scientific than efficient causes because they are more abstract (Randall, 1960). In the target article I tried to demonstrate that our mental vocabulary fits like a glove on patterns of overt behavior over time. It is in that (teleological) sense, and in that sense only, that, as Aristotle claimed, the mind can cause behavior (Rachlin, 1992, 1994).