2 Reply to Arbib and Gunderson

In December, 1972, Michael Arbib and Keith Gunderson presented papers to an American Philosophical Association symposium on my earlier book, Content and Consciousness, to which this essay was a reply.1 While one might read it as a defense of the theory in my first book, I would rather have it considered an introduction to the offspring theory. In spite of a few references to Arbib’s and Gunderson’s papers and my book, this essay is designed to be comprehensible on its own, though I would not at all wish to discourage readers from exploring its antecedents. In the first section the ground rules for ascribing mental predicates to things are developed beyond the account given in Chapter 1. There I claimed that since intentional stance predictions can be made in ignorance of a thing’s design—solely on an assumption of the design’s excellence—verifying such predictions does not help to confirm any particular psychological theory about the actual design of the thing. This implies that what two things have in common when both are correctly attributed some mental feature need be no independently describable design feature, a result that threatens several familiar and compelling ideas about mental events and states. The second section threatens another familiar and compelling idea, viz., that we mean one special thing when we talk of consciousness, rather than a variety of different and improperly united things.

I

Suppose two artificial intelligence teams set out to build face-recognizers. We will be able to judge the contraptions they come up with, for we know in advance what a face-recognizer ought to be able to do. Our expectations of face-recognizers do not spring from induction over the observed behavior of large numbers of actual face-recognizers, but from a relatively a priori source: what might be called our intuitive epistemic logic, more particularly, “the logic of our concept” of recognition. The logic of the concept of recognition dictates an open-ended and shifting class of appropriate further tasks, abilities, reactions and distinctions that ideally would manifest themselves in any face-recognizer under various conditions. Not only will we want a face-recognizer to answer questions correctly about the faces before it, but also to “use” its recognition capacities in a variety of other ways, depending on what else it does, what other tasks it performs, what other goals it has. These conditions and criteria are characterized intentionally; they are a part of what I call the theory of intentional systems, the theory of entities that are not just face-recognizers, but theorem-provers, grocery-choosers, danger-avoiders, music appreciators.

Since the Ideal Face-Recognizer, like a Platonic Form, can only be approximated by any hardware (or brainware) copy, and since the marks of successful approximation are characterized intentionally, the face-recognizers designed by the two teams may differ radically in material or design. At the physical level one might be electronic, the other hydraulic. Or one might rely on a digital computer, the other on an analogue computer. Or, at a higher level of design, one might use a system that analyzed exhibited faces via key features with indexed verbal labels—“balding,” “snub-nosed,” “lantern-jawed”—and then compared label-scores against master lists of label scores for previously encountered faces, while the other might use a system that reduced all face presentations to a standard size and orientation, and checked them quasi-optically against stored “templates” or “stencils.” The contraptions could differ this much in design and material while being equally good—and quite good—approximations of the ideal face-recognizer. This much is implicit in the fact that the concept of recognition, unlike the concepts of, say, protein or solubility, is an intentional concept, not a physical or mechanistic concept.
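The point that two recognizers can differ radically in design while satisfying the same intentional criterion can be illustrated with a deliberately toy sketch. Everything below is my own invention, not anything from the essay: one class mimics the verbal-label strategy, the other the template strategy, and both pass the same intentional test ("has it seen this face before?").

```python
# Hedged illustration (invented names and data): two toy "face-recognizers"
# that differ completely in design yet meet the same intentional criterion.

class LabelRecognizer:
    """Scores faces by discrete verbal-style feature labels."""
    def __init__(self):
        self.seen = []                       # master lists of label scores

    def _labels(self, face):
        # e.g. face = {"hairline": 0.9, "nose": 0.2, "jaw": 0.7}
        return {k: round(v, 1) for k, v in face.items()}

    def recognize(self, face):
        labels = self._labels(face)
        hit = labels in self.seen
        if not hit:
            self.seen.append(labels)
        return hit


class TemplateRecognizer:
    """Stores normalized presentations, checks near-matches quasi-optically."""
    def __init__(self, tolerance=0.05):
        self.templates = []
        self.tolerance = tolerance

    def _normalize(self, face):
        # reduce each presentation to a standard size and orientation:
        # here, crudely, a normalized vector of feature values
        vals = [face[k] for k in sorted(face)]
        total = sum(vals) or 1.0
        return [v / total for v in vals]

    def recognize(self, face):
        vec = self._normalize(face)
        for t in self.templates:
            if all(abs(a - b) <= self.tolerance for a, b in zip(vec, t)):
                return True
        self.templates.append(vec)
        return False


# Both count as face-recognizers by the intentional test alone:
face = {"hairline": 0.9, "nose": 0.2, "jaw": 0.7}
for r in (LabelRecognizer(), TemplateRecognizer()):
    first, second = r.recognize(face), r.recognize(face)
    print(first, second)    # False True, for each design
```

Nothing these two classes share at the level of data structures or procedure explains their common success; the commonality lives only at the intentional level, which is the essay's point.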

But obviously there must be some similarity between the two face-recognizers, because they are, after all, both face-recognizers. For one thing, if they are roughly equally good approximations of the ideal, the intentional characterizations of their behaviors will have a good deal in common. They will often both be said to believe the same propositions about the faces presented to them, for instance. But what implications about further similarity can be drawn from the fact that their intentional characterizations are similar? Could they be similar only in their intentional characterizations?

Consider how we can criticize and judge the models from different points of view. From the biological point of view, one model may be applauded for utilizing elements bearing a closer resemblance in function or even chemistry to known elements in the brain. From the point of view of engineering, one model may be more efficient, failsafe, economical and sturdy. From an “introspective” point of view, one model may appear to reflect better the actual organization of processes and routines we human beings may claim to engage in when confronted with a face. Finally, one model may simply recognize faces better than the other, and even better than human beings can. The relevance of these various grounds waxes and wanes with our purposes. If we are attempting to model “the neural bases” of recognition, sturdiness and engineering economy are beside the point—except to the extent (no doubt large) that the neural bases are sturdy and economical. If we are engaged in “artificial intelligence” research as contrasted with “computer simulation of cognitive processes,”2 we will not care if our machine’s ways are not those of the man in the street, and we will not mind at all if our machine has an inhuman capacity for recognizing faces.

Now as “philosophers of mind,” which criterion of success should we invoke? As guardians of the stock of common mentalistic concepts, we will not be concerned with rival biological theories, nor should we have any predilections about the soundness of “engineering” in our fellow face-recognizers. Nor, finally, should we grant the last word to introspective data, to the presumed phenomenology of face-recognition, for however uniform we might discover the phenomenological reports of human face-recognizers to be, we can easily imagine discovering that people report a wide variety of feelings, hunches, gestalts, strategies, intuitions while sorting out faces, and we would not want to say this variation cast any doubt on the claim of each of them to be a bona fide face-recognizer. Since it seems we must grant that two face-recognizers, whether natural or artificial, may accomplish this task in different ways, this suggests that even when we ascribe the same belief to two systems (e.g., the belief that one has seen face n more than once before), there need be no elements of design, and a fortiori of material, in common between them.

Let us see how this could work in more detail. The design of a face-recognizer would typically break down at the highest level into subsystems tagged with intentional labels: “the feature detector sends a report to the decision unit, which searches the memory for records of similar features, and if the result is positive, the system commands the printer to write ‘I have seen this face before’”—or something like that. These intentionally labeled subsystems themselves have parts, or elements, or states, and some of these may well be intentionally labeled in turn: the decision unit goes into the conviction-that-I’ve-seen-this-face-before state, if you like. Other states or parts may not suggest any intentional characterization—e.g., the open state of a particular switch may not be aptly associated with any particular belief, intention, perception, directive, or decision. When we are in a position to ascribe the single belief that p to a system, we must, in virtue of our open-ended expectations of the ideal believer-that-p, be in a position to ascribe to the system an indefinite number of further beliefs, desires, etc. While no doubt some of these ascriptions will line up well with salient features of the system’s design, other ascriptions will not, even though the system’s behavior is so regulated overall as to justify those ascriptions.
There need not, and cannot, be a separately specifiable state of the mechanical elements for each of the myriad intentional ascriptions, and thus it will not in many cases be possible to isolate any feature of the system at any level of abstraction and say, “This and just this is the feature in the design of this system responsible for those aspects of its behavior in virtue of which we ascribe to it the belief that p.” And so, from the fact that both system S and system T are well characterized as believing that p, it does not follow that they are both in some state uniquely characterizable in any other way than just as the state of believing that p. (Therefore, S and T’s being in the same belief state need not amount to their being in the same logical state, if we interpret the latter notion as some Turing-machine state for some shared Turing-machine interpretation, for they need not share any relevant Turing-machine interpretation.)

This brings me to Arbib’s first major criticism. I had said that in explaining the behavior of a dog, for instance, precision in the intentional story was not an important scientific goal, since from any particular intentional ascription, no precise or completely reliable inferences about other intentional ascriptions or subsequent behavior could be drawn in any case, since we cannot know or specify how close the actual dog comes to the ideal. Arbib finds this “somewhat defeatist,” and urges that “there is nothing which precludes description at the intentional level from expressing causal sequences providing our intentional language is extended to allow us to provide descriptions with the flexibility of a program, rather than a statement of general tendencies.” Now we can see that what Arbib suggests is right. If we put intentional labels on parts of a computer program, or on states the computer will pass through in executing a program, we gain access to the considerable predictive power and precision of the program.* When we put an intentional label on a program state, and want a prediction of what precisely will happen when the system is in that intentional state, we get our prediction by taking a close look not at the terms used in the label—we can label as casually as you like—but at the specification of the program so labeled. But if Arbib is right, I am not thereby wrong, for Arbib and I are thinking of rather different strategies. The sort of precision I was saying was impossible was a precision prior to labeling, a purely lexical refining which would permit the intentional calculus to operate more determinately in making its idealized predictions. Arbib, on the other hand, is talking about the access to predictive power and precision one gets when one sullies the ideal by using intentional ascriptions as more or less justifiable labels for program features that have precisely specified functional interrelations.
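The contrast between the two strategies can be made concrete with a minimal sketch, entirely my own: a tiny program given as a transition table, with intentional labels attached to its states. The precise prediction of what the system does next comes from the table, not from the wording of the labels, which is exactly the access to precision Arbib has in mind.

```python
# Hedged sketch (invented states and labels): intentional labels attached to
# program states. Prediction flows from the program specification, not from
# the labels, which could be worded as casually as you like.

program = {
    # state: (intentional label, action, next state)
    "q0": ("scanning for a face", "read_input", "q1"),
    "q1": ("believes a face is present", "extract_features", "q2"),
    "q2": ("deciding whether the face is familiar", "search_memory", "q0"),
}

def predict(state, steps):
    """Trace the behavior the program dictates from a labeled state."""
    trace = []
    for _ in range(steps):
        label, action, nxt = program[state]   # label plays no predictive role
        trace.append(action)
        state = nxt
    return trace

print(predict("q1", 3))   # ['extract_features', 'search_memory', 'read_input']
```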

One might want to object: the word “label” suggests that Arbib gets his predictive power and precision out of intentional description by mere arbitrary fiat. If one assigns the intentional label “the belief-that-p state” to a logical state of a computer, C, and then predicts from C’s program what it will do in that state, one is predicting what it will do when it believes that p only in virtue of that assignment, obviously. Assignments of intentional labels, however, are not arbitrary: it can become apt so to label a state when one has designed a program of power and versatility. Similarly, one’s right to call a subsystem in his system the memory, or the nose-shape-detector, or the jawline analyzer hinges on the success of the subsystem’s design rather than any other feature of it. The inescapably idealizing or normative cast to intentional discourse about an artificial system can be made honest by excellence of design, and by nothing else.

This idealizing of intentional discourse gives play to my tactic of ontological neutrality, which Gunderson finds so dubious. I wish to maintain physicalism—a motive that Gunderson finds congenial—but think identity theory is to be shunned. Here is one reason why. Our imagined face-recognizers were presumably purely physical entities, and we ascribed psychological predicates to them (albeit a very restricted set of psychological predicates, as we shall see). If we then restrict ourselves for the moment to the “mental features” putatively referred to in these ascriptions, I think we should be able to see that identity theory with regard to them is simply without appeal. The usual seductions of identification are two, I think: ontological economy, or access to generalization (since this cloud is identical with a collection of water droplets, that cloud is apt to be as well). The latter motive has been all but abandoned by identity theorists in response to Putnam’s objections (and others), and in this instance it is clearly unfounded; there is no reason to suppose that the physical state one identified with a particular belief in one system would have a physical twin in the other system with the same intentional characterization. So if we are to have identity, it will have to be something like Davidson’s “anomalous monism.”3 But what ontic house-cleaning would be accomplished by identifying each and every intentionally characterized “state” or “event” in a system with some particular physical state or event of its parts? In the first place there is no telling how many different intentional states to ascribe to the system; there will be indefinitely many candidates. Is the state of believing that 100 < 101 distinct from the state of believing that 100 < 102, and if so, should we then expect to find distinct physical states of the system to ally with each? 
For some ascriptions of belief there will be, as we have seen, an isolable state of the program well suited to the label, but for each group of belief-states thus anchored to saliencies in our system, our intuitive epistemic logic will tell us that anyone who believed p, q, r, … would have to believe s, t, u, v, … as well, and while the behavior of the system would harmonize well with the further ascription to it of belief in s, t, u, v, … (this being the sort of test that establishes a thing as an intentional system), we would find nothing in particular to point to in the system as the state of belief in s, or t or u or v. … This should not worry us, for the intentional story we tell about an entity is not a history of actual events, processes, states, objects, but a sort of abstraction.* The desire to identify each and every part of it with some node or charge or region just because some parts can be so identified, is as misguided as trying to identify each line of longitude and latitude with a trail of molecules—changing, of course, with every wave and eddy—just because we have seen a bronze plaque at Greenwich or a row of posts along the Equator.

It is tempting to deny this, just because the intentional story we tell about each other is so apparently full of activity and objects: we are convicted of ignoring something in our memory, jumping to a conclusion, confusing two different ideas. Grammar can be misleading. In baseball, catching a fly ball is an exemplary physical event-type, tokens of which turn out on analysis to involve a fly ball (a physical object) which is caught (acted upon in a certain physical way). In crew, catching a crab is just as bruisingly physical an event-type, but there is no crab that is caught. Not only is it not the case that oarsmen catch real live (or dead) crabs with their oars; and not only is it not the case that for each token of catching a crab, a physically similar thing—each token’s crab—is caught, it is not even the case that for each token there is a thing, its crab, however dissimilar from all other such crabs, that is caught. The parallel is not strong enough, however, for while there are no isolable crabs that are caught in crew races, there are isolable catchings-of-crabs, events that actually happen in the course of crew races, while in the case of many intentional ascriptions, there need be no such events at all. Suppose a programmer informs us that his face-recognizer “is designed to ignore blemishes” or “normally assumes that faces are symmetrical aside from hair styles.” We should not suppose he is alluding to recurrent activities of blemish-ignoring, or assuming, that his machine engages in, but rather that he is alluding to aspects of his machine’s design that determine its behavior along such lines as would be apt in one who ignored blemishes or assumed faces to be symmetrical. The pursuit of identities, in such instances, seems not only superfluous but positively harmful, since it presumes that a story that is, at least in large part, a calculator’s fiction is in fact a history of actual events, which if they are not physical will have to be non-physical.
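The programmer's remark about "ignoring blemishes" can be given a toy rendering, again my own invention rather than anything in the text: a feature extractor whose coarse quantization simply leaves small blemish-sized perturbations no trace. Nothing in the code ever detects, consults, or discards a blemish; there is no blemish-ignoring event, only a design along such lines as would be apt in one who ignored blemishes.

```python
# Hedged sketch (invented data): a recognizer "designed to ignore blemishes"
# that contains no blemish-ignoring activity. Coarse rounding in the feature
# extractor makes minor perturbations vanish; no step ever handles a blemish.

def extract_features(face):
    # coarse quantization: blemish-sized differences disappear
    return tuple(round(v, 1) for v in face)

clear_face = (0.91, 0.20, 0.70)
blemished  = (0.93, 0.22, 0.68)   # the same face with minor blemishes

print(extract_features(clear_face) == extract_features(blemished))   # True
```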

At this point Gunderson, and Thomas Nagel,4 can be expected to comment that these observations of mine may solve the mind–body problem for certain machines—a dubious achievement if there ever was one—but have left untouched the traditional mind–body problem. To see what they are getting at, consider Gunderson’s useful distinction between “program-receptive and program-resistant features of mentality.”5 Some relatively colorless mental events, such as those involved in recognition and theorem-proving, can be well-simulated by computer programs, while others, such as pains and sensations, seem utterly unapproachable by the programmer’s artifices. In this instance the distinction would seem to yield the observation that so far only some program-receptive features of mentality have been spirited away unidentified, leaving such program-resistant features as pains, itches, images, yearnings, thrills of lust and other raw feels unaccounted for. Doesn’t my very Rylean attempt fall down just where Ryle’s own seems to: on the undeniable episodes of conscious experience? It is certainly clear that the intentional features so far considered have a less robust presence in our consciousness than the program-resistant variety, and as Gunderson insists, the latter are the sort to which we are supposed to have incorrigible or infallible or privileged access. The former, on the other hand, are notoriously elusive; we are often deceived about our own beliefs; we often do not know what train of “subconscious” choices or decisions or inferences led to our recognition of a face or solution to a problem. So there is some plausibility in relegating these putative events, states, achievements, processes to the role of idealized fictions in an action-predicting, action-explaining calculus, but this plausibility is notably absent when we try the same trick with pains or after-images.

That is one reason why Gunderson is unsatisfied. He sees me handling the easy cases and thinks that I think they are the hard cases. It is embarrassing to me that I have given Gunderson and others that impression, for far from thinking that intentional ascriptions such as belief, desire, and decision are the stumbling blocks of physicalism, I think they are the building blocks. I agree with Gunderson that it is a long way from ascribing belief to a system to ascribing pain to a person (especially to myself), but I think that describing a system that exhibits the program-receptive features is the first step in accounting for the program-resistant features. As Gunderson says, the big problem resides in the investigational asymmetries he describes, and more particularly, in the ineliminable sense of intimacy we feel with the program-resistant side of our mentality. To build a self, a first-person, with a privileged relation to some set of mental features, out of the third-person stuff of intentional systems is the hard part, and that is where awareness1, the notion Arbib finds of dubious utility, is supposed to play its role. Content is only half the battle; consciousness is the other.

II

In Content and Consciousness I proposed to replace the ordinary word “aware” with a pair of technical terms, defined as follows:

The point of my aware1-aware2 distinction* was to drive a wedge between two sorts of allusions found in our everyday ascriptions of awareness or consciousness: allusions to privileged access and to control. What I want to establish is that these two notions wrongly coalesce in our intuitive grasp of what it is to be conscious of something. Many disagree with me, and Arbib is, I think, one of them, for he offers a new definition of his own, of an “awareness1.5,” that is supposed to split the difference and capture what is important in both my terms, and perhaps capture some other important features of consciousness as well. But what I will argue is that Arbib has gravitated to the emphasis on control at the expense of the emphasis on privileged access, and that the result is that his new notion offers some refinements to my crude definition of “aware2” but does not capture at all what I hoped to capture with “aware1.” First, to the refinements of “aware2.” Arbib points out that since a behavioral control system can tap sources of information or subprograms that find no actual exploitation in current behavior control but are only “potentially effective,” and since from such a multiplicity of sources, or “redundancy of potential command,” a higher-order choosing or decision element must pick or focus on one of these, it would be fruitful to highlight such target items as the objects of awareness for such a control system. 
So Arbib offers the following definition (which in its tolerance for hand-waving is a match for my definitions—he and I are playing the same game): “A is aware1.5 that p at time t if and only if p is a projection of the content of the mental state of A which expresses the concentration of A’s attention at time t.” I think this captures the connotations of control in our concept of awareness quite satisfactorily—better than my definition of “aware2.” If we want to attract the attention of a dog so he will be aware of our commands, or if we hope to distract the attention of a chess-playing computer from a trap we hope to spring (before it becomes aware of what we are doing), this definition does at least rough justice to those features of the situation we are trying to manipulate.

Let us suppose, as Arbib claims, that this notion of awareness1.5 can be interesting and useful in the analysis of complex natural and artificial behavioral control systems. Nevertheless, no matter how fancy such a control system becomes, if this is the only sort of awareness it has, it will never succeed in acquiring a soul. As Nagel would put it, there will not be something it is like to be that control system.7 This is surprising, perhaps, for complex control systems seem in the first blush of their intentionality to exhibit all the traditional marks of consciousness. They exhibit a form of subjectivity, for we distinguish the objective environment of the system from how the environment seems or appears to the system. Moreover, their sensory input divides into the objects of attention on the one hand, and the part temporarily ignored or relegated to the background on the other. They even may be seen to exhibit signs of self-consciousness in having some subsystems that are the objects of scrutiny and criticism of other, overriding subsystems. Yet while they can be honored with some mental epithets, they are not yet persons or selves. Somehow these systems are all outside and no inside, or, as Gunderson says, “always at most a he or she or an it and never an I or a me to me.”

The reason is, I think, that for purposes of control, the program-receptive features of mentality suffice: belief, desire, recognition, analysis, decision and their associates can combine to control (nonverbal) activity of any sophistication. And since even for creatures who are genuine selves, there is nothing it is like to believe that p, desire that q, and so forth, you can’t build a self, a something it is like something to be, out of the program-receptive features by themselves.

What I am saying is that belief does not have a phenomenology. Coming to believe that p may be an event often or even typically accompanied by a rich phenomenology (of feelings of relief at the termination of doubt, glows of smugness, frissons of sheer awe) but it has no phenomenology of its own, and the same holds for the other program-receptive features of mentality. It is just this, I suspect, that makes them program-receptive.

If we are to capture the program-resistant features in an artificial system, we must somehow give the system a phenomenology, an inner life. This will require giving the system something about which it is in a privileged position, something about which it is incorrigible, for whatever else one must be to have a phenomenology, one must be the ultimate authority with regard to its contents. On that point there is widespread agreement. Now I want to claim first that this incorrigibility, properly captured, is not just a necessary but a sufficient condition for having a phenomenology, and second, that my notion of awareness1 properly captures incorrigibility (see Chapter 9). This brings me to Arbib’s criticisms of the notion of awareness1, for they call for some clarifications and restatements on my part.

First Arbib points out, “the inadequacy of our verbal reports of our mental states to do justice to the richness that states must exhibit to play the role prescribed for them in system theory” (p. 583). At best, the utterances for which I claim a sort of infallibility express only a partial sampling of one’s inner state at the time. The content of one’s reports does not exhaust the content of one’s inner states. I agree. Second, he points out that such a sample may well be unrepresentative. Again, I agree. Finally, he suggests that it is “a contingent fact that some reports are sufficiently reliable to delude some philosophers into believing that reports of mental states are infallible” (p. 584), and not only do I find a way of agreeing with this shrewd observation; I think it provides the way out of a great deal of traditional perplexity. We are confused about consciousness because of an almost irresistible urge to overestimate the extent of our incorrigibility. Our incorrigibility is real; we feel it in our bones, and being real it is, of course, undeniable, but when we come to characterize it, we generously endow ourselves with capacities for infallibility beyond anything we have, or could possibly have, and even the premonition that we could not possibly have such infallibility comforts rather than warns us, for it ensures us that we are, after all, mysterious and miraculous beings, beyond all explaining. Once we see just how little we are incorrigible about, we can accommodate the claim that this incorrigibility is the crux of our selfhood to the equally compelling claim that we are in the end just physical denizens of a physical universe.

Arbib observes that “it follows from any reasonable theory of the evolution of language that certain types of report will be highly reliable” (p. 584). Indeed, for event-types in a system to acquire the status of reports at all, they must be, in the main, reliable (see Chapter 1, page 15). The trick is not to confuse what we are, and must be, highly reliable about, with what we are incorrigible about. We are, and must be, highly reliable in our reports about what we believe, and desire, and intend, but we are not infallible. We must grant the existence of self-deception, whether it springs from some deep inner motivation, or is the result of rather mundane breakdowns in the channels between our behavior-controlling states and our verbal apparatus.* What Arbib suggests, quite plausibly, is that some philosophers have confused the (correct) intuition that we must be authoritative in general in our reports of all our mental states, with the (false) intuition that we are incorrigible or infallible with regard to all our reports of our mental states. Infallibility, if it exists, must be a more modest endowment.

Let us consider how these highly reliable, but not incorrigible, reports of our inner, controlling states might issue from the states they report. Following Arbib, we can grant that one’s controlling state (one’s state of awareness1.5) at any time is immensely rich in functional capacities, and hence in content. Let us suppose that at some moment part of the content of Smith’s state of awareness1.5 is the belief, visually inculcated, that a man is approaching him. Since Smith is a well-evolved creature with verbal capacities, we can expect a further part of the content of this state of awareness1.5 to be a conditional command to report: “I see a man approaching,” or words to that effect. Using Putnam’s analogy, we can say that Smith’s state of awareness1.5 is rather like a Turing Machine state consisting of very many conditional instructions, one of which is the conditional instruction to print: “I see a man approaching.” Let us call Smith’s whole state of awareness1.5 state A. Suppose Smith now says, “I see a man approaching.” His verbal report certainly does not do justice to state A, certainly represents a partial and perhaps unrepresentative sample of the content of state A, and moreover, can occur in situations when Smith is not in state A, for there will no doubt be many other states of awareness1.5 that include the conditional instruction to say, “I see a man approaching,” or, due to malfunction or faulty design, Smith’s verbal apparatus may execute that instruction spuriously, when Smith’s state of awareness1.5 would not normally or properly include it. But suppose we break down state A into its component states, one for each instruction. Then being in state A will ipso facto involve being in state B, the state of being instructed to report: “I see a man approaching.” Now let us rename state B the state of awareness1 that one sees a man approaching. 
Abracadabra, we have rendered Smith “infallible,” for while his report “I see a man approaching” is only a highly reliable indicator that he is in state A (which would ensure that he would do the other things appropriate to believing a man is approaching, for instance), it is a foolproof indicator that he is in state B. This does not leave Smith being infallible about very much, but then we shouldn’t expect him to be—he’s only human, and infallibility about great matters is a Godlike, i.e., inconceivable, power.
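The state-A/state-B move can be modeled schematically; the model below is mine, not the essay's, and its "instructions" are invented placeholders. State A is a rich bundle of conditional instructions; state B is the single component instruction to report. By construction, the report is a foolproof indicator of B, but only a reliable indicator of A, since other global states can contain the very same component.

```python
# Hedged model (invented instructions): a global control state A decomposed
# into component states, one per conditional instruction. The report-command
# component is state B.

STATE_A = frozenset({
    "if_asked: report 'I see a man approaching'",
    "if_close: prepare_greeting",
    "if_threatening: retreat",
})

def component_states(global_state):
    # each conditional instruction counts as its own component state
    return {frozenset({instr}) for instr in global_state}

STATE_B = frozenset({"if_asked: report 'I see a man approaching'"})

# Being in A ipso facto involves being in B:
print(STATE_B in component_states(STATE_A))                # True

# But another global state can include B without being A, so the report
# indicates A only reliably, not infallibly:
STATE_A2 = STATE_B | frozenset({"if_startled: flinch"})
print(STATE_B in component_states(STATE_A2), STATE_A2 == STATE_A)
```

The "infallibility" falls out of the definitions alone: the report cannot occur as a report without its commanding component state, which is just the cheap but not worthless trick the next paragraph describes.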

Smith’s infallibility has been purchased, obviously, by a cheap trick: it is only by skewing the identity conditions of the state reported so that reportorial truth is guaranteed that we get reportorial infallibility. But the trick, while cheap, is not worthless, for states of awareness1 so defined have a role to play in the description of systems with verbal capacities: we must be able to distinguish the command states of a system’s verbal apparatus, what the system “means to say” in a particular instance, so that subsequent failures in execution—the merely verbal slips—can have a standard against which to be corrected.

Smith has not reported that he sees a man approaching if he makes a verbal slip, or misuses a word, even if the end result is his utterance of the words: “I see a man approaching.” Smith has reported that he sees a man approaching only if he said what he meant to say: that is, only if his actual utterance as executed meets the standards set by his state of awareness1. But if that is what it is for Smith to report, and not merely utter sounds, then whenever Smith reports, his reports will be guaranteed expressions of his state of awareness1.

This does not mean that we can give independent characterizations of some of Smith’s utterances, namely his reports, that happen to be foolproof signs that Smith is in certain independently characterized internal states, namely his states of awareness1. That would be miraculous. But we wouldn’t want to do that in any case, for if we could tell, by examination, which of Smith’s utterances were his genuine, error-corrected reports, he would not have privileged access, for we could be in a perfect position to determine his states of awareness1. The relationship between internal physical states of Smith, and their external manifestations in utterance is just garden-variety causation, and so any normal linkages between them are subject to all the possibilities of error or malfunction any physical system is subject to. It is just that the concepts of a report and of awareness1 are so defined that Smith has an infallible capacity to report his states of awareness1. But does this amount to anything at all of interest? Well, if we happened to want to know what state of awareness1 Smith was in, we could do no better than to wait on Smith’s report, and if we were unsure as to whether what we heard was a genuine report of Smith’s, again we could do no better than to rely on Smith’s word that it was, or on the voucher implicit in his refraining from taking back or correcting what he said. But would we ever want to know what anyone’s state of awareness1 was? If we wanted to know whether what Smith said was what he meant to say, we would. And we might be interested in that, for if Smith said what he meant to say, we have a highly reliable, though not infallible, indicator of Smith’s state of awareness1.5. 
Or, if we suspected that something was awry in Smith’s perceptual apparatus (because his account of what he saw did not match what was in front of his eyes), we would be interested in his states of awareness1, for if Smith said what he meant to say, then our response to his aberrant perceptual reports would be not, “That can’t be what you are aware of, since there is no man approaching,” but, “Since you are aware of a man approaching, when there is no man approaching, there must be something wrong with your eyes or your brain.”

Note that Smith’s access to his states of awareness1 is both privileged and non-inferential, unlike ours. When we want to know what state of awareness1 Smith is in, we must ask him, and then infer on the basis of what happens what the state is. Or we might someday be able to take Smith’s brain apart, and on the basis of our knowledge of its interconnections make a prediction of what Smith would say, were we to ask him, and what he would say, were we further to ask him if his report was sincere, etc., and on the basis of these predictions infer that he was in a particular state of awareness1. But Smith doesn’t have to go through any of this. When we ask him what state of awareness1 he is in, he does not have to ask anyone or investigate anything in turn: he just answers. Being asked, he comes to mean to say something in answer, and whether what he means to say then is right or wrong (relative to what he ought to mean to say, what he would say if his brain were in order, if he were aware1 of what he is aware1.5 of), if he says it, he will thereby say what he is aware1 of.* By being a system capable of verbal activity, Smith enters the community of communicators. He, along with the others, can ask and answer questions, make reports, utter statements that are true or false. If we consider this group of persons and ask if there is some area of concern where Smith is the privileged authority, the answer is: in his reports of awareness1. Other persons may make fallible, inferential statements about what Smith is aware1 of. Smith can do better.

I have said that the extent of our infallibility, as opposed to our high reliability, is more restricted than some philosophers have supposed. Our infallible, non-inferential access consists only in our inevitable authority about what we would mean to say at a particular moment, whether we say it or not. The picture I want to guard against is of our having some special, probing, evidence-gathering faculty that has more access to our inner states (our states of awareness1.5 perhaps) than it chooses to express in its reports. Our coming to mean to say something is all the access we have, and while it is infallible access to what we mean to say, it is only highly reliable access to what state is currently controlling the rest of our activity and attitudes. Some philosophers have supposed otherwise. Gunderson, for example, says,

Consider any intentional sentence of the form “I ____ that there are gophers in Minnesota” where ‘____’ is to be filled in by an intentional verb (‘believe’, ‘suppose’, ‘think’, etc.), and contrast our way of knowing its truth (or falsity) with any non-first-person variant thereof. … That is, if I know that “I suppose that there are gophers in Minnesota” is true, the way in which I come to know it is radically different from the way I might come to know that “Dennett supposes that there are gophers in Minnesota” is true. (my italics)

The verb “suppose” has been nicely chosen; if it is taken in the sense of episodic thinking, what Gunderson at this very moment is supposing to himself, then Gunderson has special, non-inferential, incorrigible access to what he supposes, but if it is taken as a synonym for “believe,” then Gunderson is in only a contingently better position than I am to say whether he supposes there are gophers in Minnesota, for he is more acquainted with his own behavior than I happen to be. It would be odd to suppose (in the sense of “judge”) that there are gophers in Minnesota without supposing (in the sense of “believe”) that there are gophers in Minnesota, but not impossible.8 That is, Gunderson’s episode of meaning to himself that there are gophers in Minnesota is something to which his access is perfect, but it is itself only a highly reliable indicator of what Gunderson believes. Lacking any remarkable emotional stake in the proposition “There are gophers in Minnesota,” Gunderson can quite safely assume that his judgment is not a piece of self-deception, and that deep in his heart of hearts he really does believe that there are gophers in Minnesota, but that is to make a highly reliable inference.

There is more than one verb that straddles the line as “suppose” does. “Think” is another, and a most important one. If one supposes that it is our thinking that actually controls our behavior, then we must grant that we do our thinking subconsciously, beyond our direct access, for we have only fallible and indirect, though highly reliable, access to those states, events, processes that occur in our control systems. If one supposes on the other hand that one’s thinking is one’s “stream of consciousness,” the episodes to which we have privileged access, then we must grant that thinking is an activity restricted to language-users, and only circumstantially related to the processes that account for their self-control. The two notions of thinking can each lay claim to being ordinary. Arbib champions one, and Gunderson the other. When Arbib says of a verbal report that “the phrase is but a projection of the thought, not the thought itself. … Thus utterances like ‘I see a man approaching’ express mere aspects of the robot’s total state,” he seems to be identifying the total state of awareness1.5 with the robot’s thoughts, for he says “many different aspects of its current ‘thoughts’ could have been elicited by different questions” (my italics). The current thoughts, it seems, coexist not serially in a stream of consciousness, not as distinct episodes to which anyone, even the robot, has access in any sense, but in parallel, in the processes of control. I don’t think it is wrong to think of thought in this way, and I also don’t think it is wrong to think of thought as that contentful stream to which I have privileged, non-inferential access. I even think that in the last analysis one is not thinking about thought unless one is thinking of something with both these features.9 It is only wrong, I think, to think that this dual prescription can actually be filled by any possible entities, states, or events. 
In just the same way someone would be mistaken who thought there was some physical thing that was all at once the voice I can strain, lose, recognize, mimic, record, and enjoy.10

There is, then, a sense in which I am saying there is no such thing as a thought. I am not denying that there are episodes whose content we are incorrigible about, and I am not denying that there are internal events that control our behavior and can, in that role, often be ascribed content. I am denying, however, that in providing an account or model of one of these aspects one has provided in any way for the other. And I am insisting that thoughts and pains and other program-resistant features of mentality would have to have both these aspects to satisfy their traditional roles. The pain in my toe, for instance, is surely not just a matter of my meaning to tell you about it, nor is it something I am only inferentially or indirectly aware of, that is disrupting or otherwise affecting the control of my behavior. Then, since I am denying that any entity could have the features of a pain or a thought, so much the worse for the ontological status of such things.

Notes