Introduction

What is a philosophical theory of the mind?

I claim that the essays in this book taken together express a theory of the mind, so I should begin by explaining what I take a theory of the mind to be. Several very different sorts of intellectual productions are called theories: particle theory in physics, set theory in mathematics, game theory, literary theory, the theory of evolution, the identity theory in the philosophy of mind. Some things are called theories that might better be called hypotheses. The theory of evolution by natural selection is surely a theory of sorts, but its rival, creationism, is a theory only by courtesy. It lacks the parts, the predictive power, the organization of a theory; it is merely a hypothesis, the hypothesis that the theory of evolution by natural selection is false and that God created the species. I suspect we call it a theory to acknowledge that it is a genuine alternative to something that clearly is a theory. Creationism, after all, might be true and Darwinism false—which goes to show that one needn’t always counter a theory with a theory. We won’t need a theory of clairvoyance, for instance, if we can confirm the hypothesis that all apparent clairvoyants are cheats. Hoaxism is a worthy opponent of the most elaborate theory of clairvoyance, and it consists of but a single statement, supported, of course, by a good deal of sleuthing.

Philosophical theories are often hypotheses of this sort: large generalizations that do not ramify into vast organized structures of details, or predict novel effects (like theories in chemistry or physics), but are still vulnerable to disconfirmation (like hoaxism), and require detailed and systematic support. Thus “minds are just brains” is one very informal way of expressing a version of physicalism or the identity theory of mind (so-called because it identifies mental events with physical events in the brain), and “minds are not just brains; they're something non-physical” is one very informal way of expressing a version of dualism (so-called because it claims there are at least two fundamental sorts of events or things). Since philosophy often concerns itself with just such very general hypotheses, and the patterns of implication one lands oneself in when defending such hypotheses, philosophy often appears to the outsider to be a ludicrously overpopulated battlefield of “isms,” with each imaginable variation on each general assertion pompously called a theory and given a proprietary name.

This appearance is untrustworthy, however, and the proliferation of theories is not really an embarrassment. It is surely initially reasonable to suppose that such a general hypothesis about the mind makes sense, and then it is also reasonable both to suppose that either it or its denial is true, and to wonder which. A sensible way to try to answer this question is to explore the evidence for, and implications of, the possible alternatives, and defend the most plausible candidate until proven wrong. That process however soon gets complicated, and it becomes imperative to distinguish one’s hypothesis very precisely from closely resembling hypotheses whose hidden flaws one has uncovered. Technical terms—jargon—permit one to triangulate the possible positions in logical space and thus keep track of the implication chains one is avoiding or exploiting. Thus are born interactionism, anomalous monism, logical behaviorism, Turing machine functionalism and the other named locations in the logical space of possible general claims about the nature of the mind.

To a first approximation then a philosophical theory of the mind is supposed to be a consistent set of answers to the most general questions one can ask about minds, such as “are there any?,” “are they physical?,” “what happens in them?” and “how do we know anything about them?” Such a theory is not supposed to compete with or supplant neurophysiological or psychological theories, but rather both to ground such theories and to supplement them. It can ground such theories by providing the justification for the fundamental metaphysical assumptions such theories must unavoidably make. It can supplement them by providing answers to the simple, straightforward questions that those scientific theories are utterly unable to answer from their own resources. Every brain scientist knows that even in the Golden Age of neurophysiological knowledge, when the activity of every tract of fibers will be well understood, questions like “what is consciousness?” and “what is it about pains that makes them awful?” will find no answers in their textbooks—unless those textbooks include chapters of philosophy.

Many psychologists and brain scientists are embarrassed by the philosophical questions, and wish no one would ask them, but of course their students persist in asking them, because in the end these are the questions that motivate the enterprise. Synaptic junctures and response latencies have some intrinsic interest, to be sure, but if there were no hope that compounding enough facts about these would lead to discoveries about our minds, enthusiasm for such research would not be as keen as it is. The distaste of many empirical scientists for the philosophical questions is no doubt due to the fact that until very recently philosophers’ attempts to answer them were conducted in blithe ignorance of and indifference to the discoveries, theories and problems of those sciences. That indifference was galling, I am sure—as galling as the counter-disdain of the scientists—but reasonable: until very recently there were few discoveries, theories or problems in the sciences that promised to illuminate the philosophical issues at all.

Times have changed. Psychology has become “cognitive” or “mentalistic” (in many quarters) and fascinating discoveries have been made about such familiar philosophical concerns as mental imagery, remembering and language comprehension. Even the brain scientists are beginning to tinker with models that founder on conceptual puzzles. There is, for instance, the problem of avoiding the “grandmother neuron.” Many otherwise plausible theory sketches in brain science seem to lead ineluctably to the view that the “representation” of each particular “concept” or “idea” will be the responsibility of a particular neuron or other small part of the brain. Suppose your “grandmother” neuron died; not only could you not say “grandmother,” you couldn’t see her if she was standing right in front of you. You couldn’t even think about grandmothers at all; you would have a complete cognitive blind spot. Nothing remotely like that pathology is observed, of course, and neurons malfunction or die with depressing regularity, so for these and other reasons, theories that require grandmother neurons are in trouble. The problem is to find a theory that avoids this difficulty in all its guises, and this is a problem so abstract as to be properly philosophical. Many other problems arising in these sciences—problems about concept learning, reasoning, memory, decision—also have an unmistakably philosophical cast.

Philosophy of mind has responded to these developments by becoming “naturalized”; it has become a branch of the philosophy of science concerning itself with the conceptual foundations and problems of the sciences of the mind.1* This has changed the shape and texture of philosophical theories of the mind by introducing into the discussions of the traditional issues many of the data and conceptual tools of the new scientific approaches, and raising new issues arising from the puzzles and pitfalls of those approaches.

Philosophy of mind is unavoidable. As soon as one asserts anything substantive about anything mental, one ipso facto answers at least by implication one or more of the traditional questions and thus places oneself in the camp of an ism. Perhaps some theorists arrive at their positions by methodically filling in the blanks on the branching checklist of possibilities, but this is not a strategy I recommend. The views already charted, named and cataloged have all been ably defended, but none has achieved consensus. One is not apt to find the magic words of support that will suddenly bring victory to an already articulated theory. A better strategy, or at least the strategy I have tried to follow, is to start not by looking hard at the possible answers to the traditional questions posed in traditional terms, but by looking hard at the empirical data, psychological theories, models of brain function and so forth, and letting the considerations and saliencies that appeared there suggest what would be important to keep distinct in a theory of the mind. The result is a theory that looks like an ungainly and inelegant hybrid, an unnameable hodge-podge of theory parts, when measured against the traditional pattern of categories. Since I think my theory carves nature at the joints, however, I am inclined to claim that it is the traditional pattern that is misshapen. For this reason I have until now refrained from giving my theory a name, and refrained from giving explicit answers to some of the most popular watershed questions, but the questions do remain to be answered, and now it is useful and perhaps even obligatory for me to give direct answers and take sides.

What is my theory?

My theory can be distinguished easily from its rivals via a brief and oversimplified history of recent brands of physicalism. In the beginning was type identity theory. It attempted to answer two questions. To the question, “What are mental events?” it answered, “Every mental event is (identical with) a physical event in the brain,” and to the question, “What do two creatures have in common when they both believe that snow is white (both feel a twinge of pain, imagine an elephant, want a cracker)?” it answered, “In each case where creatures have something mental in common, it is in virtue of having something physical in common—e.g., their brains are in the same physical state or both exhibit the same physical feature.” The answer to the first question made the view an identity theory; the answer to the second established that types of mental events were claimed to correspond to physically characterizable types of brain events. In answering these two questions, type identity theory attempted to discharge two obligations, one “metaphysical” and the other “scientific.” The first answer amounts to the mere denial of dualism, the insistence that we don’t need a category of non-physical things in order to account for mentality. The second takes on the responsibility of explaining commonalities—the task isolated by Socrates’ incessant demands to know what is shared by things called by the same name.

Few today would quarrel with the first answer, but the second answer is hopelessly too strong. The claim it makes is that for every mentalistic term, every “mental” predicate “M,” there is some predicate “P” expressible in the vocabulary of the physical sciences such that a creature is M if and only if it is P. Symbolically,

(1) (x)(x is M ≡ x is P)

For instance, for all x, x is thinking about baseball if and only if x has F-neurons in electro-chemical state G; or, something is in pain if and only if it has a brain in such and such a physical condition. This is all utterly unlikely.2 Consider some simpler cases to see why. Every clock and every can-opener is no doubt nothing but a physical thing, but is it remotely plausible to suppose or insist that one could compose a predicate in the restricted language of physics and chemistry that singled out all and only the can-openers or clocks? (What is the common physical feature in virtue of which this grandfather clock, this digital wristwatch, and this sundial can be ascribed the predicate “registers 10:00 A.M.”?) What can-openers have peculiarly in common is a purpose or function, regardless of their physical constitution or even their design, and the same is true of clocks.
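The clock example can be made vivid with a small sketch (mine, not the book's, and every class and attribute in it is hypothetical): three "clocks" whose internal states have nothing physical in common—a tally of pendulum ticks, a bit pattern in a register, a shadow angle—all satisfy the functional predicate "registers 10:00 A.M."

```python
# Hypothetical illustration: three physically dissimilar mechanisms,
# one shared functional predicate. No predicate of physics unites the
# pendulum count, the bit register, and the shadow angle; only the
# function "telling the time" does.

class GrandfatherClock:
    def __init__(self):
        self.pendulum_ticks = 36000        # one tick per second since midnight
    def time(self):
        return self.pendulum_ticks // 3600 % 24, 0

class DigitalWatch:
    def __init__(self):
        self.register = 0x0A00             # hours in high byte, minutes in low
    def time(self):
        return self.register >> 8, self.register & 0xFF

class Sundial:
    def __init__(self):
        self.shadow_angle_deg = 150.0      # dial marked at 15 degrees per hour
    def time(self):
        return int(self.shadow_angle_deg / 15), 0

# All three satisfy "registers 10:00 A.M." despite sharing no physical feature.
assert all(c.time() == (10, 0)
           for c in (GrandfatherClock(), DigitalWatch(), Sundial()))
```

The predicate that collects these devices is specified at the level of function, not of physics or chemistry—which is just the moral the can-opener case is meant to teach.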

This recognition led to the second wave of physicalism: Turing machine functionalism. The minimal denial of dualism was maintained—every mental event was a physical event—but the requirements for answering the second question were revised: for every “mental” predicate “M” there is some predicate “F” expressible in some language that is physically neutral, but designed to specify abstract functions and functional relations. The obvious candidates for such a language were the systems used for describing computers or programs. The functional structure of a computer program can be described in an abstract way that is independent of any particular description of physical “hardware.” The most general functional language is the system for describing computers as “Turing machines.” (An elementary introduction to the concept of a Turing machine is provided in Chapter 13.) The states and activities of any digital computer or program can be given a mathematical description as states and activities of a unique (numbered) Turing machine, and this description is its mathematical fingerprint that will distinguish it from all functionally different computers or programs, but not from computers and programs that differ only in “physical realization.” There are problems with this formulation, not germane to the issue at hand, but supposing them to be eliminable, the Turing machine functionalist proposed to say things like

(2) (x)(x believes that snow is white ≡ x “realizes” some Turing machine k in logical state A)

In other words, for two things both to believe that snow is white, they need not be physically similar in any specifiable way, but they must both be in a “functional” condition or state specifiable in the most general functional language; they must share a Turing machine description according to which they are both in some particular logical state (which is roughly like two different computers having the same program and being in the same “place” in the program). The “reduction” of mental predicates to physical predicates attempted by type identity theory has been replaced in this view by a reduction of mental predicates to Turing machine predicates. While the resulting theory is only a token identity theory—each individual mental event is (identical with) some individual physical brain event or other—it is a type functionalism—each mental type is identifiable as a functional type in the language of Turing machine description.

But alas, this second answer is still too strong (as I argue in Chapter 2).3 The supposition that there could be some principled way of describing all believers and pain-sufferers and dreamers as Turing machines so that they would be in the same logical state whenever they shared a mental epithet is at best a fond hope. There is really no more reason to believe you and I “have the same program” in any relaxed and abstract sense, considering the differences in our nature and nurture, than that our brains have identical physico-chemical descriptions. What could be done to weaken the requirements for the second answer still further?

Consider what I will call token functionalism, the view that while every mental event is indeed some physical event or other, and moreover some functional event or other (this is the minimal denial of epiphenomenalism—see footnote on p. 191), mental types are not definable as Turing machine types. How will we answer the Socratic question? What do two people have in common when they both believe that snow is white? I propose this:

(3) (x)(x believes that snow is white ≡ x can be predictively attributed the belief that snow is white)

This appears to be blatantly circular and uninformative—“A horse is any animal to which the term 'horse' truly applies.” The language on the right seems simply to mimic the language on the left. What has happened to the goal of reduction? It was, I submit, a mistaken goal.4

All we need to make an informative answer of this formula is a systematic way of making the attributions alluded to on the right-hand side. Consider the parallel case of Turing machines. What do two different realizations or embodiments of a Turing machine have in common when they are in the same logical state? Just this: there is a system of description such that according to it both are described as being realizations of some particular Turing machine, and according to this description, which is predictive of the operation of both entities, both are in the same state of that Turing machine’s machine table. One doesn’t reduce Turing machine talk to some more fundamental idiom; one legitimizes Turing machine talk by providing it with rules of attribution and exhibiting its predictive powers. If we can similarly legitimize “mentalistic” talk, we will have no need of a reduction. That is the point of my concept of an intentional system (see Chapter 1). Intentional systems are supposed to play a role in the legitimization of mentalistic predicates parallel to the role played by the abstract notion of a Turing machine in setting down rules for the interpretation of artifacts as computational automata. I fear my concept is woefully informal and unsystematic compared with Turing’s, but then the domain it attempts to systematize—our everyday attributions in mentalistic or intentional language—is itself something of a mess, at least compared with the clearly defined mathematical field of recursive function theory, the domain of Turing machines.
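The point about legitimizing rather than reducing Turing machine talk can be sketched in code (a toy of my own devising, not anything from the book): one machine table, two physically different "realizations"—a tape stored as a list, a tape stored as a dict—and a single system of description under which both end up in the same logical state.

```python
# A minimal sketch of Turing-machine attribution: the machine table below
# is a toy parity checker. Two different "hardware" realizations are driven
# by the same table; the attribution "is in state 'odd'" is licensed for
# both because the table predicts the operation of each.

# Machine table: (state, symbol read) -> (new state, symbol written, head move)
TABLE = {
    ("even", "1"): ("odd",  "1", 1),
    ("even", "0"): ("even", "0", 1),
    ("odd",  "1"): ("even", "1", 1),
    ("odd",  "0"): ("odd",  "0", 1),
}

def run(table, read, write, tape_len):
    """Drive any 'hardware' exposing read/write; return the final logical state."""
    state, head = "even", 0
    while head < tape_len:
        state, sym, move = table[(state, read(head))]
        write(head, sym)
        head += move
    return state

# Realization 1: tape as a Python list.
tape_a = list("1101")
state_a = run(TABLE, lambda i: tape_a[i],
              lambda i, s: tape_a.__setitem__(i, s), len(tape_a))

# Realization 2: tape as a dict keyed by position.
tape_b = {i: c for i, c in enumerate("1101")}
state_b = run(TABLE, lambda i: tape_b[i],
              lambda i, s: tape_b.__setitem__(i, s), len(tape_b))

assert state_a == state_b == "odd"   # three 1s on the tape: odd parity
```

Nothing here reduces "being in state 'odd'" to list-talk or dict-talk; the state is attributed to each realization via the rules of the table, and the attribution earns its keep by being predictive of both.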

The analogy between the theoretical roles of Turing machines and intentional systems is more than superficial. Consider that warhorse in the philosophy of mind, Brentano’s Thesis that intentionality is the mark of the mental: all mental phenomena exhibit intentionality and no physical phenomena exhibit intentionality. (The elusive concept of intentionality is introduced and explained in Chapters 1, 4 and 12.) This has been traditionally taken to be an irreducibility thesis: the mental, in virtue of its intentionality, cannot be reduced to the physical.5 But given the concept of an intentional system, we can construe the first half of Brentano’s Thesis—all mental phenomena are intentional—as a reductionist thesis of sorts, parallel to Church’s Thesis in the foundations of mathematics. According to Church’s Thesis, every “effective” procedure in mathematics is recursive, that is, Turing-computable. (The idea, metaphorically, is that any mathematical task for which there is a clear recipe composed of simple steps can be performed by a very simple computer, a universal Turing machine, the universal recipe-follower.) Church’s Thesis is not provable, since it hinges on the intuitive and unformalizable notion of an effective procedure, but it is generally accepted, and it provides a very useful reduction of a fuzzy-but-useful mathematical notion to a crisply defined notion of apparently equivalent scope and greater power. Analogously, the claim that every mental phenomenon is intentional-system-characterizable would, if true, provide a reduction of the mental—a domain whose boundaries are at best fixed by mutual acknowledgment and shared intuition—to a clearly defined domain of entities, whose principles of organization are familiar, relatively formal and systematic.

In Chapter 1 the question is posed: are there mental treasures that cannot be purchased with intentional coin? The negative answer, like Church’s Thesis, cannot be proved, but only made plausible by the examination of a series of “tough” cases in which mental phenomena are (I claim) captured in the net of intentional systems. That is the major burden of the book, and individual essays tackle particular phenomena: invention in Chapter 5, dreams in Chapter 8, mental images and some of their kin in Chapters 9 and 10, pain in Chapter 11, and free will in Chapters 12 through 15. This is hardly a complete list of mental treasures, but reasons are given along the way, in these chapters and in others, for thinking that parallel treatments can be devised for other phenomena. Complete success in this project would vindicate physicalism of a very modest and undoctrinaire sort: all mental events are in the end just physical events, and commonalities between mental events (or between people sharing a mentalistic attribute) are explicated via a description and prediction system that is neutral with regard to physicalism, but just for that reason entirely compatible with physicalism. We know that a merely physical object can be an intentional system, even if we can't prove either that every intentional system is physically realizable in principle, or that every intuitively mental item in the world can be adequately accounted for as a feature of a physically realized intentional system.

If one insisted on giving a name to this theory, it could be called type intentionalism: every mental event is some functional, physical event or other, and the types are captured not by any reductionist language but by a regimentation of the very terms we ordinarily use—we explain what beliefs are by systematizing the notion of a believing-system, for instance. This theory has the virtues of fitting neatly into a niche left open by its rivals and being expressible in a few straightforward general statements, but in that clean, uncomplicated form it is unacceptable to me. Sadly for the taxonomists, I cannot rest content with “type intentionalism” as it stands, for it appears to assume something I believe to be false: viz., that our ordinary way of picking out putative mental features and entities succeeds in picking out real features and entities. Type intentionalism as so far described would assume this by assuming the integrity of the ordinary mentalistic predicates used on the left-hand side of our definition schema (3). One might uncritically suppose that when we talk, as we ordinarily do, of people’s thoughts, desires, beliefs, pains, sensations, dreams, experiences, we are referring to members in good standing of usefully distinct classes of items in the world—“natural kinds.” Why else would one take on the burden of explaining how these “types” are reducible to any others? But most if not all of our familiar mentalistic idioms fail to perform this task of perspicuous reference, because they embody conceptual infelicities and incoherencies of various sorts. I argue for this thesis in detail with regard to the ordinary concepts of pain in Chapter 11, belief in Chapters 6 and 16, and experience in Chapters 8, 9, and 10, but the strategic point of these criticisms is more graphically brought out by a fanciful example.

Suppose we find a society that lacks our knowledge of human physiology, and that speaks a language just like English except for one curious family of idioms. When they are tired they talk of being beset by fatigues, of having mental fatigues, muscular fatigues, fatigues in the eyes and fatigues of the spirit. Their sports lore contains such maxims as “too many fatigues spoils your aim” and “five fatigues in the legs are worth ten in the arms.” When we encounter them and tell them of our science, they want to know what fatigues are. They have been puzzling over such questions as whether numerically the same fatigue can come and go and return, whether fatigues have a definite location in matter or space and time, whether fatigues are identical with some particular physical states or processes or events in their bodies, or are made of some sort of stuff. We can see that they are off to a bad start with these questions, but what should we tell them? One thing we might tell them is that there simply are no such things as fatigues—they have a confused ontology. We can expect some of them to retort: “You don't think there are fatigues? Run around the block a few times and you'll know better! There are many things your science might teach us, but the non-existence of fatigues isn't one of them.”

We ought to be unmoved by this retort, but if we wanted to acknowledge this society’s “right” to go on talking about fatigues—it’s their language, after all—we might try to accommodate by agreeing to call at least some of the claims they make about fatigues true and false, depending on whether the relevant individuals are drowsy, exhausted or feigning, etc. We could then give as best we could the physiological conditions for the truth and falsity of those claims, but refuse to take the apparent ontology of those claims seriously; that is, we could refuse to attempt any identification of fatigues. Depending on how much we choose to reform their usage before answering their questions at all, we will appear to be countenancing what is called the disappearance form of the identity theory, or eliminative materialism—for we legislate the putative items right out of existence. Fatigues are not good theoretical entities, however well entrenched the term “fatigues” is in the habits of thought of the imagined society. The same is true, I hold, of beliefs, desires, pains, mental images, experiences—as all these are ordinarily understood. Not only are beliefs and pains not good theoretical things (like electrons or neurons), but the state-of-believing-that-p is not a well-defined or definable theoretical state, and the attribute, being-in-pain, is not a well-behaved theoretical attribute. Some ordinary mental-entity terms (but not these) may perspicuously isolate features of people that deserve mention in a mature psychology; about such features I am a straightforward type-intentionalist or “homuncular functionalist,” as Lycan calls me,6 for reasons that will be clear from Chapters 5, 7, 9 and 11. About the theoretical entities in a mature psychology that eventually supplant beliefs, desires, pains, mental images, experiences … I am also a type-intentionalist or homuncular functionalist. About other putative mental entities I am an eliminative materialist. 
The details of my view must for this reason be built up piecemeal, by case studies and individual defenses that are not intended to generalize to all mental entities and all mental states. It is no easier to convince someone that there are no pains or beliefs than it would be to convince our imaginary people that there are no fatigues. If it can be done at all (supposing for the moment that one would want to, that it is true!), it can only be done by subjecting our intuitions and convictions about particular cases to skeptical scrutiny.

The foundation for that task is laid in Part I, where the concept of an intentional system is defined and subjected to a preliminary exploration in Chapter 1. Chapter 2 develops arguments against type functionalism and for type intentionalism, and in the second half provides a first look at some of the themes about consciousness explored in detail in Part III. Chapter 3 examines the prospects of a very tempting extension of intentionalism: the brain writing hypothesis. If we can predict someone’s behavior only by ascribing beliefs (and other intentions) to him, mustn’t we suppose those beliefs are somehow stored in him and used by him to govern his behavior, and isn’t a stored sentence a good model—if not our only model—for a stored belief? I argue that while it might turn out that there is some such brain writing that “encodes” our thoughts, the reasons for believing so are far from overwhelming. Further caveats about brain writing are developed in other chapters, especially Chapter 6. It is important to protect type intentionalism, as a general theory of the nature of mentalistic attributions, from the compelling but problem-ridden “engineering” hypothesis that all sophisticated intentional systems must share at least one design feature: they must have an internal system or language of mental representation. In some very weak sense, no doubt, this must be true, and in a variety of strong senses it must be false. What intermediate sense can be made of the claim is a subject of current controversy to which I add fuel in several of the chapters.

Part II explores the foundations of psychology in more detail, and attempts to describe the conceptual environment in which psychology could survive its infancy and grow to maturity. Current wisdom has it that behaviorism is dead and that “cognitive science,” an alliance of cognitive psychology, linguistics and artificial intelligence, is the wave of the future. I share this optimism in part, but see some conceptual traps and false avenues worth pointing out. Chapters 4 and 5 attempt to diagnose both the weaknesses and underestimated strengths of behaviorism. They yield a vision of psychology more unified in both its methods and unsolved problems than the more impassioned spokesmen would have us believe. Chapter 6, a review of Fodor’s important book, The Language of Thought, promotes a cautious skepticism about some of the theoretical underpinnings of the cognitive science movement, and Chapter 7 is an introductory travel guide to the field of artificial intelligence, recommending points of interest while warning of alien customs and unreliable accommodations. Since some enemies of artificial intelligence have viewed the piece as an unseemly glorification of the field and some workers in the field have regarded it as an unsympathetic attack, it probably strikes the right balance.

Part III then tackles some of the traditional questions that have puzzled philosophers of mind concerned with consciousness: what are sensations, dreams, mental images, pains? How can they be captured in the net of psychological theory? Together these chapters constitute a considerable revision of the account of consciousness given in the second half of Content and Consciousness, though most of the strong claims about the relation of consciousness to language survive in one form or another.

Part IV considers a variety of related questions that might be grouped under one general question: can psychology support a vision of ourselves as moral agents, free to choose what we will do and responsible for our actions? Many have thought that materialism or mechanism or determinism—all apparent assumptions of the reigning psychological theories—threaten this vision, but in Chapters 12 and 13 I consider the most persuasive of the arguments to this effect and reveal their flaws. Chapter 12 attempts to allay the worry that sheer mechanism—deterministic or indeterministic—would rule out free will and responsibility. By uncovering the missteps in the most compelling arguments for this thesis I claim not to refute it, but at least to strip it of its influence. Chapter 13 tackles the widespread conviction that Gödel’s Theorem proves we cannot be “machines,” and illustrates the fundamental confusions that give this idea whatever plausibility it has. Chapter 14 argues that persons can be defined as a particular subclass of intentional systems, “higher order” intentional systems with the capacity for natural language and (hence) consciousness in the fullest sense. In some regards then this is the unifying essay of the collection. Chapter 15 explores the relationship between free will and indeterminism, and argues that what is properly persuasive in the libertarians’ insistence that our wills be undetermined can be captured in a neutral model of rational decision-making. Chapter 16 develops this model of decision-making a bit further and proposes a reform in our ordinary concept of belief, sharply distinguishing two phenomena I call belief and opinion. I view these chapters as developing fragments of a positive psychological theory of moral agents or persons. Chapter 17 is dessert.

Notes