2
The Cognitive-Scientific Revolution
 
COMPUTATIONALISM AND THE PROBLEM OF MENTAL CAUSATION
[The computer metaphor is] the only respect in which contemporary Cognitive Science represents a major advance over the versions of [representational theories of mind] that were its eighteenth- and nineteenth-century predecessors.
—Jerry Fodor
THE “AMAZINGLY HARD PROBLEM”: MENTAL CAUSATION AND PHILOSOPHY OF MIND
In an article surveying the state of a widely entertained philosophical discussion, Paul Boghossian ventures the conclusion that “meaning properties appear to be neither eliminable, nor reducible. Perhaps it is time that we learned to live with that fact” (1989, 548). Boghossian’s conclusion follows from his survey of “the rule-following considerations”—from a line of argument, that is, framed by Saul Kripke’s (1982) influential reading of Wittgenstein’s famous but elusive arguments concerning the impossibility of a private language. On Boghossian’s account, these arguments most fundamentally disclose the normativity of conceptual content—the fact, on one way of putting the point, that the content of any belief cannot be identified apart from considerations having to do with how or whether it might be true. As Boghossian says, “if I mean something by an expression, then the potential infinity of truths that are generated as a result are normative truths: they are truths about how I ought to apply the expression, if I am to apply it in accord with its meaning, not truths about how I will apply it” (1989, 509). Boghossian takes it that the rule-following considerations suggest that our understanding any thought or expression as meaning what it’s about cannot be explained by reference only (for example) to behavioristic dispositions; the normativity that constitutively characterizes mental content cannot be accounted for in terms that are not themselves normative or semantic.
I’ll explore “normativity” more in chapter 3; for now, I want to focus our concerns by considering what Boghossian thinks is the most difficult problem occasioned by his argument for the ineliminability of meaning from our understanding of mental content. What makes this conclusion hard to accept, he says, is “the question of mental causation: how are we to reconcile an anti-reductionism about meaning properties with a satisfying conception of their causal or explanatory efficacy?” More starkly, “how is an antireductionist about content properties to accord them a genuine causal role without committing himself, implausibly, to the essential incompleteness of physics? This is, I believe, the single greatest difficulty for an anti-reductionist conception of content” (1989, 548–549).1
The problem thus introduced by Boghossian is basically the converse of the problem we saw Dharmakīrti address in the last chapter. While Dharmakīrti was concerned with how or whether physical events might give rise to moments of mind, the problem raised by Boghossian is how or whether mental content can cause physical events in the body. After all, human bodies are manifestly material objects and so, presumably, subject to whatever laws of physics we take to describe the actions of other such objects. Is there a way, then, to understand the content of mental events to be causally efficacious with respect to the physically described actions of our bodies, so that intentional phenomena—phenomena like believing something or having a reason—can be thought to play some explanatory role in our behavior? While the conscious character of the mental has been characterized by some philosophers as “the hard problem,” the question now before us has been characterized as the amazingly hard problem.2
The difficulty is that it’s hard to see how the semantic content of reasons and beliefs could cause anything, insofar as such content constitutively involves the kinds of abstractions (concepts, states of affairs, linguistic universals) that, as Dharmakīrti emphasizes, do not have causal efficacy. Many have supposed that things like having a reason must therefore admit of description otherwise than in terms of their semantic content—that reasons and beliefs really do whatever they do only under some description (as, say, “instantiated in neurophysiological event x”) that may not make reference to what their subjects take them to be about. Whether it makes sense to think this is right is among the central issues of Kant’s Critique of Practical Reason, which centers on the question “whether pure reason of itself alone suffices to determine the will or whether it can be a determining ground of the will only as empirically conditioned” (1788, 12). This is the question whether propositional attitudes are finally significant in virtue of their semantic content—whether, in a different idiom, persons are responsive to reasons as such—or whether, instead, these are explanatorily significant only insofar as they can also be described as particular states or events (brain events, for example) with specific identity criteria. The problem of mental causation particularly motivates the latter conclusion; insofar as things like the truth of a claim cannot cause such physical events as muscle contractions, it must therefore be, as G. F. Schueler says in sketching the guiding intuition here, “the things (‘mental states’) that have these true or false contents that do the explaining” (2003, 58).
Elaborating on the difficulties thus raised by his arguments for the ineliminability of meaning, Boghossian invokes Donald Davidson, who is widely taken to have shown that “if propositional attitude explanations are to rationalize behaviour at all, then they must do so by causing it” (Boghossian 1989, 549). Davidson’s essay “Actions, Reasons, and Causes” (1963) is indeed a locus classicus for the issues here at stake, and we can usefully sharpen some of the questions about mental causation by briefly considering it. Against the many philosophers who have held that “the concept of cause that applies elsewhere cannot apply to the relation between reasons and actions”—that, more particularly, “nonteleological causal explanations do not display the element of justification provided by reasons” (1963, 9)—Davidson argued that “the justifying role of a reason… depends upon the explanatory role, but the converse does not hold. Your stepping on my toes neither explains nor justifies my stepping on your toes unless I believe you stepped on my toes, but the belief alone, true or false, explains my action” (1963, 8; emphasis added).
This amounts to a “nonteleological” sort of explanation in the sense that it brackets (what is arguably integral to the content of a belief) the possible truth of the belief from the subject’s perspective. Insofar as one acts for a reason, one might be said to act in order to realize a not-yet-obtaining state of affairs, where the possible realization relates closely to questions of truth. This represents the telos in virtue of which the action is judged intentional—a point Kant expressed by characterizing the problem of mental causation as that of “how the ought, which has never yet happened, can determine the activity of this being and can be the cause of actions whose effect is an appearance in the sensible world” (1788, 96).3 The point is that what is needed for a solution to the problem of mental causation is an account that makes reference particularly to efficient causes. This is because the events we want to understand in this case are epitomized by bodily movements; insofar as those clearly originate in the central nervous system, and insofar as physical causation is local, any entertainings-of-reasons that can be causally efficacious must therefore finally be similarly “in” specifiable mental states or brain states. So, what is really at issue is (in Schueler’s words) “whether reasons explanations, which on their face always involve goals or purposes… are completely analyzable in terms of efficient causes which make no essential reference to any goals or purposes” (2003, 18).4
Now, whatever Davidson’s arguments show, we can reasonably ask whether they vindicate the idea that reasons are causes in the strong sense thus demanded by the problem of mental causation. Davidson’s argument turns on his analysis of the word “because”: “Central to the relation between a reason and an action it explains is the idea that the agent performed the action because he had the reason” (1963, 9). It won’t do, Davidson urges, to suppose that this relation is satisfactorily accounted for simply by the use of such an expression in the context of justifying an action—what is wanted is an explanation of the justificatory use; “the notion of justification becomes as dark as the notion of reason until we can account for the force of that ‘because’” (1963, 9). Davidson argues that what is explanatorily basic must be a description under which the reason causes the agent’s actions; the justificatory use of the word “because” is intelligible, he says, only insofar as “a primary reason for an action is its cause” (1963, 12).5 Davidson’s claim is thus that an action is reasonably judged to have been done for a reason whenever its cause is somehow the same as what shows up in expressions like “He did it because….”
It is, however, only a very minimal sense of causation that is vindicated by this argument; all we are entitled to conclude from this is that a “primary reason” is a cause in the sense of being somehow concomitant with an intentional action. This amounts to what Schueler characterizes as a promissory note: “The term ‘because’ cites the fact that there is an explanatory story connecting two things, but by itself actually tells none of that story at all” (2003, 14)—an observation, he says, that should have the effect of “demystifying claims about causation such as Davidson’s” (2003, 17). Like the point I raised regarding one of Dharmakīrti’s formulations of the causal relation,6 Schueler’s is a basically Humean point about the limits of causal explanations; to affirm that one’s reason is a cause in Davidson’s sense is to affirm little more than that relations of Hume’s “contiguity and succession” obtain. But that hardly suffices to separate final from efficient causation. Whatever the cogency of Davidson’s arguments, then, they do not obviously support the conclusion that reasons (or even reason-containing mental states) must finally consist in the kinds of things that can function as the efficient causes of physical events.
However, some of Davidson’s formulations do stack the deck in favor of such a view. For example, he expresses one objection to his own position as being to the effect that “primary reasons consist of attitudes and beliefs, which are states or dispositions, not events; therefore they cannot be causes” (1963, 12; emphasis added). But the question of our responsiveness to reasons as such is begged by too quickly identifying propositional attitudes with “states or dispositions”; the latter should, rather, be understood as having whatever semantic content figures in reasons explanations, and the question just is whether it is in terms of their content that reasons might be significant for action.7 To allow, then, that talk of “attitudes and beliefs” is, ipso facto, talk of “states or dispositions” is already to concede the point that Kant took to be most crucially at issue—already to concede that reason “can be a determining ground of the will only as empirically conditioned,” leaving altogether out of account the possibility that “reason of itself alone suffices to determine the will” (1788, 12).8 If it’s really just content-bearing “states or dispositions” that explain the sense in which reasons are explanatorily significant—if it’s only under a different description (as instantiated in mental state X) that a belief can really be thought to do anything—then the semantic content of the reasons “had” by these states may turn out to be epiphenomenal.9 Insofar as everything we want to understand can on such a view be accounted for without any reference to what beliefs are about, it is arguably no longer beliefs that we are talking about at all.
ENTER COMPUTATIONALISM
For Jerry Fodor, too, the problem of mental causation is paramount, and the foregoing issues figure centrally in his work. An influential proponent particularly of the “computational” program of cognitive-scientific research, Fodor is perhaps most widely known for his defense of the language of thought (or “mentalese”) hypothesis, which has its place in a computational account of the mental. The problem of mental causation drives such an account. Computational accounts of the mental, that is, represent a contemporary iteration of the idea that “meaningful” or “contentful” episodes of awareness will also admit of an altogether different description—one in terms of which they can be understood as causally efficacious with respect to the body. Fodor’s approach, then, centrally involves “that part of psychology which concerns itself with the mental causation of behavior” (1980, 277). Indeed, Fodor’s view is that “a cognitive theory seeks to connect the intensional properties of mental states”—the character of mental states, that is, as contentful—“with their causal properties vis-à-vis behavior. Which is, of course, exactly what a theory of the mind ought to do” (1980, 292). Fodor thus affirms that an account such as his is “required by theories of the mental causation of behavior” (1980, 292).
Fodor embraces a broadly empiricist tradition of thought that includes the likes of Locke, Hume, and Berkeley, whom he takes commonly to have advanced the kind of representational theory of mind that he also favors. The salient point of such theories is their aiming to explain “how there could be states that have the semantical and causal properties that propositional attitudes are commonsensically supposed to have” (Fodor 1985, 79). These accounts commonly represent an empiricist answer to Kant’s question (noted above) whether “reason of itself alone suffices to determine the will, or whether it can be a determining ground of the will only as empirically conditioned.” Upholding the latter alternative, representational theories of mind amount to a paradigm case of the view that it is only as empirically conditioned that reasons do what they do; for these are theories, on Fodor’s account, according to which it is the empirically real things (“representations”) that have content that do the explaining.10 Such theories most basically involve, then, some kind of reference to particular mental events or states (representations) that are at once the “bearers” of mental content and themselves the causes of behavior. (Insofar as they are suitable as causes, these representations will also be finally describable as effects—the effects, for example, of environmental stimulus of sensory capacities.)
It is particularly with respect to these modern empiricist accounts, Fodor suggests, that we can understand the revolutionary character of computationalism; for the availability of the computer metaphor enables us to abandon the problematic “associationism” of the earlier accounts and thus to address what had always been their principal weakness. Indeed, Fodor says in this regard that the significance of the computer model represents “the only respect in which contemporary Cognitive Science represents a major advance over the versions of [representational theories of mind] that were its eighteenth- and nineteenth-century predecessors” (1985, 93). As for the “associationist” accounts that are thus superseded by the appeal to computational processes, Fodor rightly thinks it chief among the failures of predecessor approaches to the problem of mental causation that they “failed to produce a credible theory of the [propositional] attitudes. No wonder everybody gave up and turned into a behaviourist” (1985, 93). Earlier iterations of the representational theory of mind may have managed, among other things, to explain how mental states could be causally efficacious, but only at the cost of making the content of such states finally epiphenomenal; “Cognitive Science,” Fodor says, “is the art of getting the cat back in” (1985, 93). The possibility, then, of addressing the problem of mental causation while still saving mental content represents, at the end of the day, the major promise of the computational version of cognitivism.
Chief among the obstacles to “getting the cat back in,” we noted above with reference to Paul Boghossian, is the normativity of mental content. It is difficult to give an account of the mental that makes reference only to things (neuroelectrical events, for example) that can be causally efficacious with respect to the body and thereby to explain the kind of cognitive content—that of an act of believing, for example—regarding which one could be judged right or wrong. The difficulty is that the relations involved in believing something to be true—relations such as being warranted or correct in virtue of another belief—are not obviously reducible to causal relations among particulars. Even, for example, to judge two objects as the same involves, it seems, reference to some additional fact (their being the same) that is not itself either of the particular objects, and that is not obviously seen in the same way these are. To that extent, however, it turns out to be very hard to say what it is in virtue of which one could be right or wrong in so taking things.
The problems here at issue can be gleaned from John Locke’s canonical statement of a representationalist view of the mental:
Since the Mind, in all its Thoughts and Reasonings, hath no other immediate Object but its own Ideas, which it alone does or can contemplate, it is evident, that our Knowledge is only conversant about them. Knowledge then seems to me to be nothing but the perception of the connexion and agreement, or disagreement and repugnancy of any of our Ideas. In this alone it consists.
(1689, 525)
The problem we are scouting here is that of explaining this “connexion and agreement” among ideas; what is it in virtue of which one could be judged right or wrong in thinking “connexion and agreement” to obtain? Can one’s knowing this be finally explained with reference only to particulars, or are “connexion and agreement” constitutively abstract relations?11 Locke is ultimately committed to explaining these in terms of particulars; in thrall to the ocular metaphor that Richard Rorty takes to drive his empiricism, Locke can only say you just see them.12 While this is a problematic answer, it is hard to see what other resources the empiricist has for understanding the relations among concepts and thoughts.
Hence, on Fodor’s view, the question that was never satisfactorily addressed by predecessor proponents of representational theories of mind is what things like believing and inferring could be, “such that thinking the premisses of a valid inference leads, so often and so reliably, to thinking its conclusion” (1985, 91). What could these intentional phenomena be, more particularly, if things like being led to a conclusion will not readily admit of a causal description? “How,” as Vincent Descombes effectively puts the same point, “can a mechanical sequence of mental states also be a chain of reasoning?” (2001, viii–ix). This, finally, is the problem with respect to which computers have been found helpful; computers, as Fodor says, represent “a solution to the problem of mediating between the causal properties of symbols and their semantic properties” (1985, 94). The computer model helps us imagine how the particular states or events that bear mental content might really do the causing, but without our having to deny that those states can also be individuated by their content. Thus, computers surely involve causally describable operations involving information-bearing states, but these operations “respect” the semantic character of the states involved—leave intact, that is, the fact of their being about something—in the sense, at least, that these computational operations can also be taken to represent the steps in an argument.
Consider, in this regard, the operation of a simple calculator.13 Its execution of an algorithm can be described entirely in causal terms: the completion of each instruction causes the machine to pass into a consequent electrical state, which in turn causes successor states as a function of the algorithm. What is remarkable is that these causally describable electrical events at the same time represent a calculation—something, that is, that can also be represented in terms of the steps of an argument. Here, then, is a causally describable sequence of states that seems precisely to be a chain of reasoning. What is thus advanced by the computer analogy is a way to imagine that semantically meaningful phenomena—contentful mental events like entertaining reasons or beliefs—can be explained with reference to (as, for example, really consisting in) causally efficacious states. Computational processes provide a model for understanding how processes can be described at the same time in causal terms (like the conduction of electricity through the circuits of a computer) and in logical or semantic terms (the terms, that is, in which the same process can be understood as an argument).
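The dual description on which the calculator example turns can be sketched in a schematic way (this is an illustrative toy of my own, not anything in Fodor): a routine that manipulates the symbols ‘0’ and ‘1’ purely by their “shapes”—by formal rules that make no reference to what the symbols mean—nonetheless implements addition when its successive states are interpreted numerically.

```python
# Toy illustration (not Fodor's example): a "syntactic" process whose
# rules mention only symbol shapes ('0'/'1'), never numbers or meanings,
# yet whose state transitions also constitute an arithmetic argument.

def half_add(a, b):
    # Purely formal rules: outputs depend only on whether the two
    # symbol-shapes match, not on what they stand for.
    sum_bit = '1' if a != b else '0'
    carry_bit = '1' if a == b == '1' else '0'
    return sum_bit, carry_bit

def add_bits(x, y):
    """Ripple-carry addition over strings of '0'/'1' (rightmost bit least significant)."""
    n = max(len(x), len(y))
    x, y = x.zfill(n), y.zfill(n)
    carry, out = '0', []
    for a, b in zip(reversed(x), reversed(y)):
        s1, c1 = half_add(a, b)          # combine the two input symbols
        s2, c2 = half_add(s1, carry)     # fold in the carried symbol
        carry = '1' if '1' in (c1, c2) else '0'
        out.append(s2)
    if carry == '1':
        out.append('1')
    return ''.join(reversed(out))

# Interpreted semantically, the purely shape-driven process "respects"
# meaning: add_bits('101', '11') yields '1000', i.e., 5 + 3 = 8.
```

The point the sketch is meant to capture is just the one in the text: nothing in `half_add` or `add_bits` “knows” about numbers, yet every transition between states can also be read as a step in a correct calculation.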
Fodor puts the point thus: “I take it that computational processes are both symbolic and formal. They are symbolic because they are defined over representations, and they are formal because they apply to representations in virtue of (roughly) the syntax of the representations” (1980, 279). That is, such processes immediately operate only (I suppose) with respect to electromagnetically represented zeros and ones, which are all the computer need “know” anything about; but despite their thus doing all the computing in these “syntactically” describable terms, computers operate on states that can also be readily understood, “semantically,” as meaning something (for the user of a computer, anyway). Of the “syntactic” description of the terms involved in computational processes, Fodor explains:
What makes syntactic operations a species of formal operations is that being syntactic is a way of not being semantic. Formal operations are the ones that are specified without reference to such semantic properties of representations as, for example, truth, reference, and meaning…. Formal operations apply in terms of the, as it were, shapes of the objects in their domains.
(1980, 279)
More precisely, “the syntax of a symbol is one of its second-order physical properties. To a first approximation, we can think of its syntactic structure as an abstract feature of its (geometric or acoustic) shape” (Fodor 1985, 93; emphasis mine).14
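The thought that formal operations apply to symbols in virtue of their “as it were, shapes” can likewise be given a schematic illustration (again a toy of my own, not Fodor’s): an inference rule stated entirely in terms of string patterns—with no appeal to truth, reference, or meaning—nonetheless derives only conclusions that follow from the premises.

```python
# Toy illustration (not Fodor's own example): modus ponens applied purely
# "in virtue of shape" -- the rule matches string patterns and never
# consults what any sentence means.

def modus_ponens(premises):
    """From shapes of the form 'P' and 'P -> Q', derive 'Q' by pattern alone."""
    derived = set(premises)
    for sentence in premises:
        if ' -> ' in sentence:
            antecedent, consequent = sentence.split(' -> ', 1)
            # The test here is string identity, a fact about "shape":
            if antecedent in premises:
                derived.add(consequent)
    return derived

# Though the rule "sees" only uninterpreted marks, every string it adds
# is true whenever the premises are; the formal operation thus "respects"
# the semantic properties it never mentions.
```

The rule, in other words, is specified without any “semantic properties of representations such as, for example, truth, reference, and meaning,” and yet it tracks a relation (validity) that we characterize semantically.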
On Fodor’s usage, then, “syntactic” relations can be thought of as those obtaining not just (or not finally) among words but between all the meaningful parts of utterances—including those aspects of sentences having to do with what linguists mean by “syntax”—and the nonmeaningful physical factors that are the enabling conditions of any sentence’s being expressed and understood. The gap that is thus meant to be bridged by the appeal to computer models is, in other words, not one between items that are all already understood as “meaning” something (since the possibility of that is just what we want to explain); rather, it is the gap between meaningful items (sentences, thoughts, etc.) and the causally describable particulars (acoustic or printed “shapes”) that are somehow the vehicles of such items.
It is clear, in any case, that Fodor’s “formal” and “syntactic” here are most significantly to be understood as meaning causally describable. Fodor is speaking to this point when he notes—apropos of how “the representational theory of mind and the computational theory of mind merge here”—that, “on the one hand, it’s claimed that psychological states differ in content only if they are relations to type-distinct mental representations” (1980, 292). That is, insofar as any two people experience different cognitive content, it is, on a representational theory of mind, just because they have distinct mental states involving different, subjectively occurrent representations. On the other hand, however, the real point in thus invoking representational states is that only these are the kinds of things that can be thought to enter into causal relations. To that extent, the salient point about representations is that they are events or “states” with spatiotemporal identity criteria (they are particulars) such that they can be said to differ from one another in something like the way that, say, marks on a page differ from one another; cognitive processes, Fodor thus holds, “are constituted by causal interactions among mental representations, that is, among semantically evaluable mental particulars” (2006, 135). Computers, then, complement the representational theory of mind by offering a way to imagine how phenomena like believing something might really “do” what they do at the level of description that involves particulars, since (Fodor says) “computations just are processes in which representations have their causal consequences in virtue of their form” (1980, 292). Significantly, though, it’s finally in terms only of such particulars that the real explaining is done; “only formal properties of mental representations contribute to their type individuation for the purposes of theories of mind/body interaction” (1980, 292).
Phenomena like having a reason, on this view, can thus be understood as interacting with the body only insofar as they can be individuated in terms of something other than their content. To the extent, in other words, that we would understand them in their capacity as causally efficacious with respect to the body, these representational “states” must be held to differ from one another not only in terms of what they are about but also (and primarily) in terms of their “as it were, shapes.” “Or, to put it the other way ‘round,” Fodor concludes, “it’s allowed that mental representations affect behavior in virtue of their content, but it’s maintained that mental representations are distinct in content only if they are also distinct in form” (1980, 292). The point chiefly advanced by this is thus that being contentful is secondary to representations’ having whatever causal properties they do; when it comes to mental representations, their being distinct in form is what does the explaining. While it is thus “allowed” that the content of these representations is significant with respect to behavior, Fodor’s view is that their being contentful must finally be explicable in the same “formal,” “syntactic,” or causal terms in which we understand them also to be the efficient causes of bodily actions.
NARROW CONTENT AND METHODOLOGICAL SOLIPSISM: FODOR’S BRIEF FOR INTERIORITY
It’s in the context of an account like the foregoing that Fodor develops his commitment to the related ideas of “narrow content” and “methodological solipsism.” These are parts of a case for thinking there are states of mental representing that are contentful (that are somehow about something), but not in anything like the abstract way in which linguistic items are about what they mean. What is wanted is thus a kind of “aboutness” that is somehow inextricably related to a mental event’s character as causally efficacious—a place, as it were, where the intentional properties of a mental state (its being contentful) come together with its causal properties. Here it is not beside the point to recall Dharmakīrti’s consideration of the problem of how to distinguish those causes that are at the same time the objects of awareness from whatever other causes figure in the production of cognition.15 Dharmakīrti’s problem, too, was to get the intentional and the causal descriptions of mental events to come together. (We saw, in this regard, that Dharmakīrti rather question-beggingly answered that it is just that cause whose image an awareness bears that should be reckoned as what is cognitively apprehended.) Fodor is similarly after an explanation that comes to rest with mental events that are somehow about the same things that will admit of causal description.
Interestingly, we will see that Fodor is after something like the same thing when it comes to his philosophy of language; with regard to linguistic reference, he also advocates an account that is finally grounded in linguistic items—paradigmatically, words on the occasion of their first learning—that are somehow “about” their causes. To the extent that the issues thus dovetail here, we can return again to Boghossian’s reflections on the rule-following considerations and say that Fodor’s approach resembles what Boghossian characterizes as an “optimality” version of a dispositional theory. The idea behind such an account, on Boghossian’s view of the matter, “is that there is a certain set of circumstances—call them ‘optimality conditions’—under which subjects are, for one or another reason, incapable of mistaken judgements” (1989, 537).
On one important version of such an account, “optimal conditions are the conditions under which the meaning of the expression was first acquired” (Boghossian 1989, 537). Thus, someone’s meaning something by any thought or utterance might be thought to be finally fixed by some paradigm instance—one that can be described in terms of the ostension of particulars. Such an account was famously suggested by St. Augustine:
When they (my elders) named some object, and accordingly moved towards something, I saw this and I grasped that the thing was called by the sound they uttered when they meant to point it out. Their intention was shewn by their bodily movements, as it were the natural language of all peoples…. Thus, as I heard words repeatedly used in their proper places in various sentences, I gradually learnt to understand what objects they signified….
(CONFESSIONS 1.8, AS QUOTED AND TRANSLATED IN WITTGENSTEIN 1958, 2)16
Insofar as one’s meaning anything by words so learned can, then, ultimately be understood in terms of memory of some particular occasion, the intelligibility of this does not require reference to universals.
Whatever the specifics of the “optimality conditions” on offer, the upshot is a dispositional reconstruction of meaning facts, such that (for example) “for Neil to mean horse by ‘horse’ is for Neil to be disposed to call only horses ‘horse,’ when conditions are optimal” (Boghossian 1989, 538). Significantly, these “optimal conditions” must be such that they will admit of a finally causal description; this is, Boghossian says, really the idea that “there is a set of naturalistically specifiable conditions under which [a subject] cannot make mistakes in the identification of presented terms” (1989, 538; emphasis added). Not only, then, will this account go through only if the specified conditions really are such as to preclude the possibility of error, but “the conditions must be specified purely naturalistically, without the use of any semantic or intentional materials—otherwise, the theory will have assumed the very properties it was supposed to provide a reconstruction of” (1989, 538). Insofar, that is, as the point of these “dispositional” accounts is to explain how we mean things by thoughts and utterances, it will not do for the explanation itself to presuppose that we already understand the idea of being meaningful.
Fodor’s “narrow content” is meant to fit the same bill; what is picked out by Fodor’s notion, too, is “naturalistically specifiable conditions” meant to ground or explain awareness’s being, in general, contentful.17 The main point to understand about narrow content, then, is that it’s to be understood in such a way that it will admit of description both in terms of its content and in terms of its causal properties. The contrastive category of “broad content,” on this account, picks out the level at which we entertain discursively elaborated judgments (like “it’s sunny outside”)—judgments, that is, which are intelligible only with reference to the conditions under which they would be true. This is, we have begun to appreciate, a level of description that involves normativity; the main point to be made about that in the present context is that what it is in virtue of which one could be right or wrong about such judgments surely includes a great many things (facts having to do with its being sunny outside, facts about conventions for the use of the word “sunny,” etc.) that are external to a subject. What Fodor aims to explain, though, is how we can get judgments like “it’s sunny outside” out of the kinds of things—photostimulation of retinal nerve endings, for example—that can themselves serve as the efficient causes of behavior. Precisely to the extent, however, that it will thus admit of causal description, this level of contentfulness must be “narrow” in the sense that it is explicable in terms that can be individuated independently of factors external to the subject. “The intrinsic nature of inner states and events,” as John McDowell says of this kind of approach, must be “a matter of their position in an internal network of causal potentialities, in principle within the reach of an explanatory theory that would not need to advert to relations between the individual and the external world” (1998, 250).18
Chief among the considerations recommending such a view is that the brain states that must (if they are to be the efficient causes of bodily actions) ultimately have whatever content we’re aware of are themselves inside the head. It seems, to that extent, that it must be possible to individuate the content of mental events in terms that will somehow admit of its thus being “in” one’s head. But what can be “in” the head does not include, for example, the various factors out there in the world that would have to obtain in order for one’s thought to be true. While experience may be, at least phenomenologically, contentful in the “broad” sense that consists in its seeming to represent true things about a real world, its thus being contentful must, on Fodor’s account, finally be explicable in terms of proximal factors. While Fodor says in a related vein that “it’s what the agent has in mind that causes his behavior” (1980, 290), it should be clear that the pressure to posit something like “narrow” content really comes from the considerations that recommend thinking of mental content as being finally “in” a brain.
Yet Fodor does not take himself to be an eliminative physicalist; his is not, that is, the idea that the commonsense view of the mental may one day be altogether superseded by an impersonal, scientific description.19 Indeed, he thinks a computational account such as his represents the only way to be a realist about propositional attitudes, the only way to retain the idea of thought as constitutively contentful. On Fodor’s account, however, one counts as a realist about propositional attitudes only insofar as she holds both that “there are mental states whose occurrences and interactions cause behaviour,” and that “these same causally efficacious mental states are also semantically evaluable” (Fodor 1985, 78). While the second claim thus affirms (against the eliminativist) that the semantic content of mental states cannot be explained away, the salient point for our purposes is Fodor’s affirmation, with the first condition, that a propositional attitude’s causing any behavior would be the only way to think it “really” in play at all. Fodor’s claim here thus reflects, in effect, something very much like Dharmakīrti’s idea of causal efficacy as the criterion of the real (though it’s especially clear in Fodor’s case that it is particularly efficient-causal efficacy that makes the difference). To the extent that such a criterion is taken as axiomatic, it seems that one only could be a “realist” about propositional attitudes by showing these finally to consist in causally efficacious particulars.
To frame the issue of content, in this way, as centrally concerning the question of its causal efficacy is effectively to advance what has been characterized as a causal argument for the necessity of positing narrow content. This can be understood as a transcendental argument of sorts: if one is going to be a realist (of Fodor’s kind) about the propositional attitudes, it must be the case that these can be explained in terms of something like “narrow content”; for it is a condition of the possibility of cognition’s being both contentful and causally efficacious that mental content be finally intelligible only in terms internal to a subject, since otherwise there will be no way to think of our mental content as causally efficacious with respect to bodily events.20 Insofar, that is, as one holds that a reason’s causing some behavior is the only way to think it “really” in play at all, there is pressure to think that such things must finally be explicable only in terms of narrow content—the kind of “content,” that is, that can be in a brain.
If, as thus stressed by the causal argument, the kind of content here in view is “narrow” in the sense that it is “in” a brain, it is also meant to be narrow in an epistemic sense; narrow content can also be characterized, that is, in terms of how things seem to a subject.21 Here we encounter Fodor’s “methodological solipsism,” which can be said to figure in an epistemic argument for narrow content. The epistemic implications of Fodor’s commitment to narrow content are evident in his contention that “if mental processes are formal, they have access only to the formal properties of such representations of the environment as the senses provide. Hence, they have no access to the semantic properties of such representations, including the property of being true, of having referents, or, indeed, the property of being representations of the environment” (1980, 283).
The idea that it is only as narrow that mental content really “does” anything recommends, in other words, a bracketing of questions of truth and reference; Fodor’s stance must be “solipsistic,” then, in the sense that attention to narrow content is, ipso facto, attention to something that (unlike the truth conditions of a belief about the environment) is somehow “in” a subject. Hilary Putnam has said in this regard that a “methodological solipsist” is “a non-realist or ‘verificationist’ who agrees that truth is to be understood as in some way related to rational acceptability, but who holds that all justification is ultimately in terms of experiences that each of us has a private knowledge of” (1981, 121–122). That is, an account is methodologically solipsistic if it initially brackets questions of truth, instead taking what irrefragably seems to a subject to be the case as foundational for understanding as such; the very idea of any belief’s possibly being true is finally to be explained, then, based on this “seeming,” rather than conversely—a version of the idea (familiar from Locke and others) that while we can coherently doubt whether things in the world are as represented in any cognition, we cannot doubt that that is how it seems.
What I want to ask with respect to Fodor’s elaboration of a similar idea is whether there is any reason to think that Fodor’s “methodological solipsism”—his epistemic case for narrow content—yields the same conclusion as the causal argument so far canvassed. Is there any reason to think that facts about what irrefragably seems to a subject to be the case put us more immediately in the vicinity of brain events than other things that might figure in the content of a thought? As with Dharmakīrti’s answer to the question of how to distinguish the objects of awareness from whatever other causes figure in its production, what is needed here is for both of these arguments (the causal and the epistemic arguments for narrow content) to pick out the same thing. These lines of argument effectively dovetail, then, only insofar as it is presupposed that we are, in virtue of picking out epistemically “narrow” content, ipso facto picking out the mental state that has that content. I think, however, that there is a real question whether the two different lines of argument succeed in picking out the same things; even if we grant that the causal argument for narrow content is cogent and that the epistemic considerations that recommend Fodor’s methodological solipsism are compelling, it may be that these arguments fail to converge on the same “mental states.”
In an epistemic key, Fodor’s point is that if we are interested in understanding the “mental causation of behavior,” it stands to reason that we must attend to what the subject takes herself to believe (or desire, intend, etc.)—which of course may be quite independent of whether the thought is true. Fodor’s argument in this regard is effectively the same one that Donald Davidson took to recommend the conclusion that the justifying role of a reason depends upon the explanatory role—that, as Davidson put it, “your stepping on my toes neither explains nor justifies my stepping on your toes unless I believe you stepped on my toes, but the belief alone, true or false, explains my action” (1963, 8). To the extent, that is, that we are interested in what caused me to step on your toes, my belief that retaliation is called for does the trick, quite independently of whether or not I’m right in so believing. Expressing the same point in terms suggested by Frege, Fodor says that “when we articulate the generalizations in virtue of which behavior is contingent upon mental states, it is typically an opaque construal of the mental state attributions that does the work; for example, it’s a construal under which believing that a is F is logically independent from believing b is F, even in the case where a = b” (1980, 286).
Thus casting his epistemic argument for narrow content in the semantic terms of “referential opacity,” Fodor emphasizes that it is “typically under an opaque construal that attributions of propositional attitudes to organisms enter into explanations of their behavior” (1980, 286). To attribute beliefs under a referentially “opaque” construal, on this usage, is to describe the contents thereof only in terms accessible to the subject—only, that is, in terms of the mode of presentation to a subject, which may of course fail to correspond to how things really are. On a referentially transparent construal, in contrast, propositional attitudes are individuated with regard to their referents, regardless of whether the subject happens to know anything about that. So, for example, it makes all the difference for our understanding of the tragedy of Oedipus that we attribute to Oedipus the belief “I want to marry Jocasta” under a referentially opaque construal; under a referentially transparent construal, that belief would be recognized as equivalent to “I want to marry my mother,” which, tragically, Oedipus was not aware that he thus intended (Fodor 1980, 287).22
It is under a referentially transparent construal, then, that we can see that it’s really true that Oedipus thus wished to marry his mother, but it’s the referentially opaque construal that explains (because it causes) his actions. “Ontologically,” Fodor thus concludes, “transparent readings are stronger than opaque ones; for example, the former license existential inferences, which the latter do not. But psychologically, opaque readings are stronger than transparent ones; they tell us more about the character of the mental causes of behavior” (1980, 297). Bringing this epistemic argument for narrow content together with the terms that figure in the causal argument, Fodor says that “narrow psychological states are those individuated in light of the formality condition; viz. without reference to such semantic properties as truth and reference” (1980, 297). Not only is “narrow content” what is captured by referentially opaque attributions of belief; it is also, Fodor here further asserts, the same thing captured by the description of a mental state in strictly formal or “syntactic” terms.
As we saw with reference to Davidson, though, there is no reason to think that this line of argument, even if rightly showing a sense in which “reasons” can be “causes,” shows us that these are also (under their description, presumably, as content-bearing brain states) the efficient causes of things like muscle contractions. In order, however, for Fodor’s “methodological solipsism” to recommend his idea of “narrow content,” what is picked out by propositional attitudes on referentially opaque construals has to be somehow the same thing that is picked out by a description of causally efficacious narrow content; only if the same thing can be described both ways will we have learned that the state or event individuated by a “referentially opaque” construal of mental content is just the state that causes behavior, in the strong sense required by Fodor’s preoccupation with the problem of mental causation.
Fodor thus exploits the idea of referential opacity—which amounts to the adoption of a first-person epistemic perspective on any subject’s beliefs—to recommend the view, already preferred on grounds having to do with the problem of mental causation, that only things internal to the subject’s head can be causes of behavior. But in fact, we are not entitled to take Fodor’s “formally” described mental content as similarly representing a first-person perspective, since however intimately brain events may be involved in our having of experience, surely it is not brain events that our experiences are of. It is not obvious, then, that the “methodologically solipsistic” identification of a mental cause—an identification essentially like the one disclosed by Davidson’s analysis of the word because—necessarily tells us anything whatsoever about the kinds of efficient causes (viz., brain events) that Fodor means to pick out with the causal argument for narrow content. Insofar as causation is still individuated by way of an intentional level of description—a “referentially opaque” description, to be sure, but referential (and therefore semantic) nevertheless—this does not necessarily tell us anything about causes as those are individuated at a physicalist level of description. Or rather, whether there is any relation between these is just what we want to know—and exploiting the sense of “causation” that is in play at a semantic level of description (the sense in play when one says “I did it because…”) does not help us with that.23
We are thus entitled to ask whether it makes sense to think that what Fodor really intends by “narrow content” counts as contentful at all.24 The failure of the two foregoing lines of argument to converge on the same causes reflects a fundamental tension in Fodor’s approach. As John McDowell says of a similar program, it “generates the appearance that we can find (narrow) content-bearing states in the interior considered by itself. But the idea looks deceptive. If we are not concerned with the point of view of the cognitive system itself (if, indeed, we conceive it in such a way that it has no point of view), there is no justification for regarding the enterprise as any kind of phenomenology at all” (1998, 256n).25 To the extent, in other words, that Fodor is finally concerned to pick out only such causal or “formal” properties of representations as the senses provide—only such properties as can be individuated without reference even to their being representations of the environment—he cannot claim also to be talking about beliefs. Even on a referentially opaque construal, beliefs are, phenomenologically, about things like “states of affairs”; they are not “about” their own proximate causes.
The divergence of the causal and the epistemic arguments for narrow content thus reflects, on Lynne Rudder Baker’s account of the problem, a dilemma: “On the one hand, if we take narrow content to be the product of input analysis, then the wrong things get semantically evaluated.” In other words, if we take “narrow content” to consist essentially in, say, the brain states precipitated by environmental stimulus (“the intermediate outputs of perceptual systems,” as Baker says), we haven’t identified anything at all like what a subject would take to be the content of her belief. “On the other hand,” Baker continues, “if we take narrow content to be the product of higher-level processing, then we remove the psychological warrant for construing narrow content in terms of symbols denoting [nothing but] phenomenologically accessible properties” (1986, 67). That is, to bring in such factors as would make the content of a belief recognizable to the subject thereof is, ipso facto, already to bring in the world. It thus turns out, as we noted in chapter 1 with reference to Dharmakīrti, to be difficult to distinguish those causes of any cognition that are at the same time what that cognition is about from whatever other causes (e.g., properly functioning sensory capacities) are appropriately thought to figure in causing the awareness; the same problem, we now see, figures centrally in Fodor’s cognitive-scientific philosophy of mind, as well.
THE “LANGUAGE OF THOUGHT”: AN ACCOUNT OF LANGUAGE ITSELF AS CAUSALLY DESCRIBABLE
To ask whether mental states with “narrow content” really count as contentful is effectively to ask whether Fodor’s way of reconciling the two levels of description (intentional and causal) really counts as realism about propositional attitudes. The problem with Fodor’s view is that all of the explanatory work is done here by mental representations only insofar as they are “formally” describable; it is only in their capacity as having causally efficacious “shapes” that representations really cause anything. All the computer metaphor gets us, then, is a way to think of “formally” (syntactically, causally) described representations as also “meaningful”; it remains the case on this account, however, that it’s not in their capacity as meaningful that we are to understand them as doing what they do. Despite the promise of the appeal to computationalism, the character of mental events as meaning something may after all be epiphenomenal on Fodor’s account.
Fodor is not unaware of this objection, which is one to the effect (he says) that “it is the computational roles of mental states, and not their content, that are doing all the work in psychological explanation” (1994, 49–50). In that case, he allows, it may be that “the attachment to an intentional, as opposed to computational, level of psychological explanation is merely sentimental” (1994, 50). While eliminative physicalists like Paul Churchland are willing to embrace just such a conclusion, Fodor appreciates the “well-known worry about narrow content that it tends to be a little suicidal” (1994, 49); to hold, that is, that mental content in general must be explicable with reference to Fodor’s causally describable narrow content is arguably to do away with the very level of description in terms of which one’s making this very argument makes any sense.26
In trying to dispel this objection (here anticipated in a work written some time after the earlier works to which we have hitherto referred), Fodor backs away somewhat from his commitment to narrow content—not, to be sure, so far as to disavow the idea, only far enough that he no longer thinks his position can only be defended with reference to that.27 Instead, he now thinks he can defend his account of mental causation even with reference to the kind of “broad” content that is necessarily involved in thinking one’s own position true. While I must confess that I’m not altogether sure how his argument in this regard is meant to meet the epiphenomenalism objection, it is clear, at least, that it’s crucial to his answer that we “suppose… that some sort of causal account of broad content is correct” (1994, 52); it is, in other words, only insofar as broad content, too, will admit of causal explanation that he can concede its significance.
On a causal account of broad content, he says, anyone having propositional attitudes regarding (say) what we would identify as “water,” regardless of the description under which they experience it, “must have modes of presentation that trace back, in the right way, to interactions with water. My point is that, qua water-believers, they needn’t have anything else in common: Their shared causal connection to water has left its mark on each of them” (1994, 52). This is a view according to which believing that P can thus be understood in terms of “being in states that are caused by, and hence bear information about, the fact that P” (1994, 53). Fodor’s project, he thus thinks, is still viable even if mental content is understood as “broad” (in the sense of essentially consisting in representations of the environment)—but only insofar as broad content, too, is causally related to the environment.
On this alternative development of his position, “content is broad, the metaphysics of content is externalist (e.g., causal/informational)”—and, Fodor immediately continues, “modes of presentation are sentences of Mentalese” (1994, 52). In backing away from narrow content and embracing thought’s relatedness to the world, then, Fodor here appeals (with his reference to “Mentalese”) to the idea of a “language of thought.” In his 1975 book of that title, Fodor thus sketches the idea here invoked:
To have a certain propositional attitude is to be in a certain relation to an internal representation. That is, for each of the (typically infinitely many) propositional attitudes that an organism can entertain, there exist an internal representation and a relation such that being in that relation to that representation is nomologically necessary and sufficient for (or nomologically identical to) having the propositional attitude. The least that an empirically adequate cognitive psychology is therefore required to do is to specify, for each propositional attitude, the internal representation and the relation which, in this sense, correspond [sic] to it.
(1975, 198)
In the context, then, of what we can recognize as Fodor’s representationalist theory of mind—a theory according to which a phenomenon like believing something is to be explained in terms of a subject’s relation to an internal representation—the “language of thought” represents something like the system of rules regulating the well-formed “sentences” of such mental relating.28 Among the salient characteristics of this “language” is that it can be exhaustively described in terms of unique particulars—in terms, e.g., of brain states, the “syntax” of whose relations is here imagined as language-like.29
While it’s perhaps possible to imagine something like the “syntax” of a language of thought—to imagine, for example, that there are structural regularities in neurophysiological events that impose some constraints on what we can represent or that are isomorphic with what we “think”—the hard thing, just as in Fodor’s development of the computer metaphor, is to explain how the regularities so described can mean or represent or be about anything. The problem, once again, is to get a semantic level of description into the picture. Fodor appreciates as much, allowing of the representational theory of mind he more generally aims to advance that it “needs some semantic story to tell”; which semantic story to tell, he says, is “going to be the issue in mental representation theory for the foreseeable future” (1985, 96). Indeed, on Fodor’s view, this is pretty much the whole shootin’ match; “the problem of the intentionality of the mental is largely—perhaps exhaustively—the problem of the semanticity of mental representations. But of the semanticity of mental representations we have, as things now stand, no adequate account” (1985, 99).
Fodor’s “language of thought” figures centrally in his attempt to rectify that situation. Here, let’s recall Augustine’s account of language acquisition as exemplifying the sort of “optimal” conditions invoked by some philosophers to explain what fixes content. Augustine, we saw, said that the first learning of a language consists in watching one’s elders naming indicated objects and grasping thereby that “the thing was called by the sound they uttered when they meant to point it out”; their intentions were reflected “by their bodily movements, as it were the natural language of all peoples.” Wittgenstein’s Philosophical Investigations famously begins with a consideration of Augustine’s account of this, which so preoccupies Wittgenstein that the lengthy discussion that follows represents one of the most sustained engagements with any thinker explicitly addressed in the Investigations. For our purposes, the insight Wittgenstein most compellingly presses against Augustine’s picture is this: “Augustine describes the learning of human language as if the child came into a strange country and did not understand the language of the country; that is, as if it already had a language, only not this one. Or again: as if the child could already think, only not yet speak. And ‘think’ would here mean something like ‘talk to itself’” (1958, 15–16).
Among Wittgenstein’s points, I take it, is that knowing a language involves something more—indeed, much more—than knowing (what can at least arguably be taught by ostension) “the names of things.” It involves, more basically, the very idea that there could be names of things—that by any act of speech or ostension, one could mean what one thus refers to. What we really want to understand when we ask for an account of an infant’s language acquisition is how the child acquires the very idea of meaning something and in what that consists; to that extent, an account like Augustine’s begs the question most centrally at issue, presupposing as it does that the idea of meaning something is already intelligible to the language learner and that she therefore requires only to learn which sounds “mean” which things.
What, then, are we to say about the relations involved in anything’s meaning something else? Insofar as he is preoccupied with the question of mental causation, Fodor is inclined to say that it is only in virtue of causal relations that anything can be thought finally real; the relations involved in meaning anything raise, however, what Fodor calls the disjunction problem. This is the problem that “it’s just not true that Normally [sic] caused intentional states ipso facto mean whatever causes them” (1990, 89)—which relates, again, to the problem recurrently noted since we saw Dharmakīrti address it in chapter 1. Thus, on Dharmakīrti’s eminently causal account of perception, all manner of things (properly functioning sense capacities, for example) are among the causes of any cognition—but these are not among the things that we say are thus perceived.30 A causal theory of perception thus requires that there be some principled way to explain which of the relevant causes of any perception is at the same time what is perceived; “there has to be some way,” as Fodor similarly allows in the present case, “of picking out semantically relevant causal relations from all the other kinds of causal relations that the tokens of a symbol can enter into” (1990, 91). The problem, then, is again that of getting causal and intentional descriptions (here, of language) together.
Providing such an account with regard to perception is far from straightforward; the explicitly semantic version of the problem is even more difficult. Here, “what the disjunction problem is really about deep down is the difference between meaning and information” (1990, 90). The latter, for Fodor, denotes mental content that is, as it were, efficiently precipitated by its causes; “information is tied to etiology in a way that meaning isn’t.” Mental content that is, in contrast, meaningful is relatively unconstrained; symbols are meaningful just insofar as they are somehow about something other (or something more) than the particulars that cause them, a fact that Fodor characterizes in terms of the greater “robustness” that characterizes meaning. In contrast to “information,” “the meaning of a symbol is one of the things that all of its tokens have in common, however they may happen to be caused. All ‘cow’ tokens mean cow; if they didn’t, they wouldn’t be ‘cow’ tokens” (1990, 90). The problem is how to tell a finally causal story about meaning while allowing that what thus distinguishes the “meaning” relation just is its “robustness,” or apparent lack of causal constraint.
With regard to this problem, Fodor’s proposal is that “‘cow’ means cow and not cat or cow or cat because there being cat-caused ‘cow’ tokens depends on there being cow-caused ‘cow’ tokens, but not the other way around. ‘Cow’ means cow because but that ‘cow’ tokens carry information about cows, they wouldn’t carry information about anything” (1990, 90). The account thus concisely stated centrally involves an appeal to asymmetric dependence, a notion Fodor unpacks with the example of paging someone by name: “you have to invoke the practice of naming to specify the practice of paging. So the practice of paging is parasitic on the practice [of] naming; you couldn’t have the former but that you could have the latter. But not, I suppose, vice versa?… so I take it to be plausible that paging is asymmetrically dependent on naming” (1990, 96–97). This notion is then invoked with respect to the Wittgensteinian example of bringing slabs in response to the command “bring me a slab” (cf. Wittgenstein 1958, §20): “it’s plausible that the cluster of practices that center around bringing things when they’re called for is asymmetrically dependent on the cluster of practices that fix the extensions of our predicates” (Fodor 1990, 97; emphasis mine). Any particular utterance of “slab,” in other words, is intelligible as the practice it is only relative to earlier practices fixing the use of the word—and insofar as these earlier practices are causally describable, the later uses, too, are rightly considered to be grounded in a causal description.
The point, then, of Fodor’s account of what it is in virtue of which “cow” means what it does is this: “All that’s required for ‘cow’ to mean cow, according to the present account, is that some ‘cow’ tokens should be caused by (more precisely, that they should carry information about) cows, and that noncow-caused ‘cow’ tokens should depend asymmetrically on these” (1990, 91). The claim is that while not all particular utterances of any word will demonstrably be causally relatable to some token of the type denoted—the fact that they will not is just what is identified in terms of the “disjunction problem”—it will always at least be the case that some such tokens are so relatable. Fodor’s point is that while the intelligibility of the causally describable tokens does not depend on there being tokens that are not so describable, the intelligibility of the latter does depend on there being some instances of the former; in order that there be any cases of rightly calling particular bovine critters cows, there must be some cases of this that will admit of causal description. What is thus claimed is that there is a causal chain linking any use of a term to some first use that itself causally links the term to its referent—and the whole point of the exercise is that “you can say what asymmetric dependence is without resort to intentional or semantic idiom” (1990, 92).31 Here, then, the project of “naturalizing” intentionality could come to rest.
Now, in order to specify—in nonsemantic terms—what it is upon which all correct uses of a word asymmetrically depend, it becomes necessary to have a nonsemantic account of those “practices” that, Fodor said, initially “fix the extensions of our predicates.” What is properly basic on the account thus proposed is the initial application of terms: “Some of our linguistic practices presuppose some of our others, and it’s plausible that practices of applying terms (names to their bearers, predicates to things in their extensions) are at the bottom of the pile” (1990, 97). It’s unclear whether we are to understand this initial “application” in terms of someone’s first assigning a name to anything or (more likely) of someone’s first learning the name so assigned. (We will see in chapter 4 that there is a similar ambiguity in Dharmakīrti’s account and that the problem may be conceptually the same regardless.) Either way, the point is that the process is understood to be causally describable—in terms, indeed, rather like those imagined by Augustine.
Putting the point in terms of the above-described sense of “information,” Fodor says “the idea is that, although tokens of ‘slab’ that request slabs carry no information about slabs (if anything, they carry information about wants; viz., the information that a slab is wanted), still, some tokens of ‘slab’ presumably carry information about slabs (in particular, the tokens that are used to predicate slabhood of slabs do)” (1990, 97; emphasis added). The initial “tokens that are used to predicate slabhood of slabs,” he thus suggests, will admit of causal description; insofar as all subsequent understandings of the idea of being a slab can then be taken asymmetrically to depend on these baptismal tokenings—“but for there being tokens of ‘slab’ that carry information about slabs, I couldn’t get a slab by using ‘slab’ to call for one”—meaning has been grounded in causation. “My ‘slab’ requests are thus, in a certain sense, causally dependent on slabs even though there are no slabs in their causal histories” (1990, 97–98).
This amounts to just the Augustinian idea that so preoccupied Wittgenstein; what is distinctive, that is, about the tokens that initially “predicate slabhood of slabs” is that they can plausibly be described in terms of the ostension of perceptible particulars—in terms (as Augustine said) of “bodily movements” that represent “as it were the natural language of all peoples.” Fodor’s approach here would seem, then, to be vulnerable to the critique ventured by Wittgenstein; recognizing as much, Fodor precisely identifies the problem that remains in terms that echo Wittgenstein’s objection: “as it stands none of this is of any use to a reductionist. For, in these examples, we’ve been construing robustness by appeal to asymmetric dependences among linguistic practices. And linguistic practices depend on linguistic policies.” The problem is that “being in pursuit of a policy is being in an intentional state,” so how “could asymmetric dependence among linguistic practices help with the naturalization problem?” (1990, 98). The intelligibility of any of these baptismal tokenings as a linguistic act—the understanding of any particular utterance together with ostension as naming the thing indicated—already presupposes our knowing what it means to mean something; insofar, however, as that is just what we were trying to explain, the question is begged.
It is, finally, Fodor’s recognition of this problem that drives his argument for the necessity of positing a “language of thought.” Fodor’s argument, most basically, is that the point here developed opens an intolerable regress that can only be terminated by positing something, in a sense, that both is and is not a “language.” What is needed is again something that can be described both in terms of its semantic content (this is the sense in which it is like a language) and in terms of causally relatable particulars (this is the sense in which it is not). Here is a succinct statement of the argument:
Learning a language (including, of course, a first language) involves learning what the predicates of the language mean. Learning what the predicates of a language mean involves learning a determination of the extension of these predicates. Learning a determination of the extension of the predicates involves learning that they fall under certain rules (i.e., truth rules). But one cannot learn that P falls under R unless one has a language in which P and R can be represented. So one cannot learn a language unless one has a language. In particular, one cannot learn a first language unless one already has a system capable of representing the predicates in that language and their extensions. And, on pain of circularity, that system cannot be the language that is being learned. But first languages are learned. Hence, at least some cognitive operations are carried out in languages other than natural languages.
(FODOR 1975, 63–64)32
That is, there must be some kind of “language” other than languages like English and Sanskrit and Tibetan, since it could only be “in” some other language that one transacts the business of first learning any one of these. Contra Wittgenstein, then, it must actually be the case that a language-acquiring child does “already have a language, only not this one.” (We will see in chapter 6 that Mīmāṃsakas pressed a strikingly similar argument to very different ends.)
Fodor thus affirms that Wittgenstein’s characterization of Augustine’s account is “transparently absurd,” urging instead that the argument just sketched “suggests, on the contrary, that Augustine was precisely and demonstrably right and that seeing that he was is prerequisite to any serious attempts to understand how first languages are learned” (1975, 64). Augustine was right, in particular, to think there must be some naturalistically specifiable conditions (“as it were the natural language of all peoples”) upon which all instances of meaning, more generally, are asymmetrically dependent. The idea, then, that Fodor invokes—when, allowing that “broad” content may be compatible with his approach to mental causation, he says (we saw above at p. 67) that “modes of presentation are sentences of Mentalese”—is that a fundamentally causal account of thought’s relatedness to the world is possible insofar as language itself can be so described. Sentences of Mentalese, it thus seems we are to understand, are the language-like neurophysiological precipitates of perceptual encounters with the environment—encounters that can be causally described and upon which all other instances of meaningful thought are asymmetrically dependent. These primitive modes of presentation are like “sentences” in that their well-formedness is a function of “rules” that can be understood on the model of syntax—and, as well, in that they are the bearers of “information” regarding what causes them. It is, Fodor has thus argued, only in virtue of there being this kind of causally describable “content”—the kind that can be described in terms of things like photostimulation of retinal nerve endings or in terms of the demonstrative indication of perceptible particulars—that we can have the kind of meaningful content reflected in overt judgments.
Fodor’s solution to the disjunction problem—and, more generally, to the question of naturalistically specifiable conditions for our meaning anything—thus involves the idea that all instances of meaningful thought are asymmetrically dependent upon causally describable episodes that are just intrinsically language-like. This can be understood as amounting to a concession that, in effect, we don’t know how a mental event can mean or represent or be about some other thing—that, in other words, if there seems to be no way to get the semantic character of the mental into a naturalistically described picture, it must just be part of that picture from the beginning. Characterizing the argument we’ve just rehearsed, Daniel Dennett says to similar effect that “some elegant, generative, indefinitely extendable principles of representation must be responsible” for the brain’s having “solved the problem of combinatorial explosion,” but “we have only one model of such a representation system: a human language. So the argument for a language of thought comes down to this: what else could it be?” (1987, 35). Insofar, that is, as the intentionality of the mental cannot be characterized without reference to certain features of language, the would-be naturalizer of mental content can simply hold that the relevant features of language must therefore just intrinsically characterize the structure and function of the brain.
This move, we saw from the beginning of this chapter, is motivated by the problem of mental causation; the foregoing is proposed as an account on which the kinds of things that explain how we can mean anything are at the same time the kinds of things—things, like brain events, with spatiotemporal identity criteria—that can cause movements of the body. If Fodor has in more recent years been willing to account for mental content as constitutively related to the world, the salient point remains his insistence that it is only as causally related that it can “really” be so. Whatever else it’s meant to do, Fodor’s language of thought thus undergirds an approach according to which a propositional attitude’s causing behaviors is the only way to think it real. To the extent, however, that the “language”-like features of the brain are thus invoked chiefly to explain how mental states that are about things can also be the causes of things like muscle contractions, their being “about” their contents may remain finally epiphenomenal; it is, once again, only the causal level of description that has explanatory significance here. We still have, to that extent, the problems that go with thinking that reasons are explanatorily significant only insofar as they can be described in terms of something other than their semantic content. Mental content is here finally explained, moreover, by a redescription in causal terms of what is supposedly the same thing—but it remains unclear whether we are entitled to think such an alternative description really picks out the same thing we have in view when we talk of reasons and beliefs.
CONCLUSION: DOES DENNETT’S APPROACH REPRESENT AN ALTERNATIVE?
We can usefully conclude our survey of the computational iteration of cognitivism by looking briefly at some proposals from Daniel Dennett, whose project differs from Fodor’s in ways that can help us bring more sharply into relief some of the basic problems with characteristically cognitivist approaches. Much influenced by cognitive-scientific research in artificial intelligence and related fields, Dennett is inclined to embrace Fodor’s “language of thought” hypothesis but recognizes it as distinct from the position he (Dennett) most wants to defend—a position that can be represented as an alternative way to be a realist about propositional attitudes. Thus, in contrast to the focus on efficient causation that can be said to characterize Fodor’s project, Dennett’s approach can be taken to allow for something like a teleological level of description; reflecting as much, Dennett says that “while belief is a perfectly objective phenomenon (that apparently makes me a realist), it can be discerned only from the point of view of one who adopts a certain predictive strategy” (1987, 15).33
The idea here introduced is that intentionality is best understood in terms of what Dennett calls the “intentional stance.” Attributing the intentional stance to (nota bene) an object consists, he says, in “treating the object whose behavior you want to predict as a rational agent with beliefs and desires and other mental states exhibiting what Brentano and others call intentionality” (1987, 15). That is, we invoke “intentional stance” descriptions whenever we usefully treat some object or creature as though it entertained the kinds of discursive thoughts in terms of which its patterned behaviors are reasonably thought of as purposeful. My “as though” locution should not be taken to suggest that Dennett denies the reality of the patterns that come into view by assuming the intentional stance; it is just Dennett’s point to stress that “intentional stance description yields an objective, real pattern in the world” (1987, 34). If we imagine, for example, extraterrestrial observers experiencing highly complex rational behaviors—those, for example, constituting the commerce of a stock exchange—as consisting in nothing more than interactions among fathomlessly many subatomic particles, we would be right to judge them as having “failed to see a real pattern in the world they are observing” (1987, 26).
Distinguishing the intentional stance idea from Fodor’s language of thought hypothesis, Dennett emphasizes that the latter represents only one possible way (albeit one he thinks probably correct) to explain how and why intentional stance descriptions work. Given Fodor’s hypothesis, the patterned behaviors of some objects can be successfully predicted by attributing the intentional stance just insofar as those behaviors are “produced by another real pattern roughly isomorphic to it within the brains of intelligent creatures” (1987, 34). The language of thought hypothesis thus has it that insofar as there are real behavioral patterns of the sort that would lead us to characterize an object’s behavior as purposeful, there must be corresponding patterns in the object’s internal states (in, e.g., its brain events). While Dennett thinks this is probably right, he urges that one does not need to accept that view in order to hold the view he chiefly wants to defend, which is that an intentional level of description picks out patterns that only emerge given the adoption of this stance.
Nevertheless, it’s chief among Dennett’s points to urge that intentional characterizations do not require that we invoke the kinds of universals that arguably figure in accounts of the semantic content of intentional states. With respect, for example, to the operations of a computer that is “playing” chess, he says that “doubts about whether the chess-playing computer really has beliefs and desires are misplaced; for the definition of intentional systems I have given does not say that intentional systems really have beliefs and desires, but that one can explain and predict their behavior by ascribing beliefs and desires to them.” Precisely how one imagines what is thus ascribed, he says, “makes no difference to the nature of the calculation one makes on the basis of the ascriptions” (1981, 7). Like Fodor, Dennett thus takes his bearings from the eminently computationalist idea that it is the patterned “syntax” that really matters and that beliefs therefore need not be individuated in terms of their semantic content. The claim, rather, is that “all there is to being a true believer is being a system whose behavior is reliably predictable via the intentional strategy, and hence all there is to really and truly believing that p (for any proposition p) is being an intentional system for which p occurs as a belief in the best (most predictive) interpretation” (1987, 29).
Central to this proposal, I think, is the idea that intentionality can thus be described from a third-person perspective; that is the real point in understanding the intentional stance as usefully attributed to any of the various objects whose behaviors might usefully be predicted in terms thereof. Being rational, Dennett thus says, “is being intentional[,] is being the object of a certain stance” (1981, 271; emphasis mine).34 By thus characterizing intentionality, Dennett aims to avoid invoking anything that will not admit of explanation; “whenever we stop in our explanations at the intentional level we have left over an unexplained instance of intelligence or rationality” (1981, 12). Reference, in any account of a person’s intentionally describable actions, to the content of her beliefs is problematic, then, just insofar as “rationality is being taken for granted, and in this way shows us where a theory is incomplete” (1981, 12). Reference to the content of a subject’s beliefs does not, that is, explain anything; indeed, this is precisely the point where, for Dennett as for Fodor, explanation is called for. The idea that the explanation must be essentially “third-personal” reflects Dennett’s confidence that this is a finally empirical matter, such as will admit of a scientific answer; his idea of “intentional systems” is invoked as “a bridge connecting the intentional domain (which includes our ‘common-sense’ world of persons and actions, game theory, and the ‘neural signals’ of the biologist) to the non-intentional domain of the physical sciences” (1981, 22).
Significantly, though, Dennett allows that anything can be an intentional system “only in relation to the strategies of someone who is trying to explain and predict its behavior” (1981, 3–4; emphasis mine). It’s revealing to ask, in this regard, for whom the system in question is thus the “object” of a “stance”; more compellingly, what is the person for whom some system is thus an “object” of the intentional stance doing in attributing that stance? The point is that attributing the intentional stance to anything—regarding any object as though it were acting as we act when we act purposefully—is itself an intentional idea par excellence. The intentional stance idea does not, to that extent, explain anything at all about intentionality; for we understand what it means to attribute the “intentional stance” to anything only insofar as we already have a first-personal experience of acting based on reasons. Insofar as it is thus intelligible only with reference to our own experienced intentionality, this idea cannot explain the very thing we supposedly want to understand.
Dennett seems to acknowledge as much when he notes an asymmetry that is crucial to his thought experiment about alternative descriptions of the workings of a stock market: namely, “the unavoidability of the intentional stance with regard to oneself and one’s fellow intelligent beings” (1987, 27). I do not see, however, that he addresses the significance of this concession. What this unavoidability brings into view is the extent to which intentionality may not, in principle, be exhaustively describable with reference only to a third-person perspective. There is, as G. F. Schueler puts the point I’m after, “a ‘non-theoretical’ element at the heart of reasons explanations, namely the way I understand my own case when I act for a reason” (2003, 160). The “furthest down” we can go, that is, in thinking about what is meant by the kind of constitutively intentional action that will admit of demands for justification, is to understand it in terms of what I am doing when I experience myself as acting for a reason.35 This, I think, is effectively the point John Haugeland makes in characterizing “the ultimate limitation” of the intentional stance idea as being that “neither knowledge nor understanding is possible for a system that is itself incapable of adopting a stance” (1993, 67).
This is the problem, finally, with what Dennett allows is his “apparently shallow and instrumentalistic criterion of belief” (1987, 29).36 However instrumentally useful it is to make reference to belief, the intentional level of description is not, on Dennett’s account, to be reckoned as picking out anything that is (in Dharmakīrti’s idiom) “ultimately real.” As Lynne Rudder Baker puts the same point, Dennett is “explicitly committed to… the ‘stance-dependence’ of features attributed from the intentional stance” (1987, 154). That is, an intentional level of description picks out phenomena that are, on Dennett’s account, real only insofar as they are conveniently assumed for purposes of attributing the intentional stance; this is in contrast to the ontology of items invoked in the kinds of “physical stance” descriptions that ultimately ground Dennett’s explanations. Insofar, however, as the very idea of the intentional stance is intelligible only given our own understanding of what we are doing in attributing this stance, it cannot be right to say that its reality depends only upon the predictive strategies of others; it must, rather, always already be integral to what we are trying to understand—indeed, integral to our trying to understand it.
But it seems, to that extent, that an intentional level of description is not so much instrumental as constitutive—in which case, the conceptual work done by the contrast between Dennett’s levels of explanation is empty. Insofar as an intentional level of description is necessarily presupposed even by the proponent of an account on which that is only “instrumental,” one cannot coherently claim to offer an explanation of the intentional level in terms of a putatively privileged level of description.37 Despite Dennett’s differences from Fodor, we thus see here, too, something of how difficult it is to advance a naturalistic (read: nonintentional) explanation of intentionality—how difficult it is, in particular, to reconcile any intentional description of the mental (a description of the mental as contentful) with causal descriptions thereof. While Dennett’s project may, then, be taken to represent an alternative to Fodor’s as a way to be realist about the propositional attitudes, both approaches essentially privilege causal explanation; it is finally to this extent that they are similarly problematic.
We have seen, then, that Fodor’s appropriation of the computer metaphor is guided by a preoccupation with the problem of mental causation; what the example of computation most compellingly offers is a way to imagine how a sequence of causally related states can at the same time represent the steps in a chain of reasoning. Elaborating this insight, Fodor develops the idea that the “contentful” character of thought is finally explicable in terms of narrow content—in terms, that is, of a level of description that can be individuated with reference only to factors somehow internal to a subject. Fodor takes this view to be supported by the same considerations that make the problem of mental causation pressing in the first place—considerations that recommend the conclusion that thought’s content must be inside the head. He also takes it, though, to be supported by the kinds of considerations that Donald Davidson marshaled to argue that reasons explanations should finally be reckoned as causal explanations—epistemic considerations, that is, having to do with what a subject believes to be the case, which are logically independent of the objective states of affairs that a subject takes her beliefs to be about. There is, however, a real question whether these two lines of argument (the causal and the epistemic arguments for narrow content) converge on the same conclusion; arguing that they do not, I suggested that, as we initially saw regarding Davidson, there is no reason to think, given the minimal sense in which reasons may be causes, that reasons must therefore consist in the kinds of efficient causes that can be thought to impel movements of the body.
Moreover, we have seen that the computational model problematically recommends the view that beliefs can (as Fodor’s methodological solipsism is meant to suggest) finally be individuated without any reference to what they are about; if Fodor’s computational processes involve factors that are both “syntactically” and “semantically” evaluable, it is nevertheless only in terms of the former level of description that any explanatory work is done. To that extent, and despite the very basis of the appeal to computers, reference to beliefs arguably remains epiphenomenal on the computational iteration of cognitivism. Noting Fodor’s concession that this was a significant concern, we turned to his attempt to ground the contentfulness of thought in what he figuratively calls a “language of thought”—in the “syntax” of thought’s relations to internal representations that, though intrinsically related to the environment, are still reckoned to be real only as causally related. But even when Fodor tries to allow that “broad content” may figure in the explanation of behavior, he remains committed to the idea that it’s only under a causal description that this could be so; we are, to that extent, still faced with the problem that the content of beliefs may be epiphenomenal. As long as it’s supposed that beliefs do what they do only insofar as they can be described in terms of something other than their semantic content—as long, in Kant’s terms, as it’s supposed that reason finally figures in accounts of what we are “only as empirically conditioned”—this will be a problem.
Dennett, in contrast to Fodor, advances a basically computationalist approach that nevertheless aims to allow for the reality of the patterns that emerge on something like a teleological level of description. Despite the significance of this gesture of accommodation with regard to semantic content, Dennett’s “intentional stance” idea nevertheless relegates reason and belief to merely instrumental status; reference to these is instrumentally useful, that is, in predicting the seemingly purposeful behaviors of certain objects, but the whole point of the idea is to circumvent the question whether these “objects” really have the beliefs we find it useful to attribute to them. The very idea of intentionality as a possible “stance” is itself intelligible, however, only given our own first-personal acquaintance with what it is to have the kinds of beliefs thus attributed; to that extent, what we are doing in attributing the intentional stance to any object is already exhibiting intentionality, which turns out therefore to be presupposed by Dennett’s proposed explanation thereof. Dennett’s distinction between the instrumentally real and the really real parts of his picture thus turns out to be incapable of doing the work it needs to do, and the intentional stance idea finally gives us no traction on the problem of intentionality.
We will see in chapters 4 through 6 that many of the essentials of the foregoing picture apply, mutatis mutandis, to Dharmakīrti’s project as well. Thus, having already seen in chapter 1 something of the extent to which Dharmakīrti privileges causal explanation, we will see in chapter 4 that Dharmakīrti’s apoha doctrine represents a full-fledged attempt to explain conceptual mental content in just such terms; we will see in chapter 5 that that account relates closely to his doctrine of svasaṃvitti, which can be understood as in important respects similar to Fodor’s methodological solipsism; and we will see in chapter 6 that Brahmanical philosophers of the Mīmāṃsā school and Buddhist philosophers of the Madhyamaka school pressed, with regard to these commitments of Dharmakīrti, arguments to the effect that Dharmakīrti cannot make his own case for these without helping himself to precisely the kinds of things he claims to explain thereby. Before returning, however, to the project of Dharmakīrti, we will first try to get a bit more clear, in the next chapter, on just what it is we are talking about when we ask about intentionality, and about why it’s reasonably thought that an intentional level of description may constitutively be the sort of thing that resists such explanations as Fodor and Dennett have proposed. More particularly, we turn now to the elaboration of a basically Kantian story of why and how reason itself can be taken to epitomize what Brentano characterized as “reference to a content, direction toward an object.”