NOTES
THE INTRODUCTION TO
Philosophical Foundations of Neuroscience
 
The following is the unaltered text of the preface of Philosophical Foundations of Neuroscience, save for the omission of the last two paragraphs and the elimination of cross-references, which have been replaced, where necessary, by notes. (Subsequent references to the book are flagged PFN.)
 
1.  Methodological objections to these distinctions are examined in the sequel and, in further detail, in PFN, chapter 14.
2.  Chapter 1 of PFN accordingly begins with a historical survey of the early development of neuroscience.
3.  Chapter 2 of PFN is accordingly dedicated to a critical scrutiny of their conceptual commitments.
4.  PFN §3.10.
5.  See below, in the excerpt from chapter 3 of PFN. The original chapter is much longer than the excerpt here supplied and the argument correspondingly more elaborate.
6.  Reductionism is discussed in PFN chapter 13.
7.  In PFN chapter 14.
8.  See PFN §14.3.
9.  See PFN chapters 1 and 2.
10.  Examples that arguably render the research futile are scrutinized in PFN §6.31, which examines mental imagery, and PFN §8.2, which investigates voluntary movement.
11.  Examples are given in the discussions of memory in PFN §§5.21–5.22 and of emotions and appetites in PFN §7.1.
12.  We address methodological qualms in detail in PFN chapter 3, §3 (this volume) and in PFN chapter 14.
 
AN EXCERPT FROM CHAPTER 3
 
These pages consist of the unaltered text of PFN, pp. 68–80, save for cross-references, which have been relegated to notes where necessary.
 
1.  F. Crick, The Astonishing Hypothesis (Touchstone Books, London, 1995), pp. 30, 32f., 57.
2.  G. Edelman, Bright Air, Brilliant Fire (Penguin Books, London, 1994), pp. 109f., 130.
3.  C. Blakemore, Mechanics of the Mind (Cambridge University Press, Cambridge, 1977), p. 91.
4.  J.Z. Young, Programs of the Brain (Oxford University Press, Oxford, 1978), p. 119.
5.  A. Damasio, Descartes’ Error—Emotion, Reason and the Human Brain (Papermac, London, 1996), p. 173.
6.  B. Libet, ‘Unconscious cerebral initiative and the role of conscious will in voluntary action’, The Behavioral and Brain Sciences (1985) 8, p. 536.
7.  J.P. Frisby, Seeing: Illusion, Brain and Mind (Oxford University Press, Oxford, 1980), pp. 8f. It is striking here that the misleading philosophical idiom associated with the Cartesian and empiricist traditions, namely talk of the ‘outside’ world, has been transferred from the mind to the brain. It was misleading because it purported to contrast an inside ‘world of consciousness’ with an outside ‘world of matter’. But this is confused. The mind is not a kind of place, and what is idiomatically said to be in the mind is not thereby spatially located (cp. ‘in the story’). Hence too, the world (which is not ‘mere matter’, but also living beings) is not spatially ‘outside’ the mind. The contrast between what is in the brain and what is outside the brain is, of course, perfectly literal and unobjectionable. What is objectionable is the claim that there are ‘symbolic descriptions’ in the brain.
8.  R.L. Gregory, ‘The Confounded Eye’, in R.L. Gregory and E.H. Gombrich eds. Illusion in Nature and Art (Duckworth, London, 1973), p. 50.
9.  D. Marr, Vision: A Computational Investigation into the Human Representation and Processing of Visual Information (Freeman, San Francisco, 1980), p. 3 (our italics).
10.  P.N. Johnson-Laird, ‘How could consciousness arise from the computations of the brain?’ in C. Blakemore and S. Greenfield eds. Mindwaves (Blackwell, Oxford, 1987), p. 257.
11.  Susan Greenfield, explaining to her television audiences the achievements of positron emission tomography, announces with wonder that for the first time it is possible to see thoughts. Semir Zeki informs the Fellows of the Royal Society that the new millennium belongs to neurobiology, which will, among other things, solve the age-old problems of philosophy (see S. Zeki, ‘Splendours and miseries of the brain’, Phil. Trans. R. Soc. Lond. B (1999), 354, 2054). See PFN §14.42.
12.  L. Wittgenstein, Philosophical Investigations (Blackwell, Oxford, 1953), §281 (see also §§282–4, 357–61). The thought fundamental to this remark was developed by A.J.P. Kenny, ‘The Homunculus Fallacy’ (1971), repr. in his The Legacy of Wittgenstein (Blackwell, Oxford, 1984), pp. 125–36. For the detailed interpretation of Wittgenstein’s observation, see P.M.S. Hacker, Wittgenstein: Meaning and Mind, Volume 3 of an Analytical Commentary on the Philosophical Investigations (Blackwell, Oxford, 1990), Exegesis §§281–4, 357–61 and the essay entitled ‘Men, Minds and Machines’, which explores some of the ramifications of Wittgenstein’s insight. As is evident from [PFN] Chapter 1, he was anticipated in this by Aristotle (DA 408b2–15).
13.  Kenny (ibid., p. 125) uses the term ‘homunculus fallacy’ to signify the conceptual mistake in question. Though picturesque, it may, as he admits, be misleading, since the mistake is not simply that of ascribing psychological predicates to an imaginary homunculus in the head. In our view, the term ‘mereological fallacy’ is more apt. It should, however, be noted that the error in question is not merely the fallacy of ascribing to a part predicates that apply only to a whole, but is a special case of this more general confusion. As Kenny points out, the misapplication of a predicate is, strictly speaking, not a fallacy, since it is not a form of invalid reasoning, but it leads to fallacies (ibid., pp. 135f.). To be sure, this mereological confusion is common among psychologists as well as neuroscientists.
14.  Comparable mereological principles apply to inanimate objects and some of their properties. From the fact that a car is fast it does not follow that its carburettor is fast, and from the fact that a clock tells the time accurately, it does not follow that its great wheel tells the time accurately.
15.  But note that when my hand hurts, I am in pain, not my hand. And when you hurt my hand, you hurt me. Verbs of sensation (unlike verbs of perception) apply to parts of the body, i.e. our body is sensitive and its parts may hurt, itch, throb, etc. But the corresponding verb phrases incorporating nominals, e.g. ‘have a pain (an itch, a throbbing sensation)’ are predicable only of the person, not of his parts (in which the sensation is located).
16.  See Shimon Ullman, ‘Tacit Assumptions in the Computational Study of Vision’, in A. Gorea ed. Representations of Vision, Trends and Tacit Assumptions in Vision Research (Cambridge University Press, Cambridge, 1991), pp. 314f. for this move. He limits his discussion to the use (or, in our view, misuse) of such terms as ‘representation’ and ‘symbolic representation’.
17.  The phrase is Richard Gregory’s, see ‘The Confounded Eye’ in R.L. Gregory and E.H. Gombrich eds. Illusion in Nature and Art (Duckworth, London, 1973), p. 51.
18.  See C. Blakemore, ‘Understanding Images in the Brain’, in H. Barlow, C. Blakemore and M. Weston-Smith eds. Images and Understanding (Cambridge University Press, Cambridge, 1990), pp. 257–83.
19.  S. Zeki, ‘Abstraction and Idealism’, Nature 404 (April 2000), p. 547.
20.  J.Z. Young, Programs of the Brain (Oxford University Press, Oxford, 1978), p. 192.
21.  Brenda Milner, Larry Squire and Eric Kandel, ‘Cognitive Neuroscience and the Study of Memory’, Neuron 20 (1998), p. 450.
22.  For detailed discussion of this questionable claim, see PFN §5.22.
23.  Ullman, ibid., pp. 314f.
24.  Marr, ibid., p. 20.
25.  Marr, ibid., p. 21.
26.  Marr, ibid.
27.  For further criticisms of Marr’s computational account of vision, see PFN §4.24.
28.  Frisby, ibid., p. 8.
29.  Roger Sperry, ‘Lateral Specialization in the Surgically Separated Hemispheres’, in F.O. Schmitt and F.G. Worden eds. The Neurosciences: Third Study Program (MIT Press, Cambridge, Mass., 1974), p. 11 (our italics). For detailed examination of these forms of description, see PFN §14.3.
30.  Blakemore, ‘Understanding Images in the Brain’, p. 265. It should be noted that what is needed in order to recognize the order in the brain is not a set of rules, but merely a set of regular correlations. A rule, unlike a mere regularity, is a standard of conduct, a norm of correctness against which behaviour can be judged to be right or wrong, correct or incorrect.
31.  J.Z. Young, Programs of the Brain (Oxford University Press, Oxford, 1978), p. 52.
32.  Blakemore, ibid., pp. 265–7.
33.  J.Z. Young, Programs of the Brain, p. 11.
34.  Just how confusing the failure to distinguish a rule from a regularity, and the normative from the causal, can be is evident in Blakemore’s comments on the Penfield and Rasmussen diagram of the motor ‘homunculus’. Blakemore remarks on ‘the way in which the jaws and hands are vastly over-represented’ (‘Understanding Images in the Brain’, p. 266, in the long explanatory note to Fig. 17.6); but that would make sense only if we were talking of a map with a misleading method of projection (in this sense we speak of the relative distortions of the Mercator (cylindrical) projection). But since all the cartoon drawing represents is the relative number of cells causally responsible for certain functions, nothing is, or could be, ‘over-represented’. For, to be sure, Blakemore does not mean that there are more cells in the brain causally correlated with the jaws and the hands than there ought to be!
 
AN EXCERPT FROM CHAPTER 10
 
1.  Ned Block, ‘Qualia’, in S. Guttenplan ed. Blackwell Companion to the Philosophy of Mind (Blackwell, Oxford, 1994), p. 514.
2.  J.R. Searle, ‘Consciousness’, Annual Review of Neuroscience 23 (2000), p. 560.
3.  Searle, ibid., p. 561.
4.  D.J. Chalmers, The Conscious Mind (Oxford University Press, Oxford, 1996), p. 4.
5.  Chalmers, The Conscious Mind, p. 10.
6.  I. Glynn, An Anatomy of Thought, p. 392.
7.  A. Damasio, The Feeling of What Happens, p. 9. Note that there is here an unargued assumption that colour and sound are not properties of objects but of sense-impressions.
8.  G. Edelman and G. Tononi, Consciousness—How Matter Becomes Imagination, p. 157.
9.  E. Lormand, ‘Consciousness’, in Routledge Encyclopedia of Philosophy (Routledge, London, 1998), vol. 2, p. 581.
10.  Searle, The Mystery of Consciousness, p. xiv.
11.  T. Nagel, ‘What is it like to be a bat?’, repr. in Mortal Questions (Cambridge University Press, Cambridge, 1979), p. 166.
12.  Nagel, ibid., p. 170n.
13.  Nagel, ibid., p. 170.
14.  M. Davies and G.W. Humphreys eds. Consciousness (Blackwell, Oxford, 1993), p. 9.
15.  Edelman and Tononi, Consciousness—How Matter Becomes Imagination, p. 11.
16.  Chalmers, The Conscious Mind, p. 4.
17.  Cf. Searle, The Mystery of Consciousness, p. 201.
 
AN EXCERPT FROM CHAPTER 14
 
1.  See, for example, the discussion of voluntary movements in PFN §8.2.
2.  See, for example, the discussion of mental imagery in PFN §6.31.
3.  See, for example, PFN §14.3.
4.  Discussed in PFN §4.23.
5.  As argued in PFN §§6.3–6.31.
6.  See PFN §2.3.
 
NEUROSCIENCE AND PHILOSOPHY
 
1.  There are important criticisms of the use in this way of terms such as “storage” and “memory”; see Bennett and Hacker, Philosophical Foundations of Neuroscience (Oxford: Blackwell, 2003), pp. 158–71.
2.  Professor Dennett suggests in his note 15 that at the APA meeting Bennett expressed “utter dismay with the attention-getting hypotheses and models of today’s cognitive neuroscientists and made it clear that he thought it was all incomprehensible. With an informant like Bennett, it is no wonder that Hacker was unable to find anything of value in cognitive neuroscience.” He also suggests that I am clearly caught up in that “mutual disrespect” that occurs between synaptic neuroscientists and cognitive neuroscientists. This is not correct. First, David Marr is held up as a cognitive neuroscientist of genius in textbooks on the subject (see Gazzaniga, Ivry, and Mangun 2002:597); I have published papers on synaptic network theory in the spirit of Marr’s work and do not see this in any way as showing illogical hostility to the cognitive neurosciences (see, for example, Bennett, Gibson, and Robinson 1994). A forthcoming book by Hacker and myself, History of Cognitive Neuroscience, would not have been written if we had been caught up in irrational hostility to cognitive neuroscience. Second, I did not claim at the APA meeting that the “models of today’s cognitive neuroscientists” are “all incomprehensible.” Rather, I stressed the extreme complexity of the biology being modeled and the resultant paucity of our biological knowledge. This makes it very difficult to build models that illuminate synaptic network functions. The examples offered to support this view are given in the second section of this chapter. However, I did go on to say that it seems strange that such networks and collections of networks should be said to “see,” “remember,” etc., that is, possess the psychological attributes of human beings (see the third section).
 
PHILOSOPHY AS NAIVE ANTHROPOLOGY
 
1.  My purpose in Content and Consciousness, in 1969, was “to set out the conceptual background against which the whole story must be told, to determine the constraints within which any satisfactory theory must evolve (p. ix) … [to develop] the notion of a distinct mode of discourse, the language of the mind, which we ordinarily use to describe and explain our mental experiences, and which can be related only indirectly to the mode of discourse in which science is formulated” (p. x).
2.  Although earlier theorists—e.g., Freud—spoke of folk psychology with a somewhat different meaning, I believe I was the first, in “Three Kinds of Intentional Psychology” (1978), to propose its use as the name for what Hacker and Bennett call “ordinary psychological description.” They insist that this is not a theory, as do I.
3.  See my discussion of this in “A Cure for the Common Code,” in Brainstorms (1978) and, more recently, in “Intentional Laws and Computational Psychology” (section 5 of “Back from the Drawing Board”) in Dahlbom, ed., Dennett and His Critics, 1993.
4.  The list is long. See, in addition to the work cited in the earlier footnotes, my critiques of work on imagery, qualia, introspection, and pain in Brainstorms. I am not the only theorist whose work anticipatory to their own is overlooked by them. For instance, in their discussion of mental imagery, they reinvent a variety of Zenon Pylyshyn’s points without realizing it. Bennett and Hacker are not the first conceptual analysts to frequent these waters, and most, if not quite all, of their points have been aired before and duly considered in literature they do not cite. I found nothing new in their book.
5.  Their appendix devoted to attacking my views is one long sneer, a collection of silly misreadings, ending with the following: “If our arguments hold, then Dennett’s theories of intentionality and of consciousness make no contribution to the philosophical clarification of intentionality or of consciousness. Nor do they provide guidelines for neuroscientific research or neuroscientific understanding” (p. 435). But there are no arguments, only declarations of “incoherence.” At the APA meeting during which this essay was presented, Hacker responded with more of the same. It used to be, in the Oxford of the sixties, that a delicate shudder of incomprehension stood in for an argument. Those days have passed. My advice to Hacker: If you find these issues incomprehensible, try harder. You’ve hardly begun your education in cognitive science.
6.  Hornsby 2000. Hacker’s obliviousness to my distinction cannot be attributed to myopia; in addition to Hornsby’s work, it has also been discussed at length by other Oxford philosophers: e.g., Davies 2000; Hurley, Synthese, 2001; and Bermudez, “Nonconceptual Content: From Perceptual Experience to Subpersonal Computational States,” Mind and Language, 1995.
7.  See also “Conditions of Personhood” in Brainstorms.
8.  See also the discussion of levels of explanation in Consciousness Explained (1991).
9.  At the APA meeting at which this essay was presented, Searle did not get around to commenting on this matter, having a surfeit of objections to lodge against Bennett and Hacker.
10.  For a philosopher who eschews truth and falsehood as the touchstone of philosophical propositions, Hacker is remarkably free with unargued bald assertions to the effect that so-and-so is mistaken, that such-and-such is wrong, and the like. These obiter dicta are hard to interpret without the supposition that they are intended to be true (as contrasted with false). Perhaps we are to understand that only a tiny fraction of his propositions, the specifically philosophical propositions, “antecede” truth and falsehood while the vast majority of his sentences are what they appear to be: assertions that aim at truth. And as such, presumably, they are subject to empirical confirmation and disconfirmation.
11.  In Sweet Dreams: Philosophical Obstacles to a Science of Consciousness (2005), I describe some strains of contemporary philosophy of mind as naive aprioristic autoanthropology (pp. 31–35). Hacker’s work strikes me as a paradigm case of this.
12.  Notice that I am not saying that autoanthropology is always a foolish or bootless endeavor; I’m just saying that it is an empirical inquiry that yields results—when it is done right—about the intuitions that the investigators discover in themselves, and the implications of those intuitions. These can be useful fruits of inquiry, but it is a further matter to say under what conditions any of these implications should be taken seriously as guides to the truth on any topic. See Sweet Dreams for more on this.
13.  The Claim of Reason: Wittgenstein, Skepticism, Morality, and Tragedy (1979; 2d ed. 1999).
14.  Can a philosopher like Hacker be right even if not aiming at the truth?
15.  Presumably Bennett, a distinguished neuroscientist, has played informant to Hacker’s anthropologist, but then how could I explain Hacker’s almost total insensitivity to the subtleties in the patois (and the models and the discoveries) of cognitive science? Has Hacker chosen the wrong informant? Perhaps Bennett’s research in neuroscience has been at the level of the synapse, and people who work at that subneuronal level are approximately as far from the disciplines of cognitive science as molecular biologists are from field ethologists. There is not much communication between such distant enterprises, and even under the best of circumstances there is much miscommunication—and a fair amount of mutual disrespect, sad to say. I can recall a distinguished lab director opening a workshop with the following remark: “In our lab we have a saying: if you work on one neuron, that’s neuroscience; if you work on two neurons, that’s psychology.” He didn’t mean it as a compliment. Choosing an unsympathetic informant is, of course, a recipe for anthropological disaster. (Added after the APA meeting:) Bennett confirmed this surmise in his opening remarks; after reviewing his career of research on the synapse, he expressed his utter dismay with the attention-getting hypotheses and models of today’s cognitive neuroscientists and made it clear that he thought it was all incomprehensible. With an informant like Bennett, it is no wonder that Hacker was unable to find anything of value in cognitive neuroscience.
16.  See my Content and Consciousness, p. 183.
17.  To take just one instance, when Hacker deplores my “barbaric nominal ‘aboutness’” (p. 422) and insists that “opioid receptors are no more about opioids than cats are about dogs or ducks are about drakes” (p. 423), he is of course dead right: the elegant relation between opioids and opioid receptors isn’t fully fledged aboutness (sorry for the barbarism), it is mere proto-aboutness (ouch!), but that’s just the sort of property one might treasure in a mere part of some mereological sum which (properly organized) could exhibit bona fide, echt, philosophically sound, paradigmatic … intentionality.
18.  In Hacker’s narrow sense.
19.  This has been an oft-recurring theme in critical work in cognitive science. Classic papers go back to William Woods’s “What’s in a Link?” (in Bobrow and Collins, Representation and Understanding, 1975) and Drew McDermott’s “Artificial Intelligence Meets Natural Stupidity,” in Haugeland, Mind Design (1981), through Ulrich Neisser’s Cognition and Reality (1975) and Rodney Brooks’s “Intelligence Without Representation,” Artificial Intelligence (1991). They continue to this day, including contributions by philosophers who have done their homework and know what the details of the issues are.
20.  Hacker and Bennett say: “It would be misleading, but otherwise innocuous, to speak of maps in the brain when what is meant is that certain features of the visual field can be mapped on to the firings of groups of cells in the ‘visual’ striate cortex. But then one cannot go on to say, as Young does, that the brain makes use of its maps in formulating its hypotheses about what is visible” (p. 77). But that is just what makes talking about maps perspicuous: that the brain does make use of them as maps. Otherwise, indeed, there would be no point. And that is why Kosslyn’s pointing to the visible patterns of excitation on the cortex during imagery is utterly inconclusive about the nature of the processes underlying what we call, at the personal level, visual imagery. See Pylyshyn’s recent target article in BBS (April 2002) and my commentary, “Does Your Brain Use the Images on It, and If So, How?”
21.  “Philosophers should not find themselves having to abandon pet theories about the nature of consciousness in the face of scientific evidence. They should have no pet theories, since they should not be propounding empirical theories that are subject to empirical confirmation and disconfirmation in the first place. Their business is with concepts, not with empirical judgments; it is with the forms of thought, not with its content; it is with what is logically possible, not with what is empirically actual; with what does and does not make sense, not with what is and what is not true” (p. 404). It is this blinkered vision of the philosopher’s proper business that permits Hacker to miss the mark so egregiously when he sets out to criticize the scientists.
22.  For an example of such a type of explanation, see my simplified explanation of how Shakey the robot tells the boxes from the pyramids (a “personal level” talent in a robot) by (subpersonally) making line drawings of its retinal images and then using its line semantics program to identify the telltale features of boxes, in Consciousness Explained.
23.  Bennett and Hacker’s “Appendix 1: Daniel Dennett” does not deserve a detailed reply, given its frequent misreadings of passages quoted out of context and its apparently willful omission of any discussion of the passages where I specifically defend against the misreadings they trot out, as already noted. I cannot resist noting, however, that they fall for the creationist canard they presume will forestall any explanations of biological features in terms of what I call the design stance: “Evolution has not designed anything—Darwin’s achievement was to displace explanation in terms of design by evolutionary explanations” (p. 425). They apparently do not understand how evolutionary explanation works.
 
PUTTING CONSCIOUSNESS BACK IN THE BRAIN
 
I am indebted to Romelia Drager, Jennifer Hudin, and Dagmar Searle for comments on earlier drafts of this article.
1.  For example, John R. Searle, The Rediscovery of the Mind (Cambridge: MIT Press, 1992).
2.  For possible counterevidence to this claim, see Christof Koch’s discussion of “the Halle Berry neuron,” e.g., New York Times, July 5, 2005.
3.  John R. Searle, Rationality in Action (Cambridge: MIT Press, 2001).
 
THE CONCEPTUAL PRESUPPOSITIONS OF COGNITIVE NEUROSCIENCE
 
1.  M. R. Bennett and P. M. S. Hacker, Philosophical Foundations of Neuroscience (Oxford: Blackwell, 2003); page references to this book will be flagged PFN.
2.  Professor Searle asserts that a conceptual result is significant only as a part of a general theory (p. 122). If by “a general theory” he means an overall account of a conceptual network, rather than mere piecemeal results, we agree. Our denial that our general accounts are theoretical is a denial that they are logically on the same level as scientific theories. They are descriptions, not hypotheses; they are not confirmable or refutable by experiment; they are not hypothetico-deductive and their purpose is neither to predict nor to offer causal explanations; they do not involve idealizations in the sense in which the sciences do (e.g., the notion of a point mass in Newtonian mechanics) and they do not approximate to empirical facts within agreed margins of error; there is no discovery of new entities and no hypothesizing entities for explanatory purposes.
3.  Professor Dennett seemed to have difficulties with this thought. In his criticisms (p. 79), he quoted selectively from our book: “Conceptual questions antecede matters of truth and falsehood …” (PFN 2, see p. 4, this volume) and “What truth and falsity is to science, sense and nonsense is to philosophy” (PFN 6, see p. 12, this volume). From this he drew the conclusion that in our view philosophy is not concerned with truth at all. However, he omitted the sequel to the first sentence:
They are questions concerning our forms of representation, not questions concerning the truth or falsehood of empirical statements. These forms are presupposed by true (and false) scientific statements, and by correct (and incorrect) scientific theories. They determine not what is empirically true or false, but rather what does and does not make sense.
(PFN 2, see p. 4, this volume; emphasis added)
 
He likewise omitted the observation on the facing page that neuroscience is discovering much concerning the neural foundations of human powers, “but its discoveries in no way affect the conceptual truth that these powers and their exercise … are attributes of human beings, not of their parts” (PFN 3, see p. 6, this volume; emphasis added). As is patent, it is our view that philosophy is concerned with conceptual truths and that conceptual truths determine what does and does not make sense.
4.  Professor Paul Churchland proposes, as a consideration against our view, that “since Quine, the bulk of the philosophical profession has been inclined to say ‘no’” to the suggestion that there are “necessary truths, constitutive of meanings, that are forever beyond empirical or factual refutation.” “Cleansing Science,” Inquiry 48 (2005): 474. We doubt whether he has done a social survey (do most philosophers really think that truths of arithmetic are subject to empirical refutation together with any empirical theory in which they are embedded?) and we are surprised that a philosopher should think that a head count is a criterion of truth.
5.  For canonical criticism of Quine on analyticity, see P. F. Strawson and H. P. Grice, “In Defense of a Dogma,” Philosophical Review 1956. For more recent, meticulous criticism of Quine’s general position, see H.-J. Glock, Quine and Davidson on Language, Thought, and Reality (Cambridge: Cambridge University Press, 2003). For the contrasts between Quine and Wittgenstein, see P. M. S. Hacker, Wittgenstein’s Place in Twentieth-Century Analytic Philosophy (Oxford: Blackwell, 1996), chapter 7.
6.  It might be thought (as suggested by Professor Churchland) that Descartes’ view that the mind can causally affect the movement of the body (understood, according to Professor Churchland, as a conceptual claim) is refuted by the law of conservation of momentum. This is a mistake. It could be refuted (no matter whether it is a conceptual or empirical claim) only if it made sense; but, in the absence of criteria of identity for immaterial substances, it does not. The very idea that the mind is a substance of any kind is not coherent. Hence the statement that the mind, thus understood, possesses causal powers is not intelligible, a fortiori neither confirmable nor refutable by experimental observation and testing. (Reflect on what experimental result would count as showing that it is true.)
7.  Such an epistemic conception informs Professor Timothy Williamson’s lengthy attack on the very idea of a conceptual truth, “Conceptual Truth,” Proceedings of the Aristotelian Society, suppl. vol. 80 (2006). The conception he outlines is not what many great thinkers, from Kant to the present day, meant by “a conceptual truth.” Having criticized, to his own satisfaction, the epistemic conception that he himself delineated, Professor Williamson draws the conclusion that there are no conceptual truths at all. But that is a non sequitur of numbing proportions. For all he has shown (at best) is that there are no conceptual truths that fit the Procrustean epistemic bed he has devised.
8.  The Aristotelian, anti-Cartesian, points that we emphasize are (1) Aristotle’s principle, which we discuss below; (2) Aristotle’s identification of the psuchē with a range of capacities; (3) that capacities are identified by what they are capacities to do; (4) that whether a creature possesses a capacity is to be seen from its activities; and (5) Aristotle’s realization that whether the psuchē and the body are one thing or two is an incoherent question.
9.  It is, of course, not strictly a fallacy, but it leads to fallacies—invalid inferences and mistaken arguments.
10.  A. J. P. Kenny, “The Homunculus Fallacy,” in M. Grene, ed., Interpretations of Life and Mind (London: Routledge, 1971). We preferred the less picturesque but descriptively more accurate name “mereological fallacy” (and, correlatively, “the mereological principle”). We found that neuroscientists were prone to dismiss as childish the fallacy of supposing that there is a homunculus in the brain and to proceed in the next breath to ascribe psychological attributes to the brain.
11.  Not, of course, with his brain, in the sense in which one does things with one’s hands or eyes, nor in the sense in which one does things with one’s talents. To be sure, he would not be able to do any of these things but for the normal functioning of his brain.
12.  D. Dennett, Content and Consciousness (London: Routledge and Kegan Paul, 1969), p. 91.
13.  We were more than a little surprised to find Professor Dennett declaring that his “main points of disagreement” are that he does not believe that “the personal level of explanation is the only level of explanation when the subject matter is human minds and actions” and that he believes that the task of relating these two levels of explanation is “not outside the philosopher’s province” (p. 79). There is no disagreement at all over this. Anyone who has ever taken an aspirin to alleviate a headache, or imbibed excessive alcohol to become jocose, bellicose, or morose, and wants an explanation of the sequence of events must surely share Dennett’s first commitment. Anyone who has concerned himself, as we have done throughout the 452 pages of Philosophical Foundations of Neuroscience, with clarifying the logical relationships between psychological and neuroscientific concepts, and between the phenomena they signify, shares his second one.
14.  L. Wittgenstein, Philosophical Investigations (Oxford: Blackwell, 1953), §281.
15.  The Cartesian conception of the body a human being has is quite mistaken. Descartes conceived of his body as an insensate machine—a material substance without sensation. But our actual conception of our body ascribes verbs of sensation to the body we have—it is our body that aches all over or that itches intolerably.
16.  The human brain is part of the human being. It can also be said to be part of the body a human being is said to have. It is, however, striking that one would, we suspect, hesitate to say of a living person, as opposed to a corpse, that his body has two legs or, of an amputee, that her body has only one leg. The misleading possessive is applied to the human being and to a human corpse, but not, or only hesitantly, to the body the living human being is said to have. Although the brain is a part of the human body, we surely would not say “my body has a brain” or “My body’s brain has meningitis.” That is no coincidence.
17.  We agree with Professor Searle that the question of which of the lower animals are conscious cannot be settled by “linguistic analysis” (p. 104). But, whereas he supposes that it can be settled by investigating their nervous system, we suggest that it can be settled by investigating the behavior the animal displays in the circumstances of its life. Just as we find out whether an animal can see by reference to its responsiveness to visibilia, so too we find out whether an animal is capable of consciousness by investigating its behavioral repertoire and responsiveness to its environment. (That does not imply that being conscious is behaving in a certain way, but only that the criteria for being conscious are behavioral.)
18.  The warrant for applying psychological predicates to others consists of evidential grounds. These may be inductive or constitutive (criterial). Inductive grounds, in these cases, presuppose noninductive, criterial grounds. The criteria for the application of a psychological predicate consist of behavior (not mere bodily movements) in appropriate circumstances. The criteria are defeasible. That such-and-such grounds warrant the ascription of a psychological predicate to another is partly constitutive of the meaning of the predicate but does not exhaust its meaning. Criteria for the application of such a predicate are distinct from its truth-conditions—an animal may be in pain and not show it or exhibit pain behavior without being in pain. (We are no behaviorists.) The truth-conditions of a proposition ascribing a psychological predicate to a being are distinct from its truth. Both the criteria and the truth-conditions are distinct from the general conditions under which the activities of applying or of denying the predicate of creatures can significantly be engaged in. But it is wrong to suppose that a condition of “the language-game’s being played” (as Professor Searle puts it) is the occurrence of publicly observable behavior. For the language game with a psychological predicate is played with its denial no less than with its affirmation. It would also be wrong to conflate the conditions for learning a language game with those for playing it.
19.  J. Z. Young, Programs of the Brain (Oxford: Oxford University Press, 1978), p. 192. Professor Dennett also suggests (p. 90) that we misrepresented Crick in holding that, because he wrote that our brain believes things and makes interpretations on the basis of its previous experience or information (F. Crick, The Astonishing Hypothesis [London: Touchstone, 1995], pp. 28–33, 57), therefore Crick really thought that the brain believes things and makes interpretations, etc. We invite readers to look for themselves at Crick’s cited discussions.
20.  N. Chomsky, Rules and Representations (Oxford: Blackwell, 1980). Far from being oblivious to this, as Professor Dennett asserted (p. 91), the matter was critically discussed in G. P. Baker and P. M. S. Hacker, Language, Sense and Nonsense (Oxford: Blackwell, 1984), pp. 340–45.
21.  D. Dennett, Consciousness Explained (Harmondsworth: Penguin, 1993), pp. 142–44.
22.  Dennett here quotes from his autobiographical entry in S. Guttenplan, ed., A Companion to the Philosophy of Mind (Oxford: Blackwell, 1994), p. 240.
23.  Of course, we are not denying that analogical extension of concepts and conceptual structures is often fruitful in science. The hydrodynamical analogy generated a fruitful, testable, and mathematicized theory of electricity. Nothing comparable to this is evident in the poetic license of Dennett’s intentional stance. It is evident that poetic license allows Professor Dennett to describe a thermostat as sort of believing that it is getting too hot and so switching off the central heating. But this adds nothing to engineering science or to the explanation of homeostatic mechanisms.
Professor Dennett asserts (p. 88) that we did not address his attempts to use what he calls “the intentional stance” in explaining cortical processes. In fact we discussed his idea of the intentional stance at some length (PFN 427–31), giving seven reasons for doubting its intelligibility. Since Professor Dennett has not replied to these objections, we have, for the moment, nothing further to add on the matter.
In the debate at the APA Professor Dennett proclaimed that there are “hundreds, maybe thousands, of experiments” to show that a part of the brain has information that it contributes to “an ongoing interpretation process in another part of the brain.” This, he insisted, is a “sort of asserting—a sort of telling ‘Yes, there is color here,’ ‘Yes, there is motion here.’” This, he said, “is just obvious.” But the fact that cells in the visual striate cortex fire in response to impulses transmitted from the retina does not mean that they have information or sort of information about objects in the visual field and the fact that they respond to impulses does not mean that they interpret or sort of interpret anything. Or should we also argue that an infarct shows that the heart has sort of information about the lack of oxygen in the bloodstream and sort of interprets this as a sign of coronary obstruction? Or that my failing torch has information about the amount of electric current reaching its bulb and interprets this as a sign of the depletion of its batteries?
24.  Thinking does not occur in the human being, but rather is done by the human being. The event of my thinking that you were going to V is located wherever I was located when I thought this; the event of my seeing you V-ing is located wherever I was when I saw you V. That is the only sense in which thinking, perceiving, etc. have a location. To ask, as Professor Searle does (p. 110), where exactly did the thinking occur, in any other sense, is like asking where exactly did a person weigh 160 pounds in some other sense than that specified by the answer “When he was in New York last year.” Sensations, by contrast, have a somatic location—if my leg hurts, then I have a pain in my leg. To be sure, my state (if state it be) of having a pain in my leg obtained wherever I was when my leg hurt.
25.  One needs a normally functioning brain to think or to walk, but one does not walk with one’s brain. Nor does one think with it, any more than one hears or sees with it.
26.  Professor Searle contends that because we repudiate qualia as understood by philosophers, therefore we can give no answer to the question of what going through a mental process consists in (p. 111). If reciting the alphabet in one’s imagination (Professor Searle’s example) counts as a mental process, it consists in first saying to oneself “a,” then “b,” then “c,” etc. until one reaches “x, y, z.” That mental process is not identified by its qualitative feel but by its being the recitation of the alphabet. The criteria for its occurrence include the subject’s say-so. Of course, it can be supposed to be accompanied by as yet unknown neural processes, the locus of which can be roughly identified by inductive correlation using fMRI.
27.  Descartes, Principles of Philosophy 1:46, 67, and especially 4:196.
28.  As is asserted by Professor Churchland, “Cleansing Science,” 469f., 474.
29.  Ibid., p. 470.
30.  For further discussion, see P. M. S. Hacker, Wittgenstein: Meaning and Mind, part 1: The Essays (Oxford: Blackwell, 1993), “Men, Minds and Machines,” pp. 72–81.
31.  C. Blakemore, “Understanding Images in the Brain,” in H. Barlow, C. Blakemore, and M. Weston-Smith, eds., Images and Understanding (Cambridge: Cambridge University Press, 1990), p. 265.
32.  J. Z. Young, Programs of the Brain (Oxford: Oxford University Press, 1978), p. 112.
33.  D. Chalmers, The Conscious Mind (Oxford: Oxford University Press, 1996), p. 4.
34.  F. Crick, The Astonishing Hypothesis (London: Touchstone, 1995), pp. 9f.
35.  A. Damasio, The Feeling of What Happens (London: Heinemann, 1999), p. 9.
36.  Ned Block, “Qualia,” in S. Guttenplan, ed., Blackwell Companion to the Philosophy of Mind (Oxford: Blackwell, 1994), p. 514.
37.  Searle, The Mystery of Consciousness (London: Granta, 1997), p. xiv.
38.  T. Nagel, “What Is It Like to Be a Bat?” reprinted in his Mortal Questions (Cambridge: Cambridge University Press, 1979), p. 170.
39.  Professor Searle (like Grice and Strawson) supposes that perceptual experiences are to be characterized in terms of their highest common factor with illusory and hallucinatory experiences. So all perceptual experience is, as it were, hallucination, but veridical perception is a hallucination with a special kind of cause. This, we think, is mistaken.
40.  Professor Searle asserts that we deny the existence of qualitative experiences (p. 99). We certainly do not deny that people have visual experiences, i.e., that they see things. Nor do we deny that seeing things may have certain qualities. What we deny is that whenever someone sees something, there is something it is like for them to see that thing, let alone that there is something it feels like for them to see what they see. And we deny that “the qualitative feel of the experience” is its “defining essence” (p. 115). Seeing or hearing are not defined by reference to what they feel like, but by reference to what they enable us to detect.
41.  G. Wolford, M. B. Miller, and M. Gazzaniga, “The Left Hemisphere’s Role in Hypothesis Formation,” Journal of Neuroscience 20 (2000), RC 64 (1–4), p. 2.
42.  We are grateful to Robert Arrington, Hanoch Ben-Yami, Hanjo Glock, John Hyman, Anthony Kenny, Hans Oberdiek, Herman Philipse, Bede Rundle, and especially to David Wiggins for their helpful comments on the early draft of this paper, which we presented at the APA, Eastern Division, in New York on December 28, 2005, in an “Authors and Critics” debate.
 
EPILOGUE
 
1.  Professor Searle suggested this at the APA meeting. He claimed that the attainment of this goal will proceed in three steps: first, determination of the neural correlates of consciousness (NCC); second, establishment of the causal relationship between consciousness and these NCC; and, finally, development of a general theory relating consciousness and the NCC.
2.  J. Searle, Mind: A Brief Introduction (Oxford: Oxford University Press, 2004), chapter 5.
3.  J. Searle, “Consciousness: What We Still Don’t Know,” New York Review of Books, January 13, 2005. This is a review of Christof Koch, The Quest for Consciousness (Greenwood Village, CO: Roberts, 2004).
 
STILL LOOKING
 
1.  A most discerning essay on this work is Lana Cable, “Such nothing is terrestriall: philosophy of mind on Phineas Fletcher’s Purple Island,” Journal of the History of the Behavioral Sciences 19(2): 136–52.
2.  Aristotle, On the Soul, 403a25–403b1, in Richard McKeon, ed., The Basic Works of Aristotle, J.A. Smith, trans. (New York: Random House, 1941).
3.  Ibid., 408b10–15.
4.  This is in his Philosophical Investigations, §265.