PHILOSOPHICAL FOUNDATIONS OF NEUROSCIENCE
An Excerpt from Chapter 3
M. R. BENNETT AND P. M. S. HACKER
3.1  MEREOLOGICAL CONFUSIONS IN COGNITIVE NEUROSCIENCE
 
Ascribing psychological attributes to the brain
Leading figures of the first two generations of modern brain-neuroscientists were fundamentally Cartesian. Like Descartes, they distinguished the mind from the brain and ascribed psychological attributes to the mind. The ascription of such predicates to human beings was, accordingly, derivative—as in Cartesian metaphysics. The third generation of neuroscientists, however, repudiated the dualism of their teachers. In the course of explaining the possession of psychological attributes by human beings, they ascribed such attributes not to the mind but to the brain or parts of the brain.
Neuroscientists assume that the brain has a wide range of cognitive, cogitative, perceptual and volitional capacities. Francis Crick asserts that
 
What you see is not what is really there; it is what your brain believes is there…. Your brain makes the best interpretation it can according to its previous experience and the limited and ambiguous information provided by your eyes…. The brain combines the information provided by the many distinct features of the visual scene (aspects of shape, colour, movement, etc.) and settles on the most plausible interpretation of all these various clues taken together…. What the brain has to build up is a many-levelled interpretation of the visual scene … [Filling-in] allows the brain to guess a complete picture from only partial information—a very useful ability.1
 
So the brain has experiences, believes things, interprets clues on the basis of information made available to it and makes guesses. Gerald Edelman holds that structures within the brain ‘categorize, discriminate, and recombine the various brain activities occurring in different kinds of global mappings’, and that the brain ‘recursively relates semantic to phonological sequences and then generates syntactic correspondences, not from preexisting rules, but by treating rules developing in memory as objects for conceptual manipulation’.2 Accordingly the brain categorizes, indeed, it ‘categorizes its own activities (particularly its perceptual categorizations)’ and conceptually manipulates rules. Colin Blakemore argues that
 
We seem driven to say that such neurons [as respond in a highly specific manner to, e.g., line orientation] have knowledge. They have intelligence, for they are able to estimate the probability of outside events—events that are important to the animal in question. And the brain gains its knowledge by a process analogous to the inductive reasoning of the classical scientific method. Neurons present arguments to the brain based on the specific features that they detect, arguments on which the brain constructs its hypothesis of perception.3
 
So the brain knows things, reasons inductively, constructs hypotheses on the basis of arguments, and its constituent neurons are intelligent, can estimate probabilities, and present arguments. J.Z. Young shared much the same view. He argued that ‘we can regard all seeing as a continual search for the answers to questions posed by the brain. The signals from the retina constitute “messages” conveying these answers. The brain then uses this information to construct a suitable hypothesis about what is there.’4 Accordingly, the brain poses questions, searches for answers, and constructs hypotheses. Antonio Damasio claims that ‘our brains can often decide well, in seconds, or minutes, depending on the time frame we set as appropriate for the goal we want to achieve, and if they can do so, they must do the marvellous job with more than just pure reason’5, and Benjamin Libet suggests that ‘the brain “decides” to initiate or, at least, to prepare to initiate the act before there is any reportable subjective awareness that such a decision has taken place.’6 So brains decide, or at least “decide”, and initiate voluntary action.
Psychologists concur. J.P. Frisby contends that ‘there must be a symbolic description in the brain of the outside world, a description cast in symbols which stand for the various aspects of the world of which sight makes us aware.’7 So there are symbols in the brain, and the brain uses, and presumably understands, symbols. Richard Gregory conceives of seeing as ‘probably the most sophisticated of all the brain’s activities: calling upon its stores of memory data; requiring subtle classifications, comparisons and logical decisions for sensory data to become perception.’8 So the brain sees, makes classifications, comparisons, and decisions. And cognitive scientists think likewise. David Marr held that ‘our brains must somehow be capable of representing … information…. The study of vision must therefore include … also an inquiry into the nature of the internal representations by which we capture this information and make it available as a basis for decisions about our thoughts and actions.’9 And Philip Johnson-Laird suggests that the brain ‘has access to a partial model of its own capabilities’ and has the ‘recursive machinery to embed models within models’; consciousness, he contends, ‘is the property of a class of parallel algorithms’.10
 
Questioning the intelligibility of ascribing psychological attributes to the brain
With such broad consensus on the correct way to think about the functions of the brain and about explaining the causal preconditions for human beings to possess and exercise their natural powers of thought and perception, one is prone to be swept along by enthusiastic announcements—of new fields of knowledge conquered, new mysteries unveiled.11 But we should take things slowly, and pause for thought. We know what it is for human beings to experience things, to see things, to know or believe things, to make decisions, to interpret equivocal data, to guess and form hypotheses. We understand what it is for people to reason inductively, to estimate probabilities, to present arguments, to classify and categorize the things they encounter in their experience. We pose questions and search for answers, using a symbolism, namely our language, in terms of which we represent things. But do we know what it is for a brain to see or hear, for a brain to have experiences, to know or believe something? Do we have any conception of what it would be for a brain to make a decision? Do we grasp what it is for a brain (let alone a neuron) to reason (no matter whether inductively or deductively), to estimate probabilities, to present arguments, to interpret data and to form hypotheses on the basis of its interpretations? We can observe whether a person sees something or other—we look at his behaviour and ask him questions. But what would it be to observe whether a brain sees something—as opposed to observing the brain of a person who sees something? We recognize when a person asks a question and when another answers it. But do we have any conception of what it would be for a brain to ask a question or answer one? These are all attributes of human beings. Is it a new discovery that brains also engage in such human activities?
Or is it a linguistic innovation, introduced by neuroscientists, psychologists and cognitive scientists, extending the ordinary use of these psychological expressions for good theoretical reasons? Or, more ominously, is it a conceptual confusion? Might it be the case that there is simply no such thing as the brain’s thinking or knowing, seeing or hearing, believing or guessing, possessing and using information, constructing hypotheses, etc., i.e. that these forms of words make no sense? But if there is no such thing, why have so many distinguished scientists thought that these phrases, thus employed, do make sense?
 
Whether psychological attributes can intelligibly be ascribed to the brain is a philosophical, and therefore a conceptual, question, not a scientific one
The question we are confronting is a philosophical question, not a scientific one. It calls for conceptual clarification, not for experimental investigation. One cannot investigate experimentally whether brains do or do not think, believe, guess, reason, form hypotheses, etc. until one knows what it would be for a brain to do so, i.e. until we are clear about the meanings of these phrases and know what (if anything) counts as a brain’s doing so and what sort of evidence supports the ascription of such attributes to the brain. (One cannot look for the poles of the earth until one knows what a pole is, i.e. what the expression ‘pole’ means, and also what counts as finding a pole of the earth. Otherwise, like Winnie-the-Pooh, one might embark on an expedition to the East Pole.) The moot question is: does it make sense to ascribe such attributes to the brain? Is there any such thing as a brain’s thinking, believing, etc.? (Is there any such thing as the East Pole?)
In the Philosophical Investigations, Wittgenstein made a profound remark that bears directly on our concerns. ‘Only of a human being and what resembles (behaves like) a living human being can one say: it has sensations; it sees, is blind; hears, is deaf; is conscious or unconscious.’12 This epitomizes the conclusions we shall reach in our investigation. Stated with his customary terseness, it needs elaboration, and its ramifications need to be elucidated.
The point is not a factual one. It is not a matter of fact that only human beings and what behaves like human beings can be said to be the subject of these psychological predicates. If it were, then it might indeed be a discovery, recently made by neuroscientists, that brains too see and hear, think and believe, ask and answer questions, form hypotheses and make guesses on the basis of information. Such a discovery would, to be sure, show that it is not only of a human being and what behaves like a human being that one can say such things. This would be astonishing, and we should want to hear more. We should want to know what the evidence for this remarkable discovery was. But, of course, it is not like this. The ascription of psychological attributes to the brain is not warranted by a neuroscientific discovery that shows that contrary to our previous convictions, brains do think and reason, just as we do ourselves. The neuroscientists, psychologists and cognitive scientists who adopt these forms of description have not done so as a result of observations which show that brains think and reason. Susan Savage-Rumbaugh has produced striking evidence to show that bonobo chimpanzees, appropriately trained and taught, can ask and answer questions, can reason in a rudimentary fashion, give and obey orders, and so on. The evidence lies in their behaviour—in what they do (including how they employ symbols) in their interactions with us. This was indeed very surprising. For no one thought that such capacities could be acquired by apes. But it would be absurd to think that the ascription of cognitive and cogitative attributes to the brain rests on comparable evidence. It would be absurd because we do not even know what would show that the brain has such attributes.
 
The misascription of psychological attributes to the brain is a degenerate form of Cartesianism
Why then was this form of description, and the attendant forms of explanation that are dependent upon it, adopted without argument or reflection? We suspect that the answer is—as a result of an unthinking adherence to a mutant form of Cartesianism. It was a characteristic feature of Cartesian dualism to ascribe psychological predicates to the mind, and only derivatively to the human being. Sherrington and his pupils Eccles and Penfield cleaved to a form of dualism in their reflections on the relationship between their neurological discoveries and human perceptual and cognitive capacities. Their successors rejected the dualism—quite rightly. But the predicates which dualists ascribe to the immaterial mind, the third generation of brain neuroscientists applied unreflectively to the brain instead. It was no more than an apparently innocuous corollary of rejecting the two-substance dualism of Cartesianism in neuroscience. These scientists proceeded to explain human perceptual and cognitive capacities and their exercise by reference to the brain’s exercise of its cognitive and perceptual capacities.
 
The ascription of psychological attributes to the brain is senseless
It is our contention that this application of psychological predicates to the brain makes no sense. It is not that as a matter of fact brains do not think, hypothesize and decide, see and hear, ask and answer questions; rather, it makes no sense to ascribe such predicates or their negations to the brain. The brain neither sees nor is it blind—just as sticks and stones are not awake, but they are not asleep either. The brain does not hear, but it is not deaf, any more than trees are deaf. The brain makes no decisions, but neither is it indecisive. Only what can decide can be indecisive. So too, the brain cannot be conscious; only the living creature whose brain it is can be conscious—or unconscious. The brain is not a logically appropriate subject for psychological predicates. Only a human being and what behaves like one can intelligibly and literally be said to see or be blind, hear or be deaf, ask questions or refrain from asking.
Our point, then, is a conceptual one. It makes no sense to ascribe psychological predicates (or their negations) to the brain, save metaphorically or metonymically. The resultant combination of words does not say something that is false, rather it says nothing at all, for it lacks sense. Psychological predicates are predicates that apply essentially to the whole living animal, not to its parts. It is not the eye (let alone the brain) that sees, but we see with our eyes (and we do not see with our brains, although without a brain functioning normally in respect of the visual system, we would not see). So too, it is not the ear that hears, but the animal whose ear it is. The organs of an animal are parts of the animal, and psychological predicates are ascribable to the whole animal, not to its constituent parts.
 
Neuroscientists’ ascription of psychological attributes to the brain may be termed ‘the mereological fallacy’ in neuroscience
Mereology is the logic of part/whole relations. The neuroscientists’ mistake of ascribing to the constituent parts of an animal attributes that logically apply only to the whole animal we shall call ‘the mereological fallacy’ in neuroscience.13 The principle that psychological predicates which apply only to human beings (or other animals) as wholes cannot intelligibly be applied to their parts, such as the brain, we shall call ‘the mereological principle’ in neuroscience.14 Human beings, but not their brains, can be said to be thoughtful or to be thoughtless; animals, but not their brains, let alone the hemispheres of their brains, can be said to see, hear, smell and taste things; people, but not their brains, can be said to make decisions or to be indecisive.
It should be noted that there are many predicates that can apply both to a given whole (in particular a human being) and to its parts, and whose application to the one may be inferred from its application to the other. A man may be sunburnt and his face may be sunburnt; he may be cold all over, so his hands will be cold too. Similarly, we sometimes extend the application of a predicate from a human being to parts of the human body, e.g. we say that a man gripped the handle, and also that his hand gripped the handle, that he slipped and that his foot slipped. Here there is nothing logically awry. But psychological predicates apply paradigmatically to the human being (or animal) as a whole, and not to the body and its parts. There are a few exceptions, e.g. the application of the verbs of sensation such as ‘to hurt’, to parts of the body, e.g. ‘My hand hurts’, ‘You are hurting my hand’.15 But the range of psychological predicates that are our concern, i.e. those that have been invoked by neuroscientists, psychologists and cognitive scientists in their endeavours to explain human capacities and their exercise, have no literal application to parts of the body. In particular they have no intelligible application to the brain.
 
3.2  METHODOLOGICAL QUALMS
 
Methodological objections to the accusation that neuroscientists are guilty of a mereological fallacy
If a person ascribes a predicate to an entity to which the predicate in question logically could not apply, and this is pointed out to him, then it is only to be expected that he will indignantly insist that he didn’t ‘mean it like that’. After all, he may say, since a nonsense is a form of words that says nothing, that fails to describe a possible state of affairs, he obviously did not mean a nonsense—one cannot mean a nonsense, since there is nothing, as it were, to mean. So his words must not be taken to have their ordinary meaning. The problematic expressions were perhaps used in a special sense, and are really merely homonyms; or they were analogical extensions of the customary use—as is indeed common in science; or they were used in a metaphorical or figurative sense. If these escape routes are available, then the accusation that neuroscientists fall victim to the mereological fallacy is unwarranted. Although they make use of the same psychological vocabulary as the man in the street, they are using it in a different way. So objections to neuroscientists’ usage based upon the ordinary use of these expressions are irrelevant.
Things are not that straightforward, however. Of course, the person who misascribes a predicate in the manner in question does not intend to utter a form of words that lacks sense. But that he did not mean to utter a nonsense does not ensure that he did not do so. Although he will naturally insist that he ‘didn’t mean it like that’, that the predicate in question was not being used in its customary sense, his insistence is not the final authority. The final authority in the matter is his own reasoning. We must look at the consequences he draws from his own words—and it is his inferences that will show whether he was using the predicate in a new sense or misusing it. If he is to be condemned, it must be out of his own mouth.
So, let us glance at the proposed escape routes that are intended to demonstrate that neuroscientists and cognitive scientists are not guilty of the errors of which we have accused them.
 
First objection (Ullman): the psychological predicates thus used are homonyms of ordinary psychological predicates, and have a different, technical, meaning
First, it might be suggested that neuroscientists are in effect employing homonyms, which mean something altogether different. There is nothing unusual, let alone amiss, in scientists introducing a new way of talking under the pressure of a new theory. If this is confusing to benighted readers, the confusion can easily be resolved. Of course, brains do not literally think, believe, infer, interpret or hypothesize, they think*, believe*, infer*, interpret* or hypothesize*. They do not have or construct symbolic representations, but symbolic representations*.16
 
Second objection (Gregory): the psychological predicates thus used are analogical extensions of the ordinary expressions
Secondly, it might be suggested that neuroscientists are extending the ordinary use of the relevant vocabulary by analogy—as has often been done in the history of science, for example in the analogical extension of hydrodynamics in the theory of electricity. So to object to the ascription of psychological predicates to the brain on the grounds that in ordinary parlance such predicates are applicable only to the animal as a whole would be to display a form of semantic inertia.17
 
Third objection (Blakemore): neuroscientists’ ascription of psychological attributes to the brain is figurative or metaphorical, since they know perfectly well that the brain does not think or use maps
Finally, it might be argued that neuroscientists do not really think that the brain reasons, argues, asks and answers questions just as we do. They do not really believe that the brain interprets clues, makes guesses, or contains symbols which describe the outside world. And although they talk of there being ‘maps’ in the brain and of the brain’s containing ‘internal representations’, they are not using these words in their common or vulgar sense. This is figurative and metaphorical speech—sometimes even poetic licence.18 Neuroscientists, therefore, are not in the least misled by such ways of speaking—they know perfectly well what they mean, but lack the words to say it save metaphorically or figuratively.
 
Reply to the objection that neuroscientists are using the psychological vocabulary in a special technical sense
With regard to the misuse of the psychological vocabulary involved in ascribing psychological predicates to the brain, all the evidence points to the fact that neuroscientists are not using these terms in a special sense. Far from being new homonyms, the psychological expressions they use are being invoked in their customary sense, otherwise the neuroscientists would not draw the inferences from them which they do draw. When Crick asserts that ‘what you see is not what is really there; it is what your brain believes is there …’ it is important that he takes ‘believes’ to have its normal connotations—that it does not mean the same as some novel term ‘believes*’. For it is part of Crick’s tale that the belief is the outcome of an interpretation based on previous experience and information (and not the outcome of an interpretation* based on previous experience* and information*). When Semir Zeki remarks that the acquisition of knowledge is a ‘primordial function of the brain’19, he means knowledge (not knowledge*)—otherwise he would not think that it is the task of future neuroscience to solve the problems of epistemology (but only, presumably, of epistemology*). Similarly, when Young talks of the brain’s containing knowledge and information, which is encoded in the brain ‘just as knowledge can be recorded in books or computers’20, he means knowledge (not knowledge*)—since it is knowledge and information (not knowledge* and information*) that can be recorded in books and computers. When Milner, Squire and Kandel talk of ‘declarative memory’, they explain that this phrase signifies ‘what is ordinarily meant by the term “memory”’21, but then go on to declare that such memories (not memories*) are ‘stored in the brain’. That presupposes that it makes sense to speak of storing memories (in the ordinary sense of the word) in the brain.22
 
Reply to Ullman: David Marr on ‘representations’
The accusation of committing the mereological fallacy cannot be that easily rebutted. But Shimon Ullman may appear to be on stronger grounds when it comes to talk of internal representations and symbolic representations (as well as maps) in the brain. If ‘representation’ does not mean what it ordinarily does, if ‘symbolic’ has nothing to do with symbols, then it may indeed be innocuous to speak of there being internal, symbolic representations in the brain. (And if ‘maps’ have nothing to do with atlases, but only with mappings, then it may also be innocuous to speak of there being maps in the brain.) It is extraordinarily ill-advised to multiply homonyms, but it need involve no conceptual incoherence, as long as the scientists who use these terms thus do not forget that the terms do not have their customary meaning. Unfortunately, they typically do forget this and proceed to cross the new use with the old, generating incoherence. Ullman, defending Marr, insists (perfectly correctly) that certain brain events can be viewed as representations* of depth or orientation or reflectance23, i.e. that one can correlate certain neural firings with features in the visual field (denominating the former ‘representations*’ of the latter). But it is evident that this is not all that Marr meant. He claimed that numeral systems (Roman or Arabic numerals, binary notation) are representations. However, such notations have nothing to do with causal correlations, but with representational conventions. He claimed that ‘a representation for shape would be a formal scheme for describing some aspects of shape, together with rules that specify how the scheme is applied to any particular shape’24, that a formal scheme is ‘a set of symbols with rules for putting them together’25, and that ‘a representation, therefore, is not a foreign idea at all—we all use representations all the time.
However, the notion that one can capture some aspect of reality by making a description of it using a symbol and that to do so can be useful seems to me to be a powerful and fascinating idea’.26 But the sense in which we ‘use representations all the time’, in which representations are rule-governed symbols, and in which they are used for describing things, is the semantic sense of ‘representation’—not a new homonymical causal sense. Marr has fallen into a trap of his own making.27 He in effect conflates Ullman’s representations*, that are causal correlates, with representations, that are symbols or symbol systems with a syntax and meaning determined by conventions.
 
Reply to Ullman: Young on ‘maps’ and Frisby on ‘symbolic representations’
Similarly, it would be misleading, but otherwise innocuous, to speak of maps in the brain when what is meant is that certain features of the visual field can be mapped onto the firings of groups of cells in the ‘visual’ striate cortex. But then one cannot go on to say, as Young does, that the brain makes use of its maps in formulating its hypotheses about what is visible. So too, it would be innocuous to speak of there being symbolic representations in the brain, as long as ‘symbolic’ has nothing to do with semantic meaning, but signifies only ‘natural meaning’ (as in ‘smoke means fire’). But then one cannot go on to say, as Frisby does, that ‘there must be a symbolic description in the brain of the outside world, a description cast in symbols which stand for the various aspects of the world of which sight makes us aware’.28 For this use of ‘symbol’ is evidently semantic. For while smoke means fire, in as much as it is a sign of fire (an inductively correlated indication), it is not a sign for fire. Smoke rising from a distant hillside is not a description of fire cast in symbols, and the firing of neurons in the ‘visual’ striate cortex is not a symbolic description of objects in the visual field, even though a neuroscientist may be able to infer facts about what is visible to an animal from his knowledge of what cells are firing in its ‘visual’ striate cortex. The firing of cells in V1 may be signs of a figure with certain line orientations in the animal’s visual field, but they do not stand for anything, they are not symbols, and they do not describe anything.
 
Reply to the second objection that in ascribing psychological attributes to the brain, neuroscientists are not committing the mereological fallacy, but merely extending the psychological vocabulary analogically
The thought that neuroscientific usage, far from being conceptually incoherent, is innovative, extending the psychological vocabulary in novel ways, might seem to offer another way of sidestepping the accusation that neuroscientists’ descriptions of their discoveries commonly transgress the bounds of sense. It is indeed true that analogies are a source of scientific insight. The hydrodynamical analogy proved fruitful in the development of the theory of electricity, even though electrical current does not flow in the same sense as water flows and an electrical wire is not a kind of pipe. The moot question is whether the application of the psychological vocabulary to the brain is to be understood as analogical.
The prospects do not look good. The application of psychological expressions to the brain is not part of a complex theory replete with functional, mathematical relationships expressible by means of quantifiable laws as are to be found in the theory of electricity. Something much looser seems to be needed. So, it is true that psychologists, following Freud and others, have extended the concepts of belief, desire and motive in order to speak of unconscious beliefs, desires and motives. When these concepts undergo such analogical extension, something new stands in need of explanation. The newly extended expressions no longer admit of the same combinatorial possibilities as before. They have a different, importantly related, meaning, and one which requires explanation. The relationship between a (conscious) belief and an unconscious belief, for example, is not akin to the relationship between a visible chair and an occluded chair—it is not ‘just like a conscious belief only unconscious’, but more like the relationship between √1 and √-1. But when neuroscientists such as Sperry and Gazzaniga speak of the left hemisphere making choices, of its generating interpretations, of its knowing, observing and explaining things—it is clear from the sequel that these psychological expressions have not been given a new meaning. Otherwise it would not be said that a hemisphere of the brain is ‘a conscious system in its own right, perceiving, thinking, remembering, reasoning, willing and emoting, all at a characteristically human level’.29
It is not semantic inertia that motivates our claim that neuroscientists are involved in various forms of conceptual incoherence. It is rather the acknowledgement of the requirements of the logic of psychological expressions. Psychological predicates are predicable only of a whole animal, not of its parts. No conventions have been laid down to determine what is to be meant by the ascription of such predicates to a part of an animal, in particular to its brain. So the application of such predicates to the brain or the hemispheres of the brain transgresses the bounds of sense. The resultant assertions are not false, for to say that something is false, we must have some idea of what it would be for it to be true—in this case, we should have to know what it would be for the brain to think, reason, see and hear, etc. and to have found out that as a matter of fact the brain does not do so. But we have no such idea, and these assertions are not false. Rather, the sentences in question lack sense. This does not mean that they are silly or stupid. It means that no sense has been assigned to such forms of words, and that accordingly they say nothing at all, even though it looks as if they do.
 
Reply to the third objection (Blakemore) that applying psychological predicates to the brain is merely metaphorical
The third methodological objection was raised by Colin Blakemore. Of Wittgenstein’s remark that ‘only of a living human being and what resembles (behaves like) a living human being can one say: it has sensations; it sees; is blind; hears; is deaf; is conscious or unconscious’, Blakemore observes that it ‘seems trivial, maybe just plain wrong’. Addressing the accusation that neuroscientists’ talk of there being ‘maps’ in the brain is pregnant with possibilities of confusion (since all that can be meant is that one can map, for example, aspects of items in the visual field onto the firing of cells in the ‘visual’ striate cortex), Blakemore notes that there is overwhelming evidence for ‘topographic patterns of activity’ in the brain.
 
Since Hughlings Jackson’s time, the concept of functional sub-division and topographic representation has become a sine qua non of brain research. The task of charting the brain is far from complete but the successes of the past make one confident that each part of the brain (and especially the cerebral cortex) is likely to be organized in a spatially ordered fashion. Just as in the decoding of a cipher, the translation of Linear B or the reading of hieroglyphics, all that we need to recognize the order in the brain is a set of rules—rules that relate the activity of the nerves to events in the outside world or in the animal’s body.30
 
To be sure, the term ‘representation’ here merely signifies systematic causal connectedness. That is innocuous enough. But it must not be confused with the sense in which a sentence of a language can be said to represent the state of affairs it describes, a map to represent that of which it is a map, or a painting to represent that of which it is a painting. Nevertheless, such ambiguity in the use of ‘representation’ is perilous, since it is likely to lead to a confusion of the distinct senses. Just how confusing it can be is evident in Blakemore’s further observations:
 
Faced with such overwhelming evidence for topographic patterns of activity in the brain it is hardly surprising that neurophysiologists and neuroanatomists have come to speak of the brain having maps, which are thought to play an essential part in the representation and interpretation of the world by the brain, just as the maps of an atlas do for the reader of them. The biologist J.Z. Young writes of the brain having a language of a pictographic kind: ‘What goes on in the brain must provide a faithful representation of events outside it, and the arrangements of the cells in it provide a detailed model of the world. It communicates meanings by topographical analogies’.31 But is there a danger in the metaphorical use of such terms as ‘language’, ‘grammar’, and ‘map’ to describe the properties of the brain? … I cannot believe that any neurophysiologist believes that there is a ghostly cartographer browsing through the cerebral atlas. Nor do I think that the employment of common language words (such as map, representation, code, information and even language) is a conceptual blunder of the kind [imagined]. Such metaphorical imagery is a mixture of empirical description, poetic licence and inadequate vocabulary.32
 
Whether there is any danger in a metaphorical use of words depends on how clear it is that it is merely metaphorical, and on whether the author remembers that that is all it is. Whether neuroscientists’ ascription to the brain of attributes that can be applied literally only to an animal as a whole is actually merely metaphorical (metonymical or synecdochical) is very doubtful. Of course, neurophysiologists do not think that there is a ‘ghostly cartographer’ browsing through a cerebral atlas—but they do think that the brain makes use of the maps. According to Young, the brain constructs hypotheses, and it does so on the basis of this ‘topographically organized representation’.33 The moot question is: what inferences do neuroscientists draw from their claim that there are maps or representations in the brain, or from their claim that the brain contains information, or from talk (J.Z. Young’s talk) of ‘languages of the brain’? These alleged metaphorical uses are so many banana-skins in the pathway of their user. He need not step on them and slip, but he probably will.
 
Blakemore’s confusion Just how easy it is for confusion to ensue from what is alleged to be harmless metaphor is evident in the paragraph of Blakemore quoted above. For while it may be harmless to talk of ‘maps’, i.e. of mappings of features of the perceptual field onto topographically related groups of cells that are systematically responsive to such features, it is anything but harmless to talk of such ‘maps’ as playing ‘an essential part in the representation and interpretation of the world by the brain, just as the maps of an atlas do for the reader of them’ (our italics). In the first place, it is not clear what sense is to be given to the term ‘interpretation’ in this context. For it is by no means evident what could be meant by the claim that the topographical relations between groups of cells that are systematically related to features of the perceptual field play an essential role in the brain’s interpreting something. To interpret, literally speaking, is to explain the meaning of something, or to take something that is ambiguous to have one meaning rather than another. But it makes no sense to suppose that the brain explains anything, or that it apprehends something as meaning one thing rather than another. If we look to J.Z. Young to find out what he had in mind, what we find is the claim that it is on the basis of such maps that the brain ‘constructs hypotheses and programs’—and this only gets us deeper into the morass.
More importantly, whatever sense we can give to Blakemore’s claim that ‘brain-maps’ (which are not actually maps) play an essential part in the brain’s ‘representation and interpretation of the world’, it cannot be ‘just as the maps of an atlas do for the reader of them’. For a map is a pictorial representation, made in accordance with conventions of mapping and rules of projection. Someone who can read an atlas must know and understand these conventions, and read off, from the maps, the features of what is represented. But the ‘maps’ in the brain are not maps, in this sense, at all. The brain is not akin to the reader of a map, since it cannot be said to know any conventions of representation or methods of projection, or to read anything off the topographical arrangement of firing cells in accordance with a set of conventions. For the cells are not arranged in accordance with conventions at all, and the correlation between their firing and features of the perceptual field is not a conventional but a causal one.34