10. Neuroimaging and its Limits
We live in an era of reductionist neuroscience, where virtually any aspect of human experience is held to be based upon specific forms of brain activity. This notion has been reinforced significantly by emerging technologies that allow us non-invasively to ‘listen in’ to the living brain. In light of this advance, there has even been a highly questionable trend to rebrand areas traditionally not considered science with a ‘neuro’ prefix—as for example ‘neurophilosophy’, ‘neuroethics’ and even, apparently, ‘neuroarthistory’.[1]
In the current chapter, I will examine the practical and statistical limitations of brain-scanning and the problems with its underlying theories. Neuroimaging technology has transformed cognitive science within the last two decades, seeming to confirm a picture of the brain as a highly compartmentalized agglomeration of semi-autonomous ‘modules’, each of which has its own aims, talents and influences.[2] Much of this material is quite technical, but I think it is necessary to examine it closely to gain an understanding of the technology and its implications.
Some neuroscientists even hope that one day we might be able to build a cerebroscope: a machine that translates brain activity into a report about the subjective state of the participant.[3] According to some, this would entail unlocking the neural code of the brain, just as geneticists unlocked DNA in the 1950s,[4] and a number of developments suggest this possibility. Carter noted that it is already possible to identify from neural patterns which specific image, out of a large set of completely novel natural images, an observer has seen, and that ‘it may soon be possible to reconstruct a picture of a person’s visual experience from measurements of brain activity alone’.[5] To this we might add striking developments, such as the ‘decoding’ of neural patterns of severely paralysed people via microelectrodes, which allows brain signals to be translated into words.[6] As Carter notes, ‘whenever someone says of brain scanning “it’ll never be able to do that,” it goes and does it’.[7]
Neuroimaging experiments often rely upon a range of implicit a priori assumptions to work, including a commitment to some form of neurological reductionism and the idea that the mind/brain is modular in nature and consists of basically stable elements. It is also often assumed that the underlying ‘cognitive architecture’ is robust and uniform between humans, or at least comparable. But the central assumption seems to be that mental states can be considered purely a result of specific structures and functions of the brain and could, at least in principle, be read off in various ways. Collectively, these sorts of assumptions narrow the field of enquiry and make a research programme possible and practical. Whilst there is nothing wrong with such narrowing in terms of a research programme, it is important to understand that this work is strongly predicated upon these assumptions, and will necessarily produce a partial picture of human beings.
How to Map the Brain
There are a number of ways to map brain activity. Electroencephalography (EEG) measures changes in electrical potential using scalp electrodes, a technique first used in 1929 by Hans Berger.[8] Positron Emission Tomography (PET) scans rely on the administration of a radioactive substance, which becomes incorporated into blood glucose and emits positrons. These are detected by rings of radiation detectors that build up a picture of slices of the brain. fMRI (functional Magnetic Resonance Imaging) measures radio signals emitted by hydrogen atoms when they are placed in a magnetic field; such atoms are abundant in the brain’s water molecules. The signal is finite and decays over time, resulting in decreased image intensity,[9] so there is only a limited time window in which to acquire images. Oxygenated and deoxygenated blood have different magnetic properties: an increase in deoxygenated blood in the brain reduces image intensity, whereas a decrease raises it. fMRI can produce very detailed pictures of the human brain, especially with the use of the BOLD (Blood Oxygen Level Dependent) contrast, which lets researchers measure local brain metabolism.[10]
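To make the decay explicit, the measured signal at echo time $TE$ follows, to a first approximation, the standard exponential relation

\[
S(TE) \;\approx\; S_0 \, e^{-TE/T_2^{*}},
\]

where $S_0$ is the initial signal and $T_2^{*}$ is the effective transverse relaxation time. Deoxygenated haemoglobin is paramagnetic and shortens $T_2^{*}$, which is why changes in blood oxygenation show up as changes in image intensity. This is a textbook simplification rather than a full account of the physics.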
An fMRI study is performed as follows.[11] Experiments produce two kinds of 3D pictures: an anatomical scan and a functional scan. Functional scans show the BOLD signal, which is represented by roughly cube-shaped areas called voxels or ‘volumetric pixels’. The total number of these voxels is between 40,000 and 500,000, depending upon the scanner settings, and a new image is produced every two to three seconds during a scan.
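As a concrete (if artificial) illustration, the raw output of such a scan can be thought of as a four-dimensional array: three spatial dimensions of voxels, sampled repeatedly over time. A minimal sketch in Python, with made-up grid dimensions standing in for real scanner settings:

```python
import numpy as np

# Hypothetical scanner settings (illustrative only, not from any real study)
nx, ny, nz = 64, 64, 30      # voxel grid: 64 x 64 in-plane, 30 slices
n_volumes = 120              # one 3D image every ~2 seconds -> 4 minutes of data

n_voxels = nx * ny * nz      # 122,880 voxels, within the 40,000-500,000 range cited

# Stand-in for real BOLD data: a 4D array (x, y, z, time) of random numbers
bold = np.random.default_rng(0).standard_normal((nx, ny, nz, n_volumes))
print(bold.shape, n_voxels)
```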
These data are then pre-processed to reduce noise and to allow comparison between different brains. For statistical reasons, voxels are then often mapped onto a model ‘average brain’. In experiments that involve behavioural tests, participants are often asked to alternate between different tasks, and differences in the BOLD signals between the tasks are analysed. This provides two different data sets for the researcher to compare. Voxel sets are analysed statistically as matrices, and investigators often end up selecting subsets of voxels and measuring averages across them. Voxels are typically selected on the basis of their anatomical location or their functional response in a scan.
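The core subtraction logic can be sketched as follows. This is a deliberately crude caricature (real analyses model the hemodynamic response and correct for multiple comparisons), with synthetic data in place of a pre-processed scan:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_voxels, n_vols = 5000, 120
data = rng.standard_normal((n_voxels, n_vols))   # voxels x time, synthetic

# Suppose alternate volumes were acquired during task A and task B
task_a = np.arange(n_vols) % 2 == 0

# Per-voxel comparison of mean BOLD between the two conditions
t, p = stats.ttest_ind(data[:, task_a], data[:, ~task_a], axis=1)

# Voxels passing an (uncorrected) threshold would be declared 'active';
# note that with 5,000 tests, pure noise still yields some hits by chance
print(np.sum(p < 0.001))
```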
Sometimes, such an analysis reveals extraordinary correlations. For example, in early 2009, a team at University College London found that they could accurately predict the position of subjects within a virtual environment solely from the pattern of activity in a part of the brain called the hippocampus.[12] They concluded that the results showed ‘that highly abstracted representations of space are expressed in the human hippocampus’.[13] Whether or not the ‘memory trace’ interpretation is correct,[14] this finding demonstrates a very close match between ‘thinking’ of a particular spatial location and a particular pattern of neural activity.
The picture muddies somewhat with more complex cognitive functions, where persistent correlations seem harder to find and the evidence more contradictory. In a systematic literature review and meta-analysis, Cabeza and Nyberg compared regions that were activated during similar tasks. They found that regions associated with sensory and motor activities, such as those in the perceptual studies mentioned above, do seem to show localization, but for higher cognitive processes (problem solving or ‘working memory’), the data are far more diffuse. In the latter, the peaks of brain activity spread across anything from a quarter to a half of the brain.[15]
Issues with Statistics and Data Selection
A number of different imaging and statistical methods are used for sorting through the data. A central assumption is that the fMRI signal is proportional to a measure of local neural activity, averaged over a few millimetres in the brain and over a time of several seconds. This is known as the linear transform model, and it is claimed to capture the complex relationship between an outside stimulus (say a word or picture) and the activity of the brain. The model is acknowledged to be ‘at best an approximation of the complex interactions between neuronal activity, metabolic demand and blood flow and oxygenation’.[16] It assumes a more-or-less straightforward relationship between what the neurons do, the energy they need and different levels of blood flow.
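In practice, the linear transform model amounts to treating the measured BOLD signal as neural activity convolved with a slow ‘hemodynamic response function’. A minimal sketch of that idea, using the common double-gamma approximation to the response (the parameter values are conventional defaults, not taken from any particular study):

```python
import numpy as np
from scipy.stats import gamma

# Double-gamma approximation to the hemodynamic response function (HRF)
t = np.arange(0, 30, 0.5)                      # 30 s window, 0.5 s resolution
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6   # peak ~5 s, later undershoot
hrf /= hrf.sum()

# A simple stimulus: 'on' for 10 s, 'off' otherwise
stim = np.zeros(120)
stim[20:40] = 1.0

# Linear transform model: predicted BOLD = stimulus convolved with HRF
bold_pred = np.convolve(stim, hrf)[: len(stim)]
```

The point to note is how much temporal blurring the convolution introduces: it is the haemodynamics, not the neurons, that set the effective time resolution of the method.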
There is some question as to whether this model is realistic. Heeger and Ress admit that ‘these measurements … would be worthless if the linear transform were not a valid approximation’.[17] Some of the mechanics of blood flow challenge this view: signals can derive from larger veins, smaller vessels and capillaries, and different techniques emphasize these signal sources to differing degrees.
A newer technique, multivariate pattern analysis (MVPA), uses a form of Artificial Intelligence to analyse differences between brain-scans. When a person thinks about tennis, for example, the program can detect a corresponding signal in the pattern of activity among motor-area voxels.[18] From these patterns, it can make predictions about the person’s mental state. MVPA can also distinguish between large numbers of brain states and mental states; one pattern-recognition program can guess which of 1,000 pictures a person has just viewed from scans alone.[19]
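The essence of MVPA is to train an off-the-shelf classifier on voxel patterns and test how well it predicts the experimental condition on held-out trials. A minimal sketch under those assumptions, with synthetic ‘trials’ in place of real scans:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_voxels = 80, 500

# Synthetic voxel patterns: 40 trials of 'imagine tennis', 40 of 'rest'
X = rng.standard_normal((n_trials, n_voxels))
y = np.repeat([0, 1], n_trials // 2)
X[y == 1, :20] += 0.3        # weak signal confined to a few voxels

# Cross-validated decoding accuracy: with weak signal, typically only
# modestly above the 50 percent chance level (cf. the figures below)
acc = cross_val_score(LinearSVC(max_iter=10_000), X, y, cv=5).mean()
print(f"decoding accuracy: {acc:.2f}")
```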
Many of the specific neural patterns detected by MVPA are context-dependent. Bor, who is generally enthusiastic about the potential of mind-reading, admits that ‘studies that demonstrate that the technique makes accurate predictions are statistically significant, but that often means that the computer’s guess is a hair’s breadth above chance’. He admits that many studies that rely on MVPA to pick between two alternatives score around 60 percent accuracy, where a blind guess would give 50 percent.[20] He also points out that subjects could easily break the rules, and think about other targets, without the experimenter knowing.
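It is worth seeing how weak ‘statistically significant’ can be here. A quick, hypothetical calculation (the trial count is invented for illustration) shows that, with enough trials, 60 percent accuracy is comfortably ‘significant’ whilst remaining barely better than a coin flip:

```python
from scipy.stats import binomtest

n_trials = 200
hits = int(0.60 * n_trials)   # 60 percent correct, versus 50 percent chance

result = binomtest(hits, n_trials, p=0.5, alternative="greater")
print(f"p-value: {result.pvalue:.4f}")   # around 0.003: 'significant'
# Yet the classifier is still wrong on 4 out of every 10 trials.
```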
There are other, more serious problems with data selection and circular analysis. Kriegeskorte et al. noted that ‘… the more we search a noisy data set for active locations, the more likely we are to find spurious effects by chance’.[21] They cast doubt on the efficacy of widely-used statistical techniques for identifying voxels containing particular effects and for estimating the size of those effects. They concluded that slight distortions were common in the literature, although severe errors were less common, and that there was a lack of consensus on statistical methods, such as the definition of a large effect size.
More severe problems have been observed. There have been myriad claims that particular brain regions are involved in some very specific areas of social cognition like social distress (associated with the anterior cingulate cortex or ACC), ‘empathy-related manipulation’ (the ACC), or anxiety proneness (right cuneus). In 2009, Vul et al. caused a storm of controversy by claiming that the ‘impossibly’ strong effect sizes claimed in 55 articles were probably the result of analysis problems. They concluded that the investigators were using statistical techniques that were all but guaranteed to produce greatly inflated correlation estimates. They added that, ‘The underlying problems … appear to be common in fMRI research of many kinds—not just in studies of emotion, personality and social cognition’.[22] The bottom line seems to be that much of this work is exploratory and the meaning of the results less clear cut than is often claimed.
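The mechanism behind these ‘impossibly’ strong correlations is easy to reproduce. In the sketch below (a toy simulation, not a re-analysis of any actual study), voxels are first selected for their correlation with a behavioural measure, and the effect size is then estimated from those same voxels; pure noise duly yields a striking ‘brain-behaviour correlation’:

```python
import numpy as np

rng = np.random.default_rng(2)
n_subjects, n_voxels = 20, 10_000

brain = rng.standard_normal((n_subjects, n_voxels))  # pure noise 'scans'
behaviour = rng.standard_normal(n_subjects)          # unrelated measure

# Correlate every voxel with the behavioural score
bz = (brain - brain.mean(0)) / brain.std(0)
yz = (behaviour - behaviour.mean()) / behaviour.std()
corr = bz.T @ yz / n_subjects

# Non-independent (circular) step: keep the 50 best-correlated voxels,
# then estimate the effect from those very voxels
sel = np.argsort(corr)[-50:]
roi = brain[:, sel].mean(axis=1)
r = np.corrcoef(roi, behaviour)[0, 1]
print(f"apparent correlation: {r:.2f}")  # strikingly high, despite no true effect
```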
Experimental Limits
Another issue is the relationship between fMRI scan results and experimental protocols that investigate correlates of behaviour. Nichols and Newsome report that, ‘[e]quipped with sound conceptual frameworks originating in behaviour, neurophysiologists can then study underlying brain function at several levels’.[23] But this assumes, firstly, that a behaviour has been or can be defined in a ‘sound’ way, and, secondly, that said behaviour has a specific underlying brain function that can be reliably discriminated.
There is no doubt that, in some cases, useful results can be obtained. Kosslyn, for example, wished to see whether mental images of strongly emotive subjects affected the autonomic part of the nervous system, which controls involuntary bodily functions like heart rate, breathing, the blush response, and erections in males. He and his colleagues asked subjects to visualize babies with tumours over their eyes or bodies whilst undergoing a PET scan.[24] The control or comparison group had to visualize neutral images like trucks or lamps. When the ‘aversive’ brain scans were compared to the ‘neutral’ controls, it was found that, as predicted, the anterior insula, a major component of the autonomic system, was activated. So theories can be successfully tested using this equipment.
Limitations Due to the Function and Structure of the Brain
In a review article, Logothetis pointed out a number of limitations that have less to do with the technology or its statistics, and more to do with the brain itself.[25] He accepted the modular organization of different brain systems as an established fact, defining a module as ‘the classical neuronal circuits repeated iteratively within a structure (for example, the columns or swirling, slab-like tangential arrangements of the neocortex), as well as entities within which modules might be grouped by sets of dominating external connections’.[26] Even given distinguishable modularity, the issue is how far fMRI can go in revealing the ‘neuronal mechanisms’ of behaviour. Logothetis observed that the limitations of fMRI are mainly due to the fact that it reflects mass action, rather than to technological limits:
… only in certain special cases can [fMRI] be really useful for unambiguously selecting [a particular hypothesis], or for explaining the detailed neural mechanisms underlying the studied cognitive capacities.[27]
Logothetis then highlighted the widely-held assumption that brain structures can, in principle, be thought of as ‘information-processing’ entities with an input, a local processing capacity, and an output. He disputed this interpretation for areas of the cerebral cortex, calling the input-elaboration-output model an oversimplification: input to the cortex from the subcortical regions is weak, and there is a massive number of feedback connections. Quite often, too, BOLD outputs reflect not just rises in cortical activity, but changes in the balance between excitatory and inhibitory circuits. fMRI cannot easily distinguish between the two, and Logothetis pointed out that
the [thalamo-cortical] organization … evidently complicates both the precise definition of the conditions that would justify the assignment of a functional role of an ‘active’ area, and the interpretation of fMRI maps.[28]
He concluded that the ultimate limitations of fMRI had more to do with the circuitry and functional organization of the brain, as well as with experimental protocols that ignore this.
Logothetis’s thoughts in some respects echo those of William Uttal, perhaps the most ardent critic of neuroimaging. Uttal observes that fMRI studies represent the latest incarnation of a long search for localized regions of the brain that represent or control specific cognitive functions.[29] Uttal accepts that some processes are more or less localized, such as the sensory, motor, emotional and appetitive regions, but claims that this assumption may not hold for higher-level cognitive processes. One problem is the difficulty of defining specific ‘cognitive functions’; another is the
surprising lack of appreciation that the brain was a highly complex, heavily interconnected system displaying nonlinear properties that precluded its simple analysis into independent functional units.[30]
These observations demonstrate the limits of any methodology that restricts itself to a narrow set of assumptions. Clearly, some striking results have been obtained, but only at the cost of adherence to a model that has limited application within its problem domain.
Some Philosophical Assumptions
The Mind/Brain is Modular in Nature and ‘Mental Functions’ can be Localized
It is worth examining this issue a little further. Donaldson points out, in his apologia for fMRI studies, that ‘the best neuroimaging studies always aim to go beyond asking “where,” and try to answer questions about “what” the activity reflects—why the activity is occurring.’[31] He asserts that fMRI is for parsing and not just mapping the brain, and that mixed designs can do this:
The basic idea is that fMRI can be used to ‘parse’ rather than simply ‘map’ the brain. In linguistics, ‘parsing’ refers to the assignment of constituent structure to a sentence. Without adequate organization, a sentence is potentially ambiguous or even meaningless. An analogous task is required if neuroimaging is to provide psychologically meaningful data.[32]
In mixed designs, the idea is to measure the temporal profile of activity in the brain. This, according to Donaldson, allows the separation of transient and sustained activity. He suggests these two patterns of change in signals allow us to deduce functional differences in
what kind of role a region plays in supporting behaviour. Specifically, the distinction between transient and sustained signal changes maps onto a functional distinction between item (trial) related and state (task)-related processing, and provides one clear way of characterizing brain regions in functional terms …[33]
Essentially, whichever regions light up at specific times are deemed to be functionally necessary for a specific task.
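The logic of separating the two signal types can be caricatured in a few lines. In the sketch below (an illustrative toy, not Donaldson’s actual analysis pipeline), a sustained block regressor and a transient event regressor are built, convolved with a hemodynamic response, and fitted jointly by least squares; the estimated weights then indicate whether a simulated region behaves in a state-related or item-related way:

```python
import numpy as np
from scipy.stats import gamma

tr, n_scans = 1.0, 200
t = np.arange(0, 30, tr)
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6   # double-gamma HRF approximation

block = np.zeros(n_scans); block[50:150] = 1.0       # sustained (state/task-related)
events = np.zeros(n_scans); events[50:150:10] = 1.0  # transient (item/trial-related)

# Design matrix: both regressors convolved with the HRF
X = np.column_stack([np.convolve(r, hrf)[:n_scans] for r in (block, events)])

# Simulate a mostly 'state-related' region, then recover both weights jointly
rng = np.random.default_rng(3)
signal = X @ np.array([1.0, 0.2]) + 0.1 * rng.standard_normal(n_scans)
beta, *_ = np.linalg.lstsq(X, signal, rcond=None)
print(beta)   # large sustained weight, small transient weight
```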
Uttal, however, provides a number of reasons why this sort of programme may be limited.[34] It is difficult to know whether an active area is responsible for all processing of a task, or whether it is merely a critical line of communication between other areas. This sort of problem is recognized even in the popular literature. Mariette DiChristina, the editor-in-chief of Scientific American Mind, noted that ‘The more researchers may attempt to look at a single processing question, the more it turns out to be interrelated with many other things going on in the brain’.[35]
With higher-level cognitive processes like decision-making or problem solving, it is more likely that each region of the brain mediates many cognitive processes and that ‘each cognitive process is encoded by activity in widely distributed brain regions’.[36] Uttal’s is not an argument for functional homogeneity, as some of the holists proposed, but for some kind of compromise between those who believe the ‘mind’ can be decomposed into modular units and those who think we have to look at the system as a whole. The ‘workspace’ theory may also be regarded as working towards such a compromise.[37]
Conscious awareness also poses problems for localizers. Kanwisher, whilst claiming a strong causal relationship between awareness and the activation of local brain regions during sensory experiences, acknowledges that the activation of, say, the fusiform face area is not by itself enough to ‘explain’ awareness of a face. To illustrate this, she offers a thought experiment in which we imagine this specific neural system dissected out and isolated from the rest of the organism, yet still able to perform its function.[38] Intuitively, it seems unlikely that such an isolated system would be aware in any sense of the word. Kanwisher concludes that, to be aware, even localized functions would need to be able to receive ‘information’ from the rest of the brain and nervous system.
There is also the difficulty of localizing consciousness in a specific brain area, although parts of the brain, like the cerebellum, are hypothesized not to be conscious at all.[39] As we saw previously, human-type conscious awareness is currently supposed to be generally localizable in the cerebral cortex and thalamocortical loops. One of the reasons for supposing this is that various kinds of injuries that impair the function of these brain areas also disrupt consciousness.
Disorders that disrupt consciousness, like epilepsy, are associated with widespread disruption in the cortex and also in thalamic areas.[40] Comas and ‘persistent vegetative states’ can also be defined as global disorders of consciousness. However, a recent experiment demonstrated that a patient in a vegetative state following severe brain injury showed the same pattern of brain activity as non-damaged people when asked to imagine playing tennis or to imagine visiting all the rooms in her house.[41] The fact that a patient could respond in this way suggests that she was still aware in some sense.
Certain kinds of brain injury also cause a loss of consciousness. These can include damage to the cortex and also discrete injuries to midline subcortical structures. The reason is that such structures are associated with brain arousal, an example being the Reticular Activating System, which regulates waking and sleep; damage to the nuclei in this area can cause coma.[42] So we have a fair idea of the gross areas of the brain that are necessary to support waking awareness. The next question is: to what extent can the contents of consciousness, that is, mental contents, be ‘translated’ into ‘neural language’?
The Mind as Neural Code
There is currently a massive effort in neuroscience to unlock the neural code, that is, to translate the mind—which is conceived as a kind of software—to the functional neural language in which it is written. One neuroscientist, John Chapin, even stated that ‘if you’re a real neuroscientist, that’s the game you want to play’.[43] The idea that the mind can be understood in this way harks back to the computational and cybernetic models developed in the 1950s (chapter eight), and we have already seen that much data from neuroscience gets interpreted as ‘processing information’ and ‘representing’ data. Neuronal functions, too, lend themselves to interpretations in terms of information theory, as the constituents of perception (colours, edges, movements, etc.) are thought to be ‘represented’ as neural patterns. For example, visual objects have been theorized to be ‘represented’ via synchronized neurons in the visual cortex which detect specific features of visual objects that are encoded by temporal correlations.[44]
This sort of idea has been applied to many functions within the brain, the aim being to find a sort of ‘Rosetta stone’ that can match ‘mind language’ and ‘brain language’.[45] Ideally, one should be able to put mind language—mental states—parallel to brain states, and learn the translation rules between the two. To do this, as described above, one performs experiments, looks at the difference in the brain between tasks, and tries to determine what has changed in structure or function to glean the ‘neural’ counterpart of the subjective state.
The writings of Steven Rose are interesting in this regard, because whilst sympathetic to this idea—he coined the ‘Rosetta stone’ analogy—he is also honest about its limits. In his discussion of memory—a prime constituent of mental states—he notes a number of problems and unknowns that are not always apparent from reading other popular works.[46] He observes:[47]
• Problems with defining what constitutes a memory.
• Unknowns concerning the persistence of memory even after apparent forgetting or extinction.
• Problems with separating a ‘memory’ from the act of retrieving it, and with the ‘mental scanning’ for a memory.
• The partial recovery of memory over time after brain injuries, even following severe forms of amnesia.
• Problems with mapping temporal scales and regional distribution studies of memory onto molecular and cellular studies.
• Problems with the idea that memory is encoded via synaptic change, as in the theories of Donald Hebb, despite evidence for the truth of some of this theory.[48]
He concludes by admitting that
… we don’t know how memories are made, how and in what form they are stored in the brain (if ‘storage’ as in a filing cabinet or computer memory is even an appropriate way of speaking) or the processes by which they are retrieved.[49]
Many of these blanks will, no doubt, be filled in as neuroscience progresses, yet Rose observes deeper conceptual problems. Neuroimaging, he says, serves as a description rather than an explanation of brain function, and so can provide us with correlations but not, on its own, explanations. He also sees disarray between theoretical concepts at multiple levels, as psychological models of memory often seem to map poorly onto both neuroimaging and cellular/molecular studies. This, of course, is an indication of the necessary methodological pluralism of neuroscience, and one can argue for a ‘patchwork unity’ of methods,[50] but Rose seems to be saying that deeper conceptual revisions are needed:
Empiricism is not enough. Simply, we currently lack a theoretical framework within which such mountains of data can be accommodated. We are, it seems to me, still trapped within the mechanistic reductionist mind-set within which our science has been formed.[51]
Rose’s favoured alternative seems to be a form of dialectical materialism, such as that favoured by the Marxists, where different explanatory levels exist, even though everything is ultimately composed of matter. He also seems unwilling to break away from standard accounts of determinism, and is scornful of ‘holistic’ accounts. In my view, however, at least some of the problems of memory may require a holistic and non-mechanistic approach of some kind, but I do not know what form this revised theory of memory might take to be viable and to produce workable experimental results.
Representations, the Brain and the Missing Person
Many neuroscientists seem to take it for granted that minds can be conceptualized at least partly as a system of representations running on the brain. For example, Kosslyn writes that ‘information processing systems are organized in neural tissue’ and accepts that representation occurs in the brain: ‘… A “representation” is a physical state that serves as a repository of information, and a “buffer” stores representations.’[52] Logothetis also accepts that there is an ‘underlying neural representation’ of a stimulus.[53]
Kanwisher makes an argument for a type-token hypothesis, arguing that
awareness of a particular perceptual event requires not only activation of a representation of that attribute, but also individuation of that perceptual information as a distinct event.[54]
She observes that perceptual experience is made up of discrete objects that appear in particular spatial locations and at specific times. Because of this, she claims that activated perceptual attributes must become associated with ‘representations of specific objects and/or events in order to be experienced as fully conscious percepts’.[55] Kosslyn also invokes a very similar type-token theory.[56]
The main problem is that representations, like ‘illusions’, generally need an observer. If there are representations in the brain, then to what or to whom do they ‘represent’ the world? Representations presuppose consciousness. But mechanistic views like those espoused above attempt to speak of representations in the absence of consciousness, which, as Tallis observes, seems deeply illogical.[57]
The reason for this may well be ideological. I have already noted that researchers often redefine subjective experience in narrow, mechanistic and reductionistic terms to accommodate it within the current frames of neuroscientific explanation. Wallace even suggests that various cognitive terms, like ‘information’ and ‘representations’ became purged of their subjective content so they could be viewed as objective and mechanical processes in line with the ideology of the cognitive revolution of the 1950s.[58]
At first glance, these objections may seem to contradict reams of data from neuroscience that do seem to suggest that both ‘information-processing’ and ‘representations’ occur there. However, this is, I suggest, an example of the power of paradigm-led thinking.[59] Bennett and Hacker observe that the brain does not process symbols, information or representations. What happens is that researchers perform, say, fMRI experiments, and derive information from the results. But this, they observe, is not information the brain has or evidence for a brain language ‘any more than dendrochronological information about the severity of winters in the 1930s is written in the tree trunk in arboreal patois’.[60] In this view, ‘representations’, ‘information’ and ‘symbols’ constitute theoretical concepts or metaphors that theoretically frame the information derived from neuroimaging and other experiments. To ascribe them to the brain seems to me an exercise in literalism. In short, we are looking at manufactured knowledge and mistaking it for nature.
Mind-Readers, Bill Clinton and Metaphysics
Some of the limitations of the current approaches can be better understood by a thought experiment which imagines what would happen if the aspirations of the neural decoders become fulfilled. In doing so, I am deliberately ignoring a range of significant problems and limitations with the ‘decoding’ exercise.[61]
Let us suppose that they succeed in building a mind-reader that can, for example, stimulate a grandmother neuron and translate the pulses from this and the patterns in the visual cortex into an image of Bill Clinton.[62] Surely, this would prove that physicalism, the machine-code metaphor and representationalism were ‘true’? In what follows, I am going to be perverse and suggest that even if we accomplished such a feat, this on its own would still not resolve the argument in favour of these propositions. First of all, we should acknowledge what it would achieve.
(1) It would demonstrate the highly impressive pragmatic utility of neural coding theory, including the efficacy of representational metaphors when considered in the context of the enabling technology itself.
(2) It would demonstrate an extremely close relationship between at least some mental states and the ‘neural code’.
(3) It would strongly imply that we can understand the storage/retrieval (if those are the correct terms) of at least some memories in representational terms, and in terms of something like machine code, and it would constitute powerful evidence that these particular memories are not stored outside the brain, unless by some form of duplication.[63]
(4) It would constitute a powerful emotional argument for information physicalism being ‘true’.
Surely, such an achievement would mean that the mind/body problem had finally been solved, and that physicalism was an objective fact, rather in the way that the Earth ‘is’ really round, and it really does orbit the sun? Surely, it would mean that our minds ‘are’ nothing but machine code that use representations that run on biological machines?
Whilst I am willing to concede that such an achievement would at the very least invite a revision of certain metaphysics, I would assert that looking to such a development as either proof of physicalism or a ‘solution’ to the mind/body problem misconstrues the nature of the debate. To deal with the representational issue first: I have stated, along with Tallis, Bennett and Hacker, and others, that ‘representations’ can be seen as derived concepts and not things or processes inherent in neural tissue. But if we accept this, and the invention of such a machine, then how might we explain the picture of Bill Clinton on the TV screen? A potential answer can be found by recalling the constructed nature of light, and by taking a basically instrumentalist view of these occurrences (which is suggested by the strongly metaphorical nature of the ‘representation’ issue).
The physicist David Bohm, amongst others, observed that facts can be said to be manufactured. This is suggested by the Latin root of ‘fact’, facere, ‘to make’: a fact is, literally, that which has been made. Bohm goes on to say that:
… in a certain sense we ‘make’ the fact. That is to say, beginning with immediate perception of an actual situation, we develop the fact by giving it further order, form and structure … In classical physics, the fact was ‘made’ in terms of the order of planetary orbits … In quantum theory, the fact was ‘made’ in terms of the order of energy levels, quantum numbers, symmetry groups, etc.[64]
And in the current case, the fact is ‘made’ in terms of information-processing systems, representations, information-processing, etc. So each way of ‘making’ a fact involves weaving observations into a conceptual matrix that cannot really be said to be ‘real’, but is more a creative expression of the observing minds.
In addition, facts can only continue to be made if nature remains cooperative to the theoretical approach that is adopted. This was one of Feyerabend’s key criticisms against the adoption of any rigid methodology in science. Certain given approaches—such as the assertion that minds can be understood in terms of a ‘code’—only make sense if they are structured in a way that allows us to come up with usable results.[65] Now, it may be that nature will prove sufficiently cooperative to allow us to produce that image of Bill Clinton on the screen; but equally, the metaphor of the Rosetta stone might already be seen to have certain limits.
Today, such facts (neural correlates of subjective states) are ‘made’ by comparing subject reports with brain-scans, but the neural decoders would like to bypass this by producing a lexicon and grammar that allows us to ‘read’ neural states without having to ask subjects for their reports. But currently this cannot really be done, as an individual’s ‘neural code’ is often idiosyncratic, varied and can even change according to circumstances. So the facts that are created in neuroscience remain of a correlative nature, even though some correlations can be locally striking.
So in the absence of a neural base-code, the decoders would have to rely on matching the phenomenology of the perception to the neural bursts. Processing and translating this would also, presumably, involve some sort of computer program that could translate the pulses/neural patterns to a picture. Something like this may be possible if/when we develop technology that can decipher the neurological events in the visual cortex, which has specialized cortical cells that react selectively to different visual features. If one were ‘translating’ these, then one could presumably use a program on a powerful computer to ‘stick’ the various cellular reactions together and form a picture that way, but in the absence of a detailed binding theory, one would have to ‘cheat’ in order to stick this disparate data together. But even if all this could be done, I would still argue that any ‘representations’ that would result would be (1) manufactured and (2) in the eye of the beholder, namely, the experimenter.
Even more significant problems occur with the argument that this finally ‘solves’ the mind/body problem. The reason is that the mind/body problem seems better characterized as a metaphysical rather than an empirical dispute, and interpreting it as something that can be finally resolved empirically constitutes, I think, a category error. To understand why, we need to look again at the nature of scientific explanation and its relationship to technology.
Consider Newton’s laws of motion, which, ceteris paribus, successfully describe the motion of the planets. Newtonian theory has also proved very successful in technological feats like putting humans on the moon, and sending probes to the planets. There is no disputing that Newton’s theory works, and very well. There is also no question that it has predictive power, so, for example, one can predict the movements of the planets centuries hence. However, there is a significant difference between acknowledging predictive power and pragmatic utility and asserting that Newton’s theory is the objective truth.
Part of the reason is that Newtonian physics must imagine shibboleths in order to work, as Mach pointed out in the nineteenth century. In particular, Mach singled out Newton’s notions of absolute space and inertia because they were purely ‘thought thing[s] that cannot be pointed to in experience’.[66] Mach also opposed the idea that Newton’s laws could be applied universally, attacking ‘the conceptual monstrosity of absolute space’. And, as history demonstrated, it proved possible to reformulate ideas of space and time via relativity in a way that allowed previously unexplained anomalies, like the anomalous motion of Mercury’s perihelion, to be accommodated by the new theory.
The usual story here is that Newtonian physics was accommodated as a special case of relativity, which could potentially explain more, but it is possible to interpret what happened in a different way. Feyerabend argued strongly that the idea that new physical theories can explain more than older ones constitutes a sort of epistemological illusion. This is because new theories, ideologies and traditions (which is how Feyerabend characterized science) tend to create entirely new problem domains and are often not directly concerned with ‘solving’ the problems of the older ideology. Where older theories are accommodated, this happens in an ad hoc way or even simply via assertion.
So, in the case of the neural decoders, we have a programme predicated upon the assumption that it is appropriate to think of the ‘mind’ in terms of a sort of natural machine code running on wetware. The problem, then, is to ‘decode’ this natural neural ‘language’. A different sort of theory—say, Myers’ theory of the subliminal mind—would have little or nothing to say about this, and rightly so, because the two deal with different problem domains. Myers’ theory was concerned with co-ordinating a wide range of what today seem rather peculiar phenomena, and says nothing about neural codes. To be sure, there is a domain overlap, as today we have some people trying to explain, for example, Out of Body Experiences in representational and computational terms, but on the whole the theories do not cover the same ground and, moreover, have clashing metaphysics.
What I am trying to say is that it remains possible to conceptualize both the mind and consciousness in a number of possible ways, and the fact that one approach has produced even amazing pragmatic fruit does not necessarily negate alternative approaches, even, maybe, ones that at least appear to contradict the favoured approaches. Very often the issue is not who is right and who is wrong, but whether, how and why a theory is formulated in the first place, and for what purpose. This latter point is very difficult to see mainly because of the scientism implicit in much of cognitive science, which insists that to be viable, a theory must ape physics or reductive biology. As a result, theories of human nature that do not pass this supposed gold standard get written off as ‘unscientific’.
Finally, and most importantly, and despite a considerable emotional pull, such a development would not necessarily negate alternative interpretations of the mind/body link, simply because of the gap between direct experience and whatever neural pattern would be decoded. Zapping a grandmother neuron and producing an image of Bill Clinton on a TV screen would involve a number of things, but mainly it would be a translation of a pattern written in neural ‘language’. This would mean that, even if a fairly accurate reproduction of an experience were possible, it would still be derived from the neural pattern or code and not the experience itself. This is why it would be misleading to actually term this machine a ‘mind-reader’; a more accurate term would be brain decoder.
And the interpretation of the relationship between this code and subjective experiences would still be up for grabs. This point was recently made clear by Edward Kelly, who observed that even if we were able to divide the streams of conscious experience into a sequence of states and pair them with the corresponding physiological processes in the brain and find the perfect 1:1 correspondence, then we still would not have solved the mind/body problem. For a start, even perfect correlation would not necessarily entail identity, and he asserts that ‘it remains at least conceptually possible that minds and brains are distinct’.[67] Fantasies of a mind-reader or cerebroscope, I suggest, posit just such a 1:1 correspondence, and are useful because they demonstrate the limitations as well as the strengths of the technology.
The conclusion I draw from this is that (1) the mind/brain problem and consciousness probably cannot be solved by empirical means alone, and (2) that the emergence of such technology would not necessarily preclude alternative views of the mind and consciousness, any more than the success of Newtonian mechanics precluded alternative ways of conceiving space and time. The danger is that we become so enamoured of our amazing technologies that alternatives get trampled in an ideological stampede.
Conclusion
Reviewing the neuroimaging work, and neuroscience in general, is a strange business. Superficially, the literature gives one the impression that most problems are or will be solved via standard, mechanistic science; that sometime in this century the ‘neural code’ will be cracked and that we will understand everything—including consciousness—in terms of models that are more-or-less current. Then one comes across a little tear in the tapestry’s apparently seamless fabric—an empirical unknown here, or a conceptual difficulty there—which can soon widen into a gaping hole, if one persists in poking it.
The primary issue remains an unwavering commitment to reductionistic neurobiology and the Cartesian method, and the proposed alternatives within conventional science may be said to constitute extensions and elaborations of this basic metaphysics rather than a substantial revision. Additionally, we have the assumption that these gaps will somehow get solved by additional knowledge and more results. But knowledge is not understanding, and whilst we have unprecedented levels of the former, we have a deficit of the latter. And whilst holistic biology can take us a little further, it provides no obvious way of reconciling subjective experiences, even with this expanded knowledge, if it means a reduction to the flatland holism of science. It is time to look at subjective views, actual and whole.
1 Scruton, 2009.
2 Carter, 2008a, commenting on Minsky, 1986.
3 Rose, 2006.
4 The Churchlands, in Blackmore, 2005.
5 Carter, 2008b.
6 Kellis et al., 2010.
7 Carter, 2008b.
8 Blackmore, 2003, p. 228.
9 Heeger & Ress, 2002.
10 Blackmore, 2003, p. 229.
11 Based upon a summary in Vul et al., 2009.
12 Hassabis et al., 2009.
13 Hassabis et al., 2009, p. 549.
14 I would note that the ‘engram’ interpretations of such findings, tempting though they may be, still fall foul of two of Gauld’s (in Kelly et al., 2007) objections to trace theories, namely (1) that representations require interpreters who are already knowledgeable enough to comprehend them, and (2) that to understand membership of a category one must possess a network of further concepts, and that it is not clear where such would be stored (see Kelly et al., 2007, p. 269). See also my discussion later in the chapter.
15 Cabeza & Nyberg, 2000. See also Uttal, 2005.
16 Heeger & Ress, 2002, p. 143.
17 Heeger & Ress, 2002, p. 143.
18 Bor, 2010.
19 Bor, 2010.
20 Bor, 2010, p. 57.
21 Kriegeskorte et al., 2010.
22 Vul et al., 2009, p. 274.
23 Nichols & Newsome, 1999, p. 36.
24 Kosslyn, 1999.
25 Logothetis, 2008.
26 Logothetis, 2008, p. 869.
27 Logothetis, 2008, p. 870.
28 Logothetis, 2008, p. 873.
29 Uttal, 2005.
30 Uttal, 2005, p. 3.
31 Donaldson, 2004, p. 442.
32 Donaldson, 2004, p. 442.
33 Donaldson, 2004, p. 442.
34 Uttal, 2001; 2005.
35 Editorial in Scientific American Mind, July/August 2010, p. 1.
36 Uttal, 2005, p. 5.
37 Kelly et al., 2007.
38 Kanwisher, 2001.
39 Mormann & Koch, 2007.
40 Mormann & Koch, 2007.
41 Owen et al., 2006; in Mormann & Koch, 2007.
42 Mormann & Koch, 2007.
43 Interviewed by Horgan, 2004.
44 Engel et al., 1997.
45 Rose, 2006.
46 For example, reading Carter, 2002, one gets the impression that virtually all of these problems are solved, or will be soon.
47 Rose, 2006, chapter eight.
48 ‘Memories do not seem to be encoded within these specific changes in connectivity, but rather, over time, other brain regions and connections become involved. The imaging studies add to these problems of interpretation by revealing the dynamic nature of the cortical processes involved in both learning and recalling …’ (Rose, 2006, p. 211).
49 Rose, 2006, p. 212.
50 Craver, 2007.
51 Rose, 2006, p. 215.
52 Kosslyn, 1999, p. 1283.
53 Logothetis, 2008, p. 871.
54 Kanwisher, 2001, p. 107. See Kanwisher, 1987; 1991, for a fuller exposition of this theory.
55 Kanwisher, 2001, p. 107.
56 Kosslyn, 1999.
57 Tallis, 2004, p. 91.
58 Wallace, 2000.
59 Griffin, 1998.
60 Bennett & Hacker, 2003, p. 153.
61 See Horgan, 2004; Rose, 2006, chapter eight, for a discussion of these problems and limitations.
62 This particular suggestion is prompted by the discovery by Fried and Koch of a neuron in a patient’s brain that only fired when he was shown pictures of Bill Clinton! Described in Horgan, 2004.
63 So it would constitute, in my view, persuasive evidence that this particular memory was not stored in a sort of ‘cosmic reservoir’ a la William James and/or Henri Bergson, unless one supposes—somewhat redundantly—that a duplicate were made. But see chapter five for theoretical problems with this.
64 Bohm, 1983, p. 142.
65 Feyerabend, 1978.
66 Quoted in Burke, 1985, p. 295.
67 Kelly et al., 2007, p. 27.