capacity limits and consciousness Capacity limits refer to limits on how much information an individual can process at one time.
1. A brief history
2. Objective and subjective sources of evidence of capacity limits and consciousness
3. Capacity limits, type 1: information processing, attention, and consciousness
4. Capacity limits, type 2: working memory, primary memory, and consciousness
5. Reconciling limits in attention, primary memory, and consciousness
Early in the history of experimental psychology, it was suggested that capacity limits are related to the limits of conscious awareness. For example, James (1890) described limits in how much information can be attended at once, in a chapter on *attention; and he described limits in how much information can be held in mind at once, in a chapter on *memory. In the latter chapter, he distinguished between primary memory, the trailing edge of the conscious present comprising the small amount of information recently experienced and still held in mind; and secondary memory, the vast amount of information that one can recollect from previous experiences, most of which is not in conscious awareness at any one time. Experimental work supporting these concepts was already available to James from contemporary researchers, including Wilhelm Wundt, who founded the first experimental psychology laboratory. In modern terms, primary and secondary memory are similar to *working memory and long-term memory although, according to most investigators, working memory is a collection of abilities used to maintain information for ongoing tasks and only part of it is associated with consciousness.
In the late 1950s and early 1960s, the concept of capacity limits began to receive further clarification with the birth of the discipline known as cognitive psychology. Broadbent (1958), in a seminal book, described work from investigators of the period indicating tight limits on attention. For example, individuals who received different spoken messages in both ears at the same time were unable to listen fully to more than one of these messages at a particular moment. Miller (1956) described work indicating limits on how long a list can be before people are no longer able to repeat it back. Adults reach this limit for lists longer than 5–9 items, with the manageable list length within that range depending on the materials and the individuals involved. One of the most important questions we must address is how attention and primary memory limits are related to one another. Are they different and, if so, which one indicates how much information is in conscious awareness? This question is taken up below.
Philosophers worry about a distinction between objective sources of information used to study capacity limits, and subjective sources of information used to understand consciousness. For objective information, one gives directions to research participants and then collects and analyses their responses to particular types of stimuli, made according to those directions. The only kind of subjective information is one’s own experience of what it is like to be conscious (aware) of various things or ideas. People usually agree that it is not possible to be conscious of a large number of things at once, so it makes sense to hypothesize that the limits on consciousness and the limits on information processing have the same causes. However, logically speaking, this need not be the case.
Certain experimental methods serve as our bridge between subjective and objective sources of information. If an experimental participant claims to be conscious of something, we generally credit the individual with being conscious of it. Often, we verify this by having the participant describe the information. For example, it is not considered good methodology to ask an individual, ‘Did you hear that tone?’ One could believe one is aware of a tone without really hearing the intended tone. It is considered better methodology to ask, ‘Do you think a tone was presented?’ On some trials, no tone is presented, and one can compare the proportion of ‘yes’ responses on tone-present and tone-absent trials. Nevertheless, an individual could be conscious of some information but could still say ‘no’, depending on how incomplete information is interpreted.
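The logic of comparing tone-present and tone-absent trials can be made concrete with a standard signal-detection calculation. The sketch below is illustrative only (the trial counts are invented); it derives the hit and false-alarm rates and the sensitivity index d′ conventionally computed from them.

```python
from statistics import NormalDist

def detection_sensitivity(hits, signal_trials, false_alarms, noise_trials):
    """Compare 'yes' responses on tone-present vs tone-absent trials.

    Returns the hit rate, the false-alarm rate, and the standard
    signal-detection sensitivity index d' = z(hit) - z(false alarm).
    Rates of exactly 0 or 1 would need a correction before z-scoring.
    """
    hit_rate = hits / signal_trials
    fa_rate = false_alarms / noise_trials
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return hit_rate, fa_rate, z(hit_rate) - z(fa_rate)

# Invented data: 40 'yes' on 50 tone-present trials, 10 'yes' on 50 tone-absent.
print(detection_sensitivity(40, 50, 10, 50))  # d' is about 1.68
```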
There seem to be solid demonstrations that individuals can process some information outside the focus of attention and, presumably, outside conscious awareness. One demonstration is found, for example, in early work on selective listening (Broadbent 1958). Only one message could be comprehended at once but a change in the speaker’s voice within the unattended message (say, from a male to a female speaker) automatically recruited attention away from the attended message and to the formerly unattended one. The evidence was obtained by requiring that the attended message be repeated. In that type of task, breaks in repetition typically are found to occur right after the voice changes in the unattended message, and participants in that situation often note the change or react to it and can remember it.
There has been less agreement about whether higher-level semantic information can be processed outside attention. Moray (1959) found that people sometimes noticed their own name when it was included in the unattended message, implying that the name had to have been identified before it was attended. However, one important question is whether the individuals who noticed actually were focusing their attention steadily on the message that they were supposed to repeat. When Conway et al. (2001) examined this for individuals in the highest and the lowest quartiles of ability on a working memory span task (termed high- and low-span, respectively), they found that only 20% of the high-span individuals noticed their names, whereas 65% of the low-span individuals noticed their names. This outcome suggests that the low-span individuals may have noticed their names only because their attention often wandered away from the assigned message, or was not as strongly focused on it as in the case of high-span individuals, making attention available for the supposedly unattended message. This finding undercuts the idea that one’s name is automatically processed without attention, in which case high-span individuals should have noticed their names at least as often as low-span individuals.
There are some clear cases of processing without consciousness. In *blindsight, a particular effect of one kind of brain damage, an individual claims to be unable to see one portion of the visual field but still accurately points to the location of an object in that field, if required to do so (even though such patients often find the request illogical). Processing without consciousness of the processed object seems to occur.
In normal individuals, one can find *priming effects in which one stimulus influences the interpretation of another one, without awareness of the primed stimulus. This occurs, for example, if a priming word is presented with a very short interval before a masking pattern is presented, and is followed by a target word that the participant must identify, such as the word ‘dog’. This target word can be identified more quickly if the preceding priming word is semantically related (e.g. ‘cat’) than if it is unrelated (e.g. ‘brick’), even on trials in which the participant denies having seen the priming word at all and later shows no memory of it.
The question arises as to how much can be processed not only without conscious awareness, but also without attention. In the above cases, participants attended to the location of the stimulus in question, even when they remained unaware of the stimulus itself. As in the early work using selective listening procedures, work on vision by Ann Treisman and others has suggested that individuals can process simple physical features automatically, whereas attention is needed to process combinations of those features. This has been investigated by presenting arrays in which participants had to find a specific target item (e.g. a red square) among other, distracting items with a common feature distinguishing them from the target (e.g. all red circles, or all green squares) or among distracting items that shared multiple features with the target (e.g. some red circles and some green squares on the same trial). In the former case (a common distinguishing feature), searching for the target is rapid no matter how many distracting objects are included in the array. This suggests that participants can abstract physical features from many objects at once, and that an item with a unique feature automatically stands out (e.g. the only square or the only red item in the array). However, when the target can be distinguished from the distracting objects only by the particular conjunction of features (e.g. the only red square), searching for the target occurs slowly and depends on how many distracting objects are present. Thus, it takes focused attention, and presumably conscious awareness, to find an object with a particular conjunction of features. This attention must be applied relatively slowly, to just one object or a small number of objects at a time. Further research along these lines (Chong and Treisman 2005) suggests that it is possible for the brain automatically to compute statistical averages of features, such as the mean size of a circle in an array of circles of various sizes.
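The qualitative contrast between flat and rising search functions can be sketched with a toy model. Nothing here comes from Treisman’s work beyond the distinction itself: feature search is treated as parallel (response time independent of array size) and conjunction search as a serial, self-terminating scan, with invented timing constants.

```python
import random

def search_time_ms(n_items, mode, base=400, per_item=50):
    """Toy response-time model for visual search.

    'feature': the unique feature pops out, so time is flat in n_items.
    'conjunction': items are inspected one by one until the target is
    found, so time grows with array size. All constants are invented.
    """
    if mode == "feature":
        return base
    return base + per_item * random.randint(1, n_items)  # serial scan

for n in (4, 8, 16):
    mean_conj = sum(search_time_ms(n, "conjunction") for _ in range(10_000)) / 10_000
    print(n, search_time_ms(n, "feature"), round(mean_conj))
```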
An especially interesting procedure that illustrates a limit on attention and awareness is *change blindness. If one views a scene and the view is briefly interrupted, as by a cut in a motion picture, something in the scene can change and, often, people will not notice the change. For example, in a scene of a table setting, square napkins might be replaced with triangular napkins without people noticing. This appears to occur because only a small number of objects can be attended at once; unattended objects are processed to a level that allows the entire scene to be perceived and comprehended in some holistic sense, but not to a level that allows individual details of most objects to be registered in memory.
The previous discussion implies that attention is closely related to conscious awareness (although for differences between the two see ATTENTION AND AWARENESS). Next, consider the other main faculty of the mind that may be linked to consciousness, namely primary memory. Here, the case may not be as straightforward as one would think. Miller (1956) showed that people can repeat lists of about seven items, but are they conscious of all seven at once? Not necessarily. Miller also showed that people can improve performance by grouping items together to form larger units called chunks. For example, it may be much more difficult to remember a list of nine random letters than it is to remember the nine letters IRS–FBI–CIA, because one may recognize acronyms for three prominent United States agencies in the latter case and therefore may have to keep in mind only three chunks. Once the grouping has occurred, however, it is not clear if one is simultaneously aware of all of the original elements in the set, in this example including I, R, S, F, B, C, and A. Miller did not specifically consider that the seven random items that a person might remember could be memorable only because new, multi-item chunks are formed on the spot. For example, if one remembers the telephone number 548-8634, one might have accomplished that by quickly memorizing the digit groups 548, 86, and 34. After that there might be simultaneous awareness of the three chunks of information, but not necessarily of the individual digits within each chunk.
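The bookkeeping behind the telephone-number example can be written out directly; the grouping pattern below is simply the one described in the text.

```python
def chunk(digits, pattern=(3, 2, 2)):
    """Group a digit string into multi-digit chunks, e.g. 5488634 -> 548, 86, 34."""
    chunks, i = [], 0
    for size in pattern:
        chunks.append(digits[i:i + size])
        i += size
    return chunks

groups = chunk("5488634")
print(groups, "->", len(groups), "chunks to hold, not", len("5488634"), "digits")
```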
People have a large number of strategies and resources at their disposal to remember word lists and other stimuli, and these strategies and resources together make up working memory. For example, they may recite the words silently to themselves, and this covert rehearsal process may take attention only for its initiation (Baddeley 1986). Rehearsal might have to be prevented before one can fairly measure the conscious part of working memory capacity (i.e. the primary memory of William James). The chunking process also may have to be controlled so that one knows how many items or chunks are being held. A large number of studies appearing to meet those requirements seem to suggest that most adults can retain 3–5 items at once (Cowan 2005). This is the limit, for example, in a type of method in which an array of coloured squares is briefly presented, followed after a short delay by a second array identical to the first or differing in the colour of one square, to be compared to the first array (Luck and Vogel 1997). A similar limit of 3–5 items occurs for verbal lists when one prevents effective rehearsal and grouping by presenting items rapidly with an unpredictable ending point of the list, or when one requires that a single word or syllable be repeated over and over during presentation of the list in order to suppress rehearsal.
What is essential in such procedures is that the research participant has insufficient time to group items together to form larger, multi-item chunks (Cowan 2001). Another successful technique is to test free recall of lists of multi-item chunks that have a known size because they were taught in a training session before the recall task. Chen and Cowan (2005) did that with learned pairs and singletons, and obtained similar results (3–5 chunks recalled).
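In the array-comparison task just described, performance is usually converted into an estimated number of retained items. A widely used estimate, associated with Cowan (2001), assumes that if k of the N array items are in memory, a change is detected whenever the changed item happens to be among them; the sketch below implements that estimate, with the caveat that variant formulas exist for other probe procedures.

```python
def cowan_k(n_items, hit_rate, false_alarm_rate):
    """Estimate working memory capacity k from change-detection data.

    Assumes k of n_items are retained, so a changed item is detected
    with probability k/n_items plus guessing; rearranging gives
    k = N * (hits - false alarms), the estimate associated with
    Cowan (2001). Variants exist for other probe procedures.
    """
    return n_items * (hit_rate - false_alarm_rate)

# Invented data: arrays of 8 squares, 60% hits, 15% false alarms.
print(cowan_k(8, 0.60, 0.15))  # about 3.6 items, within the 3-5 range
```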
A limit in primary memory of 3–5 items seems to be analogous to the limits in attention and conscious awareness. The latter are assumed to be general in that attention to, and conscious awareness of, stimuli in one domain detracts from attention and awareness in another domain. For example, listening intently to music would not be a good idea while one is working as an air traffic controller because attention would sometimes be withdrawn from details of the air traffic display to listen to parts of the music. Similarly, in the case of primary memory, Morey and Cowan (2004) found that reciting a random seven-digit list detracted from carrying out the two-array comparison procedure of Luck and Vogel that has just been described, whereas reciting a known seven-digit number (the participant’s telephone number) had little effect.
It is not clear where the 3–5-chunk working memory capacity limit comes from, or how it may help the human species to survive. Cowan (2001, 2005) summarized various authors’ speculations on these matters. The capacity limit may occur because each object or chunk in working memory is represented by the concurrent firing of neurons signalling various features of that object. Neural circuits for all objects represented in working memory must be activated in turn within a limited period and, if too many objects are included, there may be contamination between the different circuits representing two or more objects. Capacity limits may be beneficial in highlighting the most important information to guide actions in the right direction; representation of too much at once might result in actions that were based on confusions or were dangerously slow in emergency situations. Some mathematical analyses suggest that forming chunks of 3–5 items allows optimal searching for the items. To acquire complex tasks and skills, chunking can be applied in a reiterative fashion to encompass, in principle, any amount of information.
A major question that remains is how to reconcile the different capacity limits of attention vs primary memory. People generally can attend to only one message at a time, whereas they can keep several items at once in primary memory. Can these somehow represent compatible limits on conscious awareness? Perhaps so. There are several possible resolutions of the findings with attention vs primary memory. It might be that only a single message can be attended and understood because several ideas in the message must be held in primary memory at once, long enough for them to be integrated into a coherent message. Alternatively, the several (3–5) ideas that can be held in primary memory at once may have to be sufficiently uniform in type to be integrated into a coherent scene, in effect becoming like one message. According to this account, one would have difficulty remembering, say, one tone, one colour, one letter, and one shape at the same time because an integration of these events may not be easy to form. The more severe limit for paying attention, compared to the primary memory limit, might also occur because the items to be attended are fleeting, whereas items to be held in working memory theoretically might be entered into attention one at a time, or at least at a limited rate, and must be made available long enough for that to happen (Cowan 2005).
We at least know that individuals who can hold more items in primary memory seem to be many of the same individuals who are capable of carrying out difficult attention tasks. Two such tasks are (1) to go against what comes naturally by looking in the direction opposite to where an object has suddenly appeared, called anti-saccade eye movements (Kane et al. 2004); and (2) efficiently to filter out irrelevant objects so that only the relevant ones have to be retained in working memory (e.g. Conway et al. 2001). However, one study suggests that the capacity of primary memory and the ability to control attention are less than perfectly correlated across individuals (Cowan et al. 2006), and that both of these traits independently contribute to intelligence. It may be that the focus of attention and conscious awareness need to be flexible, expanding to apprehend a field of objects or contracting to focus intensively on a difficult task such as making an anti-saccade movement. If so, attention and primary memory tasks should interfere with one another to some extent, and this seems to be the case (Bunting et al. 2008). There may also be additional skills that help primary memory but not attention, or vice versa. The present field of study of memory and attention and their relation to conscious awareness is exciting, but there is much left to learn.
See also AUTOMATICITY; GLOBAL WORKSPACE THEORY
This chapter was prepared with funding from NIH grant number R01-HD 21338.
NELSON COWAN
Baddeley, A. D. (1986). Working memory.
Broadbent, D. E. (1958). Perception and communication.
Bunting, M. F., Cowan, N., and Colflesh, G. H. (2008). ‘The deployment of attention in short-term memory tasks: tradeoffs between immediate and delayed deployment’. Memory and Cognition, 36.
Chen, Z. and Cowan, N. (2005). ‘Chunk limits and length limits in immediate recall: a reconciliation’. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31.
Chong, S. C. and Treisman, A. (2005). ‘Statistical processing: computing the average size in perceptual groups’. Vision Research, 45.
Conway, A. R. A., Cowan, N., and Bunting, M. F. (2001). ‘The cocktail party phenomenon revisited: the importance of working memory capacity’. Psychonomic Bulletin and Review, 8.
Cowan, N. (2001). ‘The magical number 4 in short-term memory: a reconsideration of mental storage capacity’. Behavioral and Brain Sciences, 24.
—— (2005). Working Memory Capacity.
——, Fristoe, N. M., Elliott, E. M., Brunner, R. P., and Saults, J. S. (2006). ‘Scope of attention, control of attention, and intelligence in children and adults’. Memory and Cognition, 34.
James, W. (1890). The Principles of Psychology.
Kane, M. J., Hambrick, D. Z., Tuholski, S. W., Wilhelm, O., Payne, T. W., and Engle, R. W. (2004). ‘The generality of working-memory capacity: a latent variable approach to verbal and visuospatial memory span and reasoning’. Journal of Experimental Psychology: General, 133.
Luck, S. J. and Vogel, E. K. (1997). ‘The capacity of visual working memory for features and conjunctions’. Nature, 390.
Miller, G. A. (1956). ‘The magical number seven, plus or minus two: some limits on our capacity for processing information’. Psychological Review, 63.
Moray, N. (1959). ‘Attention in dichotic listening: affective cues and the influence of instructions’. Quarterly Journal of Experimental Psychology, 11.
Morey, C. C. and Cowan, N. (2004). ‘When visual and verbal memories compete: evidence of cross-domain limits in working memory’. Psychonomic Bulletin and Review, 11.
Cartesian dualism See DUALISM
cerebellum See BRAIN
change blindness Change blindness, a term coined by Ronald Rensink and colleagues (Rensink et al. 1997), refers to the striking difficulty people have in noticing large changes to scenes or objects. When a change is obscured by some disruption, observers tend not to detect it, even when the change is large and easily seen once the observer has found it. Many types of disruption can induce change blindness, including briefly flashed blank screens (e.g. Rensink et al. 1997), visual noise or ‘mudsplashes’ flashed across a scene (O’Regan et al. 1999, Rensink et al. 2000), eye movements (e.g. Grimes 1996), eye blinks (O’Regan et al. 2000), motion picture cuts or pans (e.g. Levin and Simons 1997), and real-world occlusion events (e.g. Simons and Levin 1998). It can also occur in the absence of a disruption, provided that the change occurs gradually enough that it does not attract attention (Simons et al. 2000).
Change blindness is interesting because the missed changes are surprisingly large: a failure to notice one of ten thousand people in a stadium losing his hat would be unsurprising, but the failure to notice that a stranger you were talking to was replaced by another is startling. People incorrectly assume that large changes automatically draw attention, whereas evidence for change blindness suggests that they do not. This mistaken intuition, known as change blindness blindness, is evidenced by the finding that people consistently overestimate their ability to detect change (Levin et al. 2000).
The phenomenon of change blindness has challenged traditional assumptions about the stability and robustness of internal representations of visual scenes. People can encode and retain huge numbers of scenes and recognize them much later (Shepard 1967), suggesting the possibility that people form extensive representations of the details of scenes. Change blindness appears to contradict this conclusion—we fail to notice large changes between two versions of a scene if the change signal is obscured or eliminated. The phenomenon has led some to question whether internal representations are even necessary to explain our conscious experience of our visual world (e.g. Noë 2005).
Change blindness has received increasing attention over the past decades, in part as a result of the advent of readily accessible image editing software. Limits on our ability to detect changes have been investigated using simple stimuli since the 1950s; most early studies documented the inability to detect changes to letters and words when the changes occurred during an eye movement. More recent change blindness research built on these paradigms, but extended them to more complex and naturalistic stimuli. In a striking demonstration, participants studied photographs of natural scenes and then performed a memory test (Grimes 1996). While they viewed the images, some details of the photographs were changed, and if the changes occurred during eye movements, people failed to notice them. Even large changes, such as two people exchanging their heads, went unseen. This demonstration, coupled with philosophical explorations of the tenuous link between visual representations and visual awareness, sparked a resurgence of interest in change detection failures as well as paradigms to study change blindness.
The flicker task is perhaps the best-known change blindness paradigm (Rensink et al. 1997). In this task, an original image and a modified image alternate rapidly with a blank screen between them until observers detect the change. The inserted blank screen makes this task difficult by hiding the change signal. Most changes are found eventually, but they are rarely detected during the first cycle of alternation, and some changes are not detected even after one minute of alternation. Unlike earlier studies of change blindness, the flicker task allows people to experience the extent of their change blindness. That is, people can experience a prolonged inability to detect changes, but once they find the change, the change seems trivial to detect. The method also allows a rigorous exploration of the factors that contribute to change detection and change blindness in both simple and complex displays.
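The structure of the flicker paradigm is simple enough to sketch. In the outline below, `show` and `respond` are hypothetical stand-ins for the display routine and response check of a real experiment (they are not library calls), and the durations are of the order used by Rensink et al. (1997).

```python
import time

def flicker_trial(original, modified, show, respond,
                  image_ms=240, blank_ms=80, max_s=60):
    """Alternate original and modified images, each followed by a blank,
    until the observer reports the change or time runs out.

    show() and respond() are hypothetical stand-ins for a real display
    and response routine; the ~240 ms image / ~80 ms blank cycle is of
    the order used by Rensink et al. (1997).
    """
    start = time.monotonic()
    while time.monotonic() - start < max_s:
        for image in (original, modified):
            show(image)
            time.sleep(image_ms / 1000)
            show(None)            # the blank screen hides the change signal
            time.sleep(blank_ms / 1000)
            if respond():         # observer reports finding the change
                return time.monotonic() - start
    return None                   # change blindness persisted to timeout
```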
Change blindness also occurs for other tasks and with more complex displays, even including real-world interactions (see Rensink 2002 for a review). For example, in one experiment an experimenter approached a pedestrian (the participant) to ask for directions, and during the conversation, two other people interrupted them by carrying a door between them. Half of the participants failed to notice when the original experimenter was replaced by a different person during the interruption (Simons and Levin 1998). Change blindness has also been studied using simple arrays of objects, complex photographs, and motion pictures. It occurs when changes are introduced during an eye movement, a flashed blank screen, an eye blink, a motion picture cut, or a real-world interruption.
Despite the generalizability of change blindness from simple laboratory tasks to real-world contexts, it does not represent our normal visual experience. In fact, people generally do detect changes when they can see the change occur. Changes typically produce a detectable signal that draws attention. In demonstrations of change blindness, this signal is hidden or eliminated by some form of interruption or distraction. In essence, the interruption ‘breaks’ the system by eliminating the mechanism typically used to detect changes. In so doing, change blindness provides a tool to better understand how the visual system operates. Just as visual *illusions allow researchers to study the default assumptions of our visual system, change blindness allows us to better understand the contributions of attention and memory to conscious perception.
In the absence of a change signal, successful change detection requires observers to encode, retain, and then compare information about potential change targets across views. That is, observers must have both a representation of the pre-change object and a successful comparison of it to the post-change object. If any step in the process fails—encoding, retention, or comparison—change detection fails. Evidence for change blindness suggests that at least some steps in this process are not automatic—when the change signal is eliminated, people do not consistently detect changes. A central question in the literature is which part or parts of this process fail when change blindness occurs.
Change blindness has often been taken as evidence for the sparseness or absence of internal representations. If we lacked internal representations of previously viewed scenes, there would be no way to compare the current view to the pre-change scene, so changes would go unnoticed due to a representation failure. However, it is crucial to note that change blindness does not logically require the absence of representations. Change blindness could occur even if the pre-change scene were represented with photographic precision, provided that observers failed to compare the relevant aspects of the scene before and after the change. In fact, even when observers represent both the initial and changed version of a scene, they sometimes fail to detect changes (Mitroff et al. 2004).
The completeness of our visual representations in the face of evidence for change blindness remains an area of extensive investigation. Some researchers use evidence of change blindness to support the claim that our visual experience does not need to rely on complete or detailed representations. In essence, our visual representations can be sparse, provided that we have just-in-time access to the details we need to support conscious experience of our visual world. Others argue that our representations are, in fact, fairly detailed. Such detailed representations might underlie our long-term memory for scenes, but change blindness occurs either due to a disconnection between these representations and the mechanisms of change detection, or because of a failure to compare the relevant aspects of the pre- and post-change scenes. The differences in these explanations for change blindness have implications both for the mechanisms of visual perception and representation and for the link between representations and consciousness (see Simons 2000, Simons and Rensink 2005 for detailed discussion of these and other possible explanations for change blindness).
In addition to spurring research on visual representations and their links to awareness, change blindness has also yielded interesting insights into the relationship between *attention and awareness. In the presence of a visual disruption, the change signal no longer attracts attention and change blindness ensues. Even when observers know that a change is occurring, they cannot easily find it and have to shift attention from one scene region to another looking for change. Once they attend to the change, it becomes easy to detect. This finding, coupled with evidence that changes to attended objects (e.g. the ‘centre of interest’ of the scene) are more easily detected, led to the conclusion that attention is necessary for successful change detection. However, attention to an object does not always eliminate change blindness. Observers may fail to detect changes to the central object in a scene. Thus, attention to the changing object may not be sufficient for detection.
In the past several years, the phenomenon of change blindness also has attracted the attention of neuroscientists who have used *functional brain imaging techniques to investigate the neural underpinnings of change blindness and change detection. Most of these studies suggest a role for both *frontal and parietal (particularly right parietal) cortex in change detection (e.g. Beck et al. 2006). Other imaging studies have examined the role of focused attention in change detection as well as the correlation between conscious change perception and neural activation. Such neuroimaging measures hold promise as a way to explore the mechanisms of change detection by providing additional measures of change detection even in the absence of a behavioural response or a conscious report of change. In that way, they might help determine the extent to which visual representation and change detection occur in the absence of awareness.
In summary, change blindness has become increasingly central to the field of visual cognition, and through its study we can improve our understanding of visual representation, attention, scene perception, and the neural correlates of consciousness.
XIAOANG IRENE WAN, MICHAEL S. AMBINDER, AND DANIEL J. SIMONS
Beck, D. M., Muggleton, N., Walsh, V., and Lavie, N. (2006). ‘Right parietal cortex plays a critical role in change blindness’. Cerebral Cortex, 16.
Grimes, J. (1996). ‘On the failure to detect changes in scenes across saccades’. In Akins, K. (ed.) Vancouver Studies in Cognitive Science: Vol. 5. Perception.
Levin, D. T. and Simons, D. J. (1997). ‘Failure to detect changes to attended objects in motion pictures’. Psychonomic Bulletin and Review, 4.
——, Momen, N., Drivdahl, S. B., and Simons, D. J. (2000). ‘Change blindness blindness: the metacognitive error of overestimating change-detection ability’. Visual Cognition, 7.
Mitroff, S. R., Simons, D. J., and Levin, D. T. (2004). ‘Nothing compares 2 views: change blindness results from failures to compare retained information’. Perception and Psychophysics, 66.
Noë, A. (2005). ‘What does change blindness teach us about consciousness?’ Trends in Cognitive Sciences, 9.
O’Regan, J. K., Rensink, R. A., and Clark, J. J. (1999). ‘Change-blindness as a result of “mudsplashes”’. Nature, 398.
——, Deubel, H., Clark, J. J., and Rensink, R. A. (2000). ‘Picture changes during blinks: looking without seeing and seeing without looking’. Visual Cognition, 7.
Rensink, R. A. (2002). ‘Change detection’. Annual Review of Psychology, 53.
——, O’Regan, J. K., and Clark, J. J. (1997). ‘To see or not to see: the need for attention to perceive changes in scenes’. Psychological Science, 8.
——, ——, —— (2000). ‘On the failure to detect changes in scenes across brief interruptions’. Visual Cognition, 7.
Shepard, R. N. (1967). ‘Recognition memory for words, sentences and pictures’. Journal of Verbal Learning and Verbal Behavior, 6.
Simons, D. J. (2000). ‘Current approaches to change blindness’. Visual Cognition, 7.
—— and Levin, D. T. (1998). ‘Failure to detect changes to people during a real-world interaction’. Psychonomic Bulletin and Review, 5.
—— and Rensink, R. A. (2005). ‘Change blindness: past, present, and future’. Trends in Cognitive Sciences, 9.
——, Franconeri, S. L., and Reimer, R. L. (2000). ‘Change blindness in the absence of a visual disruption’. Perception, 29.
Charles Bonnet syndrome The natural scientist and philosopher Charles Bonnet (1720–93) wrote on topics as diverse as parthenogenesis, worm regeneration, metaphysics, and theology. Influencing Gall’s system of organology and the 19th century localizationist approach to cerebral function, Bonnet viewed the brain as an ‘assemblage of different organs’, specialized for different functions, with activation of a given organ, e.g. the organ of vision, responsible not only for visual perception, but also visual imagery and visual memory—a view strikingly resonant with contemporary cognitive neuroscience. His theory of the brain and its mental functions was first outlined in his ‘Analytical essay on the faculties of the soul’ (Bonnet 1760), in which passing mention was made of a case he had encountered that was so bizarre he feared no one would believe it. The case concerned an elderly man with failing eyesight who had experienced a bewildering array of silent visions without evidence of memory loss or mental illness. The visions were attributed by Bonnet to the irritation of fibres in the visual organ of the brain, resulting in *hallucinatory visual percepts indistinguishable from normal sight. Bonnet was to present the details of the case in full at a later date but never returned to it other than as a footnote in a later edition of the work identifying the elderly man as his grandfather, the magistrate Charles Lullin (1669–1761).
Bonnet’s description of Lullin, although brief, was taken up by several 18th and 19th century authors as the paradigm of hallucinations in the sane. However, it would likely have become little more than a historical curiosity, were it not for the chance finding of Lullin’s sworn, witnessed testimony among the papers of an ophthalmological collector and its publication in full at the beginning of the 20th century (Flournoy 1902). In 1756, three years after a successful operation to remove a cataract in his left eye, Lullin developed a gradual deterioration of vision in both eyes which continued despite an operation in 1757 to remove a right eye cataract (his visual loss was probably related to age-related macular disease). The hallucinations occurred from February to September of 1758 when he was aged 89. They ranged from the relatively simple and mundane (e.g. storms of whirling atomic particles, scaffolding, brickwork, clover patterns, a handkerchief with gold disks in the corners, and tapestries) to the complex and bizarre (e.g. framed landscape pictures adorning his walls, an 18th century spinning machine, crowds of passers-by in the fountain outside his window, playful children with ribbons in their hair, women in silk dresses with inverted tables on their heads, a carriage of monstrous proportions complete with horse and driver, and a man smoking—recognized as Lullin himself). Some 250 years on, it has become clear that experiences identical to Lullin’s are reported by c.10% of patients with moderate visual loss.
In 1936, Georges de Morsier, then a recently appointed lecturer in neurology at the University of Geneva, honoured Bonnet’s account of Lullin’s hallucinations by giving the name ‘Charles Bonnet syndrome’ to the clinical scenario of visual hallucinations in elderly individuals with eye disease (de Morsier 1936). However, de Morsier made it clear that eye disease was incidental, not causal, and in 1938 removed it entirely from the definition. In his view, Lullin’s hallucinations, and those of patients like him, were the result of an unidentified degenerative brain disease which did not cause dementia as it remained confined to the visual pathways. While the idea of honouring Bonnet was widely embraced by the medical community, de Morsier’s definition of the syndrome was not. Parallel uses of the term have emerged, some following de Morsier, others describing complex visual hallucinations with insight (ffytche 2005). Yet the use of the term that has found most favour describes an association of visual hallucinations with eye disease, reflecting mounting evidence for the pathophysiological role played by loss of visual inputs (see Burke 2002). While arguments continue over the use of the term, what is beyond dispute is that visual hallucinations are relatively common and occur in patients able to faithfully report their hallucinated experiences without the potential distortion of memory loss or mental illness. As foreseen by Bonnet, the hallucinations in such patients, and their associated brain states, provide important insights into the neural correlates of the contents of consciousness.
DOMINIC H. FFYTCHE
Bonnet, C. (1760). L’Essai analytique sur les facultés de l’âme.
Burke, W. (2002). ‘The neural basis of Charles Bonnet hallucinations: a hypothesis’. Journal of Neurology, Neurosurgery and Psychiatry (London), 73.
de Morsier, G. (1936). ‘Les automatismes visuels. (Hallucinations visuelles rétrochiasmatiques)’. Schweizerische Medizinische Wochenschrift, 66.
ffytche, D. H. (2005). ‘Visual hallucinations and the Charles Bonnet syndrome’. Current Psychiatry Reports, 7.
Flournoy, T. (1902). ‘Le cas de Charles Bonnet: hallucinations visuelles chez un vieillard opéré de la cataracte’. Archives de Psychologie (Geneva), 1.
Chinese room argument John Searle’s Chinese room argument goes against claims that computers can really think; against ‘strong *artificial intelligence’ as Searle (1980a) calls it. The argument relies on a thought experiment. Suppose you are an English speaker who does not speak a word of Chinese. You are in a room, hand-working a natural language understanding (NLU) computer program for Chinese. You work the program by following instructions in English, using data structures, such as look-up tables, to correlate Chinese symbols with other Chinese symbols. Using these structures and instructions, suppose you produce responses to written Chinese input that are indistinguishable from responses that might be given by a native Chinese speaker. By processing uninterpreted formal symbols (‘syntax’) according to rote instructions, like a computer, you pass for a Chinese speaker. You pass the *Turing test for Chinese. Still, it seems, you would not know what the symbols meant: you would not understand a word of the Chinese. The same, Searle concludes, goes for computers. And since ‘nothing depends on the details of the program’ or the specific psychological processes being ‘simulated’, the same objection applies against all would-be examples of artificial intelligence. That is all they ever will be, simulations; not the real thing.
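A toy version of the room’s rulebook makes the structure of the argument vivid. The lookup table below is invented for illustration; the point is that nothing in the program represents what any string means, yet following it faithfully can produce fluent-looking output.

```python
# A toy rulebook: uninterpreted symbol strings in, symbol strings out.
# The mappings are invented for illustration; nothing in the program
# represents what any string means.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",
    "今天天气怎么样？": "今天天气很好。",
}

def room(symbols: str) -> str:
    """Return the response the instructions dictate for this input.

    The processor (human or machine) need only match shapes;
    Searle's question is whether anything here understands Chinese.
    """
    return RULEBOOK.get(symbols, "对不起，我不明白。")

print(room("你好吗？"))   # fluent-looking output, pure symbol shuffling
```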
In his seminal presentations, Searle speaks of ‘intentionality’ (1980a) or ‘semantics’ (1984), but to many it has seemed, from Searle’s reliance on ‘the first person point of view’ (1980b) in the thought experiment and in fending off objections, that the argument is really about consciousness. The experiment seems to show that the processor would not be conscious of the meaning of the symbols; not that the symbols or the processing would be meaningless … unless it is further assumed that meaning requires consciousness thereof. Searle sometimes bridles at this interpretation. Against Daniel Dennett, for example, Searle complains, ‘he misstates my position as being about consciousness rather than about semantics’ (1999:28). Yet, curiously, Searle himself (1980b, in reply to Wilensky 1980) admits he equally well ‘could have made the argument about pains, tickles, and anxiety’, when pains, tickles, and undirected anxiety are not intentional states. They have no semantics! The similarity of Searle’s scenario to ‘absent *qualia’ scenarios generally, and to Ned Block’s (1978) Chinese nation example in particular, further supports the thought that the Chinese room concerns consciousness, in the first place, and *intentionality only by implication (insofar as intentionality requires consciousness). What the experiment, then, would show is that hand-working an NLU program for Chinese would not give rise to any sensation or first-person impression of understanding; that no such computation could impart—not meaning itself—but phenomenal consciousness thereof.
Practical objections to Searle’s thought experiment concern whether human-level conversational ability can be computationally achieved at all, by look-up table or otherwise; and if it could be, whether such a program could really be hand-worked in real time, as envisaged. Such objections raise questions about computability and implementation that do not directly concern consciousness. More theoretical replies—granting the ‘what if’ of the thought experiment—however, either go directly to consciousness themselves; or else Searle’s responses immediately do.
The systems reply says the understander would not be the person in the room, but the whole system (person, instruction books, look-up tables, etc.). Even if meaning does require consciousness, according to this reply, from the fact that the person in the room would not be conscious of the meanings of the Chinese symbols, it does not follow that the system would not be; perhaps the system, consequently, understands. Many proponents of this reply, additionally, however, would not grant the supposition that meaning requires consciousness. According to conceptual role semantics, inscriptions in the envisaged scenario get their meanings from their inferential roles in the overall process. If they instance the right roles, they are meaningful; unconsciousness notwithstanding. The causal theory of reference (inspiring the robot reply), on the other hand, says that inscriptions acquire meaning from causal connections with the actual things they refer to. If computations in the room were caused by real-world inputs, as in perception; and if they caused real-world outputs, such as pursuit or avoidance (put the room in the head of a robot); then the inscriptions and computations would be about the things perceived, avoided, etc. This would give the inscriptions and computations semantics, unconsciousness notwithstanding.
Searle responds to the systems and robot replies, initially (1980a), by tweaking the thought experiment. Suppose you internalize the system (memorize the instructions and data structures) and take on all the requisite conceptual roles yourself. Put the room in a robot’s head to supply causal links. Still, Searle argues, this would not make the symbols meaningful to you, the processor, so as to give them ‘intrinsic intentionality’ (Searle 1980b) besides the ‘derived’ or ‘observer relative intentionality’ they have from the conventional Chinese meanings of the symbols. Derived intentionality, Searle explains, exists ‘only in the minds of beholders like the Chinese speakers outside the room’: this too is curious. Inferential roles and causal links are not observer-relative in this way. Inferential roles are system-internal by definition. On the other hand, if I avoid real dangers to myself (in the robot head) by heeding written warnings in Chinese, the understanding would effectively seem to be mine (or the robot’s) independently of any outside observers. Searle’s response in either case is to take the ‘in’ of ‘intrinsic’ to mean, not just objectively contained, but subjectively experienced; not just physically in but metaphysically inward or ‘ontologically subjective’: to take it in a way that seems to suppose meaning requires consciousness thereof. Searle’s later advocacy of ‘the Connection Principle’ (1990b) can be viewed as an attempt to discharge this supposition.
According to Searle’s Connection Principle, ‘ascription of an unconscious intentional phenomenon to a system implies the phenomenon is in principle accessible to consciousness’. As he goes on to explain it, unconscious psychological phenomena do not actually have meaning while unconscious ‘for the only occurrent reality’ of that meaning ‘is the shape of conscious thoughts’ (1992). Searle, however, remains vague about what these ‘shapes’ are and how their subjective and qualitative natures (on which he insists) could be meaning constitutive in ways that objective factors like inferential roles and causal links (as he argues) cannot be. Nor is this the only reason Searle’s Connection Principle has won few adherents. Psychological phenomena such as ‘unconscious *priming’ (e.g. subjects previously exposed to the word-pair ‘ocean–moon’ being subsequently more likely to respond ‘Tide’ when asked to name a laundry detergent) seem to show that unconscious mental states and processes do have intentionality while unconscious, since they are subject to semantic effects while unconscious. Furthermore, the supposition of unconscious intentionality has proved scientifically fruitful. ‘Cognitive’ theories such as Noam Chomsky’s theories of language and David Marr’s theories of vision suppose the existence of unconscious representations (e.g. language rules, and preliminary visual sketches) which are intentional (about the grammars they rule, or scenes preliminarily sketched) but not accessible to introspection as Searle’s Connection Principle requires. In contrast, the view of the mental that Searle endorses in accord with his Principle—according to which ‘the actual ontology of mental states is a first-person ontology’ and ‘the mind consists of qualia, so to speak, right down to the ground’ (1992)—seems, to many, scientifically regressive, a ‘regress to the Cartesian vantage point’ (Dennett 1987) of dualism.
Of the Chinese room, Searle says, the ‘point of the story is to remind us of a conceptual truth that we knew all along’ (1988), that ‘syntax is not sufficient for semantics’ (1984). But the insufficiency of syntax in motion (playing inferential roles) and in situ (when causally connected) is hardly a conceptual truth we knew all along; these are empirical claims the experiment has to support (Hauser 1997). The support offered would seem to be the intuition that such processing would not make the processor conscious of the meanings of the symbols. But, of course, it is supposed to be an understanding program, not a consciousness thereof (or introspective access) program: if the link between intentionality and consciousness his Connection Principle articulates were not already being presupposed, it seems Searle’s famous argument against ‘strong AI’ would go immediately wrong. However much Searle might like his Chinese room example to make a case against AI that stands independently of this ‘conceptual connection’ (Searle 1992), then, it seems extremely doubtful that it does.
See also COGNITION, UNCONSCIOUS; CONSCIOUSNESS, CONCEPTS OF; DUALISM; INTENTIONALITY; QUALIA; SUBJECTIVITY
LARRY S. HAUSER
Block, N. (1978). ‘Troubles with functionalism’. In Savage, C. W. (ed.) Perception and Cognition: Issues in the Foundations of Psychology.
Dennett, D. (1987). The Intentional Stance.
Hauser, L. (1997). ‘Searle’s Chinese box: debunking the Chinese room argument’. Minds and Machines, 7.
Preston, J. and Bishop, M. (eds) (2001). Views into the Chinese Room: New Essays on Searle and Artificial Intelligence.
Searle, J. R. (1980a). ‘Minds, brains, and programs’. Behavioral and Brain Sciences, 3.
—— (1980b). ‘Intrinsic intentionality’. Behavioral and Brain Sciences, 3.
—— (1984). Minds, Brains and Science.
—— (1988). ‘Minds and brains without programs’. In Blakemore, C. (ed.) Mindwaves.
—— (1990a). ‘Is the brain’s mind a computer program?’ Scientific American, 262(1).
—— (1990b). ‘Consciousness, explanatory inversion, and cognitive science’. Behavioral and Brain Sciences, 13.
—— (1992). The Rediscovery of the Mind.
—— (1999). The Mystery of Consciousness.
Wilensky, R. (1980). ‘Computers, cognition and philosophy’. Behavioral and Brain Sciences, 3.
cocktail party effect See ATTENTION; CAPACITY LIMITS AND CONSCIOUSNESS
cognition, unconscious The unconscious mind was one of the most important ideas of the 20th century, influencing not just scientific and clinical psychology but also literature, art, and popular culture. Sigmund Freud famously characterized the ‘discovery’ of the unconscious as one of three unpleasant truths that humans had learned about themselves: the first, from Copernicus, that the Earth was not the centre of the universe; the second, from Darwin, that humans are just animals after all; and the third, ostensibly from Freud himself, that the conscious mind was but the tip of the iceberg (though Freud apparently never used this metaphor himself), and that the important determinants of experience, thought, and action were hidden from conscious awareness and conscious control.
In fact, we now understand that Freud was not the discoverer of the unconscious (Ellenberger 1970). The concept had earlier roots in the philosophical work of Leibniz and Kant, and especially that of Herbart, who in the early 19th century introduced the concept of a limen, or threshold, which a sensation had to cross in order to be represented in conscious awareness. A little later, Helmholtz argued that conscious perception was influenced by unconscious inferences made as the perceiver constructs a mental representation of a distal stimulus. In 1868, while Freud was still in short trousers, the Romantic movement in philosophy, literature, and the arts set the stage for Hartmann’s Philosophy of the Unconscious (1868), which argued that the physical universe, life, and individual minds were ruled by an intelligent, dynamic force of which we had no awareness and over which we had no control. And before Freud was out of medical school, Samuel Butler, author of Erewhon, drew on Darwin’s theory of evolution to argue that unconscious memory was a universal property of all organized matter.
Nevertheless, consciousness dominated the scientific psychology that emerged in the latter part of the 19th century. The psychophysicists, such as Weber and Fechner, focused on mapping the relations between conscious sensation and physical stimulation. The structuralists, such as Wundt and Titchener, sought to analyse complex conscious experiences into their constituent (but conscious) elements. James, in his Principles of Psychology, defined psychology as the science of mental life, by which he meant conscious mental life—as he made clear in the Briefer Course, where he defined psychology as ‘the description and explanation of states of consciousness as such’. Against this background, Breuer and Freud’s assertion, in the early 1890s, that hysteria is a product of repressed memories of trauma, and Freud’s 1900 topographical division of the mind into conscious, preconscious, and unconscious systems, began to insinuate themselves into the way we thought about the mind.
On the basis of his own observations of hysteria, fugue, and hypnosis, James understood, somewhat paradoxically, that there were streams of mental life that proceeded outside conscious awareness. Nevertheless, he warned (in a critique directed against Hartmann) that the distinction between conscious and unconscious mental life was ‘the sovereign means for believing what one likes in psychology, and of turning what might become a science into a tumbling-ground for whimsies’. This did not mean that the notion of unconscious mental life should be discarded; but it did mean that any such notion should be accompanied by solid scientific evidence. Unfortunately, as we now understand, Freud’s ‘evidence’ was of the very worst sort: uncorroborated inferences, based more on his own theoretical commitments than on anything his patients said or did, coupled with the assumption that the patient’s resistance to Freud’s inferences was all the more proof that they were correct—James’s ‘psychologist’s fallacy’ writ large. Ever since, the challenge for those who are interested in unconscious mental life has been to reduce the size of the tumbling-ground by tightening up the inference from behaviour to unconscious thought.
Unfortunately, the scientific investigation of unconscious mental life was sidetracked by the behaviourist revolution in psychology, initiated by Watson and consolidated by Skinner, which effectively banished consciousness from psychological discourse, and the unconscious along with it. The ‘cognitive revolution’ of the 1960s, which overthrew behaviourism, began with research on *attention, short-term memory, and imagery—all aspects of conscious awareness. The development of cognitive psychology led ultimately to a rediscovery of the unconscious as well, but in a form that looked nothing like Freud’s vision. As befits an event that took place in the context of the cognitive revolution, the rediscovery of the unconscious began with cognitive processes—the processes by which we acquire knowledge through perception and learning; store knowledge in memory; use, transform, and generate knowledge through reasoning, problem-solving, judgement, and decision-making; and share knowledge through language.
The first milestone in the rediscovery of the unconscious mind was a distinction between *automatic and controlled processes, as exemplified by various phenomena of perception and skilled reading. For example, the perceiver automatically takes distance into account in inferring the size of an object from the size of its retinal image (this is an example of Helmholtz’s ‘unconscious inferences’). And in the *Stroop effect, subjects automatically process the meaning of colour words, which makes it difficult for them to name the incongruent colour of the ink in which those words are printed. In contrast to controlled processes, automatic processes are inevitably evoked by the appearance of an effective stimulus; once evoked, they are incorrigibly executed, proceeding to completion in a ballistic fashion; they consume little or no attentional resources; and they can be performed in parallel with other cognitive activities. While controlled processes are performed consciously, automatic processes are unconscious in the strict sense that they are both unavailable to introspective access, known only through inference, and involuntary.
It is one thing to acknowledge that certain cognitive processes are performed unconsciously. As noted earlier, such a notion dates back to Helmholtz, and was revived by Chomsky, at the beginning of the cognitive revolution, to describe the unconscious operation of syntactic rules of language. But it is something else to believe that cognitive *contents—specific percepts, memories, and thoughts—could also be represented unconsciously. However, evidence for just such a proposition began to emerge with the discovery of spared *priming and source *amnesia in patients with the amnesic syndrome secondary to damage to the hippocampus and other subcortical structures. This research, in turn, led to a distinction between two expressions of episodic memory, or memory for discrete events: explicit memory entails conscious recollection, usually in the form of recall or recognition; by contrast, implicit memory refers to any effect of a past event on subsequent experience, thought, or action (Schacter 1987; see AUTONOETIC CONSCIOUSNESS).
Preserved priming in amnesic patients showed that explicit and implicit memory could be dissociated from each other: in this sense, implicit memory may be thought of as unconscious memory. Similar dissociations have now been observed in a wide variety of conditions, including the anterograde and retrograde amnesias produced by electroconvulsive therapy, general *anaesthesia, conscious sedation by benzodiazepines and similar drugs, *dementias such as Alzheimer’s disease, the forgetfulness associated with normal ageing, posthypnotic amnesia, and the ‘functional’ or ‘psychogenic’ amnesias associated with the psychiatric syndromes known as the dissociative disorders, such as ‘hysterical’ fugue and *dissociative identity disorder (also known as multiple personality disorder).
Implicit memory refers to the influence of a past event on the person’s subsequent experience, thought, or action in the absence of, or independent of, the person’s conscious recollection of that event. This definition can then serve as a model for extending the cognitive unconscious to cognitive domains other than memory. Thus, implicit perception refers to the influence of an event in the current stimulus environment, in the absence of the person’s conscious perception of that event (Kihlstrom et al. 1992). Implicit perception is exemplified by so-called subliminal perception (see PERCEPTION, UNCONSCIOUS), as well as the *blindsight of patients with lesions in striate cortex. It has also been observed in conversion disorders (such as ‘hysterical’ blindness); in the anaesthesias and negative *hallucinations produced by hypnotic suggestion; and in failures of conscious perception associated with certain attentional phenomena, such as unilateral neglect, dichotic listening, parafoveal vision, *inattentional blindness, repetition blindness, and the *attentional blink. In each case, subjects’ task performance is influenced by stimuli that they do not consciously see or hear—the essence of unconscious perception.
Source amnesia shades into the phenomenon of implicit *learning, in which subjects acquire knowledge, as displayed in subsequent task performance, but are not aware of what they have learned. Although debates over unconscious learning date back to the earliest days of psychology, the term implicit learning was more recently coined in the context of experiments on *artificial grammar learning (Reber 1993). In a typical experiment, subjects were asked to memorize a list of letter strings, each of which had been generated by a set of ‘grammatical rules’. Despite being unable to articulate the rules themselves, they were able to discriminate new grammatical strings from ungrammatical ones at better-than-chance levels. Later experiments extended this finding to concept learning, covariation detection, *sequence learning, learning the input–output relations in a dynamic system, and other paradigms. In source amnesia, as an aspect of implicit episodic memory, subjects have conscious access to newly acquired knowledge, even though they do not remember the learning experience itself. In implicit learning, newly acquired semantic or procedural knowledge is not consciously accessible, but nevertheless influences the subjects’ conscious experience, thought, and action.
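To make the artificial grammar paradigm concrete, the sketch below implements a toy finite-state grammar of the general kind used in such experiments. The particular states, arcs, and letters are invented for illustration and are not Reber’s (1993) original grammar; the point is only that ‘grammatical’ strings are those that can be generated by a walk through the state graph, and that lures are strings the graph cannot generate.

```python
import random

# A toy finite-state grammar in the style of artificial grammar
# learning experiments. States, arcs, and letters are invented for
# illustration; this is not Reber's (1993) original grammar.
GRAMMAR = {
    0: [('T', 1), ('V', 2)],
    1: [('P', 1), ('T', 3)],
    2: [('X', 2), ('V', 3)],
    3: [('X', 4), ('S', None)],   # None marks the accepting exit
    4: [('P', 2), ('S', None)],
}

def generate_string(max_len=8):
    """Emit a 'grammatical' string via a random walk through the grammar."""
    while True:
        state, letters = 0, []
        while state is not None and len(letters) < max_len:
            letter, state = random.choice(GRAMMAR[state])
            letters.append(letter)
        if state is None:          # reached the exit within the length cap
            return ''.join(letters)

def is_grammatical(s):
    """Depth-first check of whether the grammar can generate s exactly."""
    def walk(state, i):
        if i == len(s):
            return state is None
        if state is None:
            return False
        return any(letter == s[i] and walk(nxt, i + 1)
                   for letter, nxt in GRAMMAR[state])
    return walk(0, 0)

# Study phase: memorize grammatical strings. Test phase: classify new
# strings, e.g. the ungrammatical lure 'TXXVS', as grammatical or not.
study_list = [generate_string() for _ in range(20)]
print(study_list[:5])
print(is_grammatical(study_list[0]), is_grammatical('TXXVS'))
```

In the experiments, above-chance discrimination of new grammatical strings from lures, in the absence of any ability to state the rules, is the signature of implicit learning.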
There is even some evidence for unconscious thought, defined as the influence on experience, thought, or action of a mental state that is neither a percept nor a memory, such as an idea or an image (Kihlstrom et al. 1996). For example, when subjects confront two problems, one soluble and the other not, they are often able to identify the soluble problem, even though they are not consciously aware of the solution itself. Other research has shown that the correct solution can generate priming effects, even when subjects are unaware of it. Because the solution has never been presented to the subjects, it is neither a percept nor a memory; because it has been internally generated, it is best considered as a thought. Phenomenologically, implicit thought is similar to the *feeling of knowing someone we cannot identify further, or the experience when words seem to be on the tip of the tongue; it may also be involved in intuition and other aspects of creative problem-solving.
With the exception of implicit thought, all the phenomena of the cognitive unconscious are well accepted, although there remains considerable disagreement about their underlying mechanisms. For example, it is not clear whether explicit and implicit memory are mediated by separate brain systems, or whether they reflect different aspects of processing within a single *memory system. The theoretical uncertainty is exacerbated by the fact that most demonstrations of implicit perception and memory involve repetition priming that can be based on an unconscious perceptual representation of the prime, and the extent of unconscious semantic priming, especially in the case of implicit perception, has yet to be resolved. One thing that is clear is that there are a number of different ways to render a percept or memory unconscious; the extent of unconscious influence probably depends on the particular means chosen.
Occasional claims to the contrary notwithstanding, the cognitive unconscious revealed by the experiments of modern psychology has nothing in common with the dynamic unconscious of classic psychoanalytic theory (Westen 1999). To begin with, its contents are ‘kinder and gentler’ than Freud’s primitive, infantile, irrational, sexual, and aggressive ‘monsters from the Id’; moreover, unconscious percepts and memories seem to reflect the basic architecture of the cognitive system, rather than being motivated by conflict, anxiety, and repression. Further, the processes by which emotions and motives are rendered unconscious seem to bear no resemblance to the constructs of psychoanalytic theory. This is not to say that emotional and motivational states and processes cannot be unconscious. If percepts, memories, and thoughts can be unconscious, so can feelings and desires. Of particular interest is the idea that stereotypes and prejudices can be unconscious, and affect the judgements and behaviours even of people who sincerely believe that they have overcome such attitudes (Greenwald et al. 2002).
Mounting evidence for the role of automatic processes in cognition, and for the influence of unconscious percepts, memories, knowledge, and thoughts, has led to a groundswell of interest in unconscious processes in learning and thinking. For example, many social psychologists have extended the concept of *automaticity to very complex cognitive processes as well as simple perceptual ones—a trend so prominent that automaticity has been dubbed ‘the new unconscious’ (Hassin et al. 2005). An interesting characteristic of this literature has been the claim that automatic processing pervades everyday life to the virtual exclusion of conscious processing—‘the automaticity of everyday life’ and ‘the unbearable automaticity of being’ (e.g. Bargh and Chartrand 1999). This is a far cry from the two-process theories that prevail in cognitive psychology, and earlier applications of automaticity in social psychology, which emphasized the dynamic interplay of conscious and unconscious processes. Along the same lines, Wilson has asserted the power of the ‘adaptive unconscious’ in learning, problem-solving, and decision-making (Wilson 2002)—a view popularized by Gladwell as ‘the power of thinking without thinking’ (Gladwell 2005). Similarly, Wegner has argued that conscious will is an illusion, and that the true determinants of conscious thoughts and actions are unconscious (Wegner 2002). For these theorists, automaticity replaces Freud’s ‘monsters from the Id’ as the third unpleasant truth about human nature. Where Descartes asserted that consciousness, including conscious will, separated humans from the other animals, these theorists conclude, regretfully, that we are automatons after all (and it is probably a good thing, too).
This stance, which verges on *epiphenomenalism, or at least conscious inessentialism, partly reflects the ‘conscious shyness’ of psychologists and other cognitive scientists, living as we still do in the shadow of functional behaviourism (Flanagan 1992)—as well as a sentimental attachment to a crypto-Skinnerian situationism among many social psychologists (Kihlstrom 2008). But while it is clear that consciousness is not necessary for some aspects of perception, memory, learning, or even thinking, it is a stretch too far to conclude that the bulk of cognitive activity is unconscious, and that consciousness plays only a limited role in thought and action. ‘Subliminal’ perception appears to be analytically limited, and earlier claims for the power of *subliminal advertising were greatly exaggerated (Greenwald 1992). Assertions of the power of implicit learning are rarely accompanied by a methodologically adequate comparison of conscious and unconscious learning strategies—or, for that matter, a properly controlled assessment of subjects’ conscious access to what they have learned. Similarly, many experimental demonstrations of automaticity in social behaviour employ very loose definitions of automaticity, confusing the truly automatic with the merely incidental. Nor are there very many studies using techniques such as Jacoby’s process-dissociation procedure to actually compare the impact of automatic and controlled processes (Jacoby et al. 1997; see MEMORY, PROCESS-DISSOCIATION PROCEDURE).
So, despite the evidence for unconscious cognition, reports of the death of consciousness appear to be greatly exaggerated. At the very least, consciousness gives us something to talk about; and it seems to be a logical prerequisite to the various forms of social learning by precept, including sponsored teaching and the social institutions (like universities) that support it, which in turn make cultural evolution the powerful force that it is.
JOHN F. KIHLSTROM
Bargh, J. A. and Chartrand, T. L. (1999). ‘The unbearable automaticity of being’. American Psychologist, 54.
Ellenberger, H. F. (1970). The Discovery of the Unconscious: the History and Evolution of Dynamic Psychiatry.
Flanagan, O. (1992). Consciousness Reconsidered.
Gladwell, M. (2005). Blink: the Power of Thinking Without Thinking.
Greenwald, A. G. (1992). ‘New Look 3: Unconscious cognition reclaimed’. American Psychologist, 47.
——, Banaji, M. R., Rudman, L. A., Farnham, S. D., Nosek, B. A., and Mellott, D. S. (2002). ‘A unified theory of implicit attitudes, stereotypes, self-esteem, and self-concept’. Psychological Review, 109.
Hassin, R. R., Uleman, J. S., and Bargh, J. A. (eds) (2005). The New Unconscious.
Jacoby, L. L., Yonelinas, A. P., and Jennings, J. M. (1997). ‘The relation between conscious and unconscious (automatic) influences: a declaration of independence’. In Cohen, J. and Schooler, J. (eds) Scientific Approaches to Consciousness.
Kihlstrom, J. F. (2008). ‘The automaticity juggernaut’. In Baer, J. et al. (eds) Psychology and Free Will.
——, Barnhardt, T. M., and Tataryn, D. J. (1992). ‘Implicit perception’. In Bornstein, R. F. and Pittman, T. S. (eds) Perception Without Awareness: Cognitive, Clinical, and Social Perspectives.
——, Shames, V. A., and Dorfman, J. (1996). ‘Intimations of memory and thought’. In Reder, L. M. (ed.) Implicit Memory and Metacognition.
Reber, A. S. (1993). Implicit Learning and Tacit Knowledge: an Essay on the Cognitive Unconscious.
Schacter, D. L. (1987). ‘Implicit memory: history and current status’. Journal of Experimental Psychology: Learning, Memory, and Cognition, 13.
Wegner, D. M. (2002). The Illusion of Conscious Will.
Westen, D. (1999). ‘The scientific status of unconscious processes: is Freud really dead?’ Journal of the American Psychoanalytic Association, 47.
Wilson, T. D. (2002). Strangers to Ourselves: Discovering the Adaptive Unconscious.
cognitive control and consciousness In a forced-choice reaction time task, responses are slower after an error. This is one example of dynamic adjustment of behaviour, i.e. control of cognitive processing, which according to Botvinick et al. (2001) refers to a set of functions serving to configure the cognitive system for the performance of tasks. We focus on the question whether cognitive control requires conscious awareness. We wish to emphasize that the question refers not to the awareness of all aspects of the world surrounding the performing organism, but to the conscious awareness of the control process itself, i.e. consciousness of maintaining the task requirements, supporting the processing of information relevant to the goals of the current task, and suppressing irrelevant information (van Veen and Carter 2006).
During the last quarter of the 20th century, the term control was contrasted with *automaticity (e.g. Schneider and Shiffrin 1977). Automatic processes were defined as being effortless, unconscious, and involuntary, and the terms unconscious and automatic were used by some interchangeably, leading to the conclusion that control should be viewed as constrained to conscious processing. However, it was shown that phenomena considered to be examples of automatic processing, such as the flanker effect and the *Stroop effect, showed a dynamic adjustment to external conditions, corresponding to the notion of control. In particular, some (e.g. Gratton et al. 1992) showed a reduction in the flanker effect after an incompatible trial, while others (e.g. Logan et al. 1984) showed sensitivity of the Stroop effect to the proportions of the various trial types. Consequently, Tzelgov (1997) proposed to distinguish between monitoring, the intentional setting of the goal of behaviour and the intentional evaluation of the outputs of a process, and control, the sensitivity of a system to changes in its inputs, which may reflect a feedback loop.
According to these definitions, monitoring can be considered a case of conscious control, that is, conscious awareness of the representations being controlled and of the very process of their evaluation. The *global workspace (GW) framework proposed by Dehaene et al. (1998) may be seen as one possible instantiation of the notion of monitoring in neuronal terms. Accordingly, unconscious processing reflects the activity of a set of interconnected modular systems. Conscious processing is possible due to a distributed neural system, which may be seen as a ‘workspace’ with long-distance connectivity that interconnects the various modules, i.e. the multiple, specialized brain areas. It allows the performance of operations that cannot be accomplished unconsciously and ‘are associated with planning a novel strategy, evaluating it, controlling its execution, and correcting possible errors’ (Dehaene et al. 1998:11). Within such a framework the anterior cingulate cortex (ACC) and the prefrontal cortex (PFC), two neural structures known to be involved in control, may be seen as parts of the GW, and consequently their activation is supposed to indicate conscious activity.
In contrast, there are models of control that do not assume involvement of consciousness. For example, Bodner and Masson (2001) argue that the operations applied to the prime in order to identify and interpret it result in new memory *representations, which can later be used without awareness. Jacoby et al. (2003) referred to such passive control as ‘automatic’. A computational description of passive control is provided by the conflict-monitoring model of the Stroop task (Botvinick et al. 2001), which includes a conflict detection unit (presumed to correspond to the ACC) that triggers the control-application unit (presumed to correspond to the PFC). To be more specific, consider the presentation of an incongruent Stroop stimulus, for example, the word ‘BLUE’ printed in red ink, with instructions to respond with the colour of the ink. This results in strong activation of the response associated with the printed word while, in parallel, the instructions cause activation of the response associated with the ink colour. The resulting conflict in the response unit is detected by the ACC, which in turn augments the activation of the colour unit in the PFC, leading to the relevant response. Notice that no conscious decisions are involved in this process.
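The feedback loop just described can be made concrete with a toy numerical sketch. The following fragment is an illustration only, not Botvinick et al.’s (2001) full connectionist model: the unit names, weights, and constants are all invented. Following their proposal, conflict is measured as the energy of the two incompatible response units (here, simply the product of their activations), and the detected conflict feeds back to strengthen the colour task-demand unit.

```python
# Toy sketch of the conflict-monitoring feedback loop on one
# incongruent Stroop trial (the word 'BLUE' in red ink). Illustrative
# only: unit names, weights, and constants are invented, not taken
# from Botvinick et al. (2001).

word_strength = 2.0     # reading is the stronger, overlearned pathway
colour_strength = 1.0   # colour naming is the weaker pathway
control = 1.0           # PFC task-demand unit for colour naming

for cycle in range(10):
    # Drive on each response unit; control amplifies only the
    # task-relevant (colour-naming) pathway.
    word_drive = word_strength
    colour_drive = colour_strength * control

    # Mutual inhibition between the two incompatible responses,
    # approximated here by simple normalization.
    total = word_drive + colour_drive
    word_resp, colour_resp = word_drive / total, colour_drive / total

    # ACC conflict detection: the 'energy' of the two incompatible
    # response units, i.e. the product of their activations, which is
    # maximal when the two responses are tied.
    conflict = word_resp * colour_resp

    # Feedback: detected conflict augments the PFC task-demand unit.
    control += 2.0 * conflict

    print(f'cycle {cycle}: word={word_resp:.2f} '
          f'colour={colour_resp:.2f} conflict={conflict:.2f}')
```

Running the loop shows conflict peaking while the two responses are closely matched and the colour response gradually winning as control accumulates; no step in the cycle corresponds to a conscious decision.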
Thus, the question of the relation between consciousness and cognitive control may be restated in terms of whether cognitive control requires conscious monitoring as implied by the GW and similar frameworks, or whether it can be performed without the involvement of consciousness, as hypothesized by the conflict-monitoring model. Mayr (2004) reviewed an experimental framework for analysing the consciousness-based vs consciousness-free approaches to cognitive control by focusing on behavioural and neural (e.g. ACC and ERN activity) indications of control. He proposed to contrast these indications under conditions of conscious awareness vs absence of awareness of the stimuli presumed to trigger control by generating conflict. For example, in the study of Dehaene et al. (2003), the contrast is between an unmasked (high awareness) condition in which the participants can clearly see the prime, and a *masked (low awareness) condition. After reviewing a few studies that applied such a design, Mayr had to conclude that the emerging picture is still inconclusive. We agree that the proposed approach is very promising, yet some refinements are needed. First, in most cases discussed by Mayr, the manipulation of awareness was achieved by masking a prime stimulus. The critical assumption, that under masking conditions the participants are totally unaware of the masked stimulus and yet process it up to its semantic level, is still controversial (Holender 1986). Second, concerning the causal order: consider the case in which both behavioural and neuronal (i.e. ACC activity) markers of conflict are obtained only when the person is fully aware of the conflict-triggering stimulus. At face value, it seems to indicate that the causal link is from awareness to markers of conflict; however, it could be equally true that the causal link is from the markers of conflict to awareness. Mayr (2004:146) hints at this point by suggesting the ‘possibility that rather than consciousness being a necessary condition for conflict related ACC activity, conflict related ACC activity might be the necessary condition for awareness of conflict’. Third, concerning the assumed notion of conscious control: what is supposed to be manipulated in the awareness-control design is the awareness of the conflict. Actually, however, what is manipulated is the awareness of the stimulus generating the conflict. Such awareness may be seen as a precondition for applying deliberate monitoring.
Thus, to advance toward answering the question of whether cognitive control requires consciousness, future research should distinguish between awareness of the stimulus that causes the conflict, awareness of the conflict per se, and awareness of the very process of control as implied by the notion of monitoring. Furthermore, such research should emphasize the distinction between consciousness as a condition for control processes and consciousness as a result of control processes.
JOSEPH TZELGOV AND GUY PINKU
Bodner, G. E. and Masson, M. E. J. (2001). ‘Prime validity affects masked repetition priming: evidence for an episodic resource account of priming’. Journal of Memory and Language, 45.
Botvinick, M., Braver, T., Barch, D., Carter, C., and Cohen, J. (2001). ‘Conflict monitoring and cognitive control’. Psychological Review, 108.
Dehaene, S., Kerszberg, M., and Changeux, J. P. (1998). ‘A neuronal model of a global workspace in effortful cognitive tasks’. Proceedings of the National Academy of Sciences of the USA, 95.
——, Artiges, E., Naccache, L. et al. (2003). ‘Conscious and subliminal conflicts in normal subjects and patients with schizophrenia: the role of the anterior cingulate’. Proceedings of the National Academy of Sciences of the USA, 100.
Gratton, G., Coles, M. G. H., and Donchin, E. (1992). ‘Optimizing the use of information: strategic control of activation and responses’. Journal of Experimental Psychology: General, 121.
Holender, D. (1986). ‘Semantic activation without conscious identification in dichotic listening, parafoveal vision, and visual masking: a survey and appraisal’. Behavioral and Brain Sciences, 9.
Jacoby, L. L., Lindsay, D. S., and Hessels, S. (2003). ‘Item-specific control of automatic processes: Stroop process dissociations’. Psychonomic Bulletin and Review, 10.
Logan, G., Zbrodoff, N., and Williamson, J. (1984). ‘Strategies in the color-word Stroop task’. Bulletin of the Psychonomic Society, 22.
Mayr, U. (2004). ‘Conflict, consciousness and control’. Trends in Cognitive Sciences, 8.
Schneider, W. and Shiffrin, R. M. (1977). ‘Controlled and automatic human information processing: I. Detection, search and attention’. Psychological Review, 84.
Tzelgov, J. (1997). ‘Specifying the relations between automaticity and consciousness: a theoretical note’. Consciousness and Cognition, 6.
van Veen, V. and Carter, C. S. (2006). ‘Conflict and cognitive control in the brain’. Current Directions in Psychological Science, 15.
cognitive feelings Cognitive feelings are a loose class of experiences with some commonality in their phenomenology, representational content, and function in the mental economy. Examples include *feelings of knowing, of familiarity, of preference, tip-of-the-tongue states, and the kinds of hunches that guide behavioural choice in situations ranging from implicit *learning paradigms to consumer choice. The concept overlaps with those of intuition, *metacognition, and *fringe consciousness, and probably has some degree of continuity with *emotional feeling.
Cognitive feelings play a monitoring and control function in the mental economy. They are conscious experiences in the sense that they can be reported, either verbally or in the form of predictive introspective ratings, and can be used to guide behaviour in a flexible and contextually appropriate manner (Price 2002). Specifically, the experiences provide us with metacognitive information about aspects of our mental processes that are otherwise rather inaccessible to consciousness. The inaccessibility of the non-conscious antecedents of the experience may be especially salient because, in other situations, similar experiences would be accompanied by conscious access to their antecedents. The feelings may also feel vague in the sense that their detailed nature may be difficult to describe and communicate verbally, whether to others or oneself.
Before discussing these characteristics of cognitive feelings further, we briefly summarize some classic examples of cognitive feelings that cut across the research domains of memory, perception, attention, language production, problem-solving, and decision-making.
1. Varieties of cognitive feeling
2. The defining characteristics of cognitive feelings
3. Fringe consciousness and the function of cognitive feelings
4. Cognitive vs emotional feelings
5. Neurocognitive basis
6. Shades of consciousness
The feeling of knowing (FOK) state is usually studied by asking people to answer a set of questions about existing or newly learned knowledge. For questions that people cannot currently answer, introspective ratings of FOK nevertheless predict the likelihood that the correct answer will be picked out in a subsequent multiple-choice recognition test. In problem-solving tasks, FOK ratings of perceived closeness to the solution have been referred to as warmth ratings. For non-insight problems, these ratings again have some predictive accuracy, although this is not found for problems requiring insight.
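The predictive accuracy of such ratings is commonly quantified as a rank-order association between FOK ratings and subsequent recognition, often the Goodman-Kruskal gamma correlation. Below is a minimal sketch of that computation; the ratings and recognition outcomes are invented purely for illustration.

```python
from itertools import combinations

# Invented illustration: for each currently unanswerable question, an
# FOK rating (1-4) and whether the answer was later picked out in a
# multiple-choice recognition test (1 = recognized, 0 = not).
fok_ratings = [4, 1, 3, 2, 4, 1, 2, 3, 4, 1]
recognized  = [1, 0, 1, 0, 1, 0, 1, 0, 1, 1]

def gamma(xs, ys):
    """Goodman-Kruskal gamma: (concordant - discordant) / (concordant
    + discordant) over all item pairs, ignoring ties."""
    concordant = discordant = 0
    for (x1, y1), (x2, y2) in combinations(zip(xs, ys), 2):
        product = (x1 - x2) * (y1 - y2)
        if product > 0:
            concordant += 1
        elif product < 0:
            discordant += 1
    if concordant + discordant == 0:
        return 0.0
    return (concordant - discordant) / (concordant + discordant)

# A positive gamma means higher FOK ratings tend to go with later
# recognition success, i.e. the feeling has predictive accuracy.
print(f'FOK accuracy (gamma) = {gamma(fok_ratings, recognized):.2f}')
```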
The more naturally occurring tip-of-the-tongue (TOT) phenomenon can be considered as a variety of the FOK state where people feel that memory retrieval of a word is imminent, even though they are currently unable to retrieve it. The feeling of imminence, in the absence of full access to the word, is salient, verbally reportable, and predictive of the actual likelihood of future recall. The feeling also seems stronger and more emotional than illusory TOT states which are induced experimentally. This predictive quality shows that the TOT feeling veridically reflects aspects of the cognitive processes involved in the memory search.
Feelings of familiarity (FOF) refer to a sense of knowing one has encountered something before, despite being unable to explicitly remember the learning episode (Gardiner and Richardson-Klavehn 2000). Whether people rate a previously exposed item as remembered (R) vs just known (K, in this so-called RK paradigm) is known to be influenced by different variables, suggesting that remembering and knowing reflect distinct underlying processes. One explanation for FOF is that previously exposed stimuli seem subjectively more perceptually fluent since they are processed faster, and are therefore assumed to be familiar. The use of fluency as a cue to familiarity seems to be a learned heuristic based on environmental contingencies. Enhancing perceptual fluency, for example by increasing the orthographic regularity of non-word letter strings, makes it more likely that previously unseen stimuli are falsely rated as recognized. Manipulating perceptual fluency can bias a variety of intuitive judgements, including the felt truth of written statements.
People tend to prefer things they have encountered before. This mere exposure effect is at the heart of commercial advertising. Feelings of preference (FOP) for previously seen stimuli (e.g. random shapes) in laboratory experiments occur even when people cannot discriminate whether they have been shown the items before, and are even claimed for situations in which the exposure is conducted subliminally (Bornstein 1992). As with other examples of cognitive feelings, the FOP may therefore occur without awareness of its information processing antecedents (i.e. previous exposure). Intuitive decisions based on rapid FOP, rather than on deliberative analysis, are sometimes advantageous. For example, long-term satisfaction with wall posters increases when they are chosen quickly and intuitively in a non-deliberative manner (Wilson et al. 1993).
Cognitive feelings are probably important in many domains beyond the classic paradigms described above. For example, during implicit learning, complex regularities in our environment are encoded without full awareness of what has been learned, or sometimes even without awareness that learning has occurred at all. It has been proposed that such learning can nevertheless result in the development of intuitive feelings of rightness which help guide behaviour. For example, in the context of laboratory studies where artificial rule-governed sequences are learned without verbalizable knowledge of the nature of the sequences (see ARTIFICIAL GRAMMAR LEARNING), intuitive feelings of anticipation may help people to predict the next move in the sequence, and feelings of familiarity may allow rule-governed sequences to be distinguished from random novel sequences (Norman et al. 2006). In everyday life, cognitive feelings derived from implicit learning may underlie social intuitions such as gut feelings about people’s intentions. When information becomes complex, consumer choice may also benefit more than ever from intuitive rather than deliberative judgements. Other varied examples of experiences that could be labelled as cognitive feelings include the intuitive hunches that drive the cognitive heuristics used in everyday decision-making, including stock market speculation.
These examples of cognitive feelings help to clarify the type of informational content carried by the feelings—which is what makes them cognitive. They also illustrate the nature of the interplay between conscious and unconscious processes involved in these experiences—which is what encourages us to label them as feelings.
Take the informational content first. TOT states, and feelings of knowing, familiarity or preference, all convey something about our cognitive processes, and are therefore examples of what has been called online metacognition, which refers to the monitoring and control of ongoing cognition. For example, we know that we are about to retrieve a word, or that we could recognize a currently unrecallable item, or that we have probably encountered an item before. However, in these examples we are only aware of the information conveyed by the feeling. We do not have conscious access to the antecedents of the feelings, such as the encoding episode that gives rise to the feeling of familiarity. Koriat (2000) uses the term experience-based metacognitive judgements to refer to this variety of online metacognition whose information-processing antecedents are not (or not yet) consciously accessible to the person, in contrast to information-based metacognitive judgements which are based on explicit inferential processes. This type of ‘sheer metacognitive feeling’ (Koriat 2000:163) is argued to be a ‘direct unmediated experience’ of the type that people sometimes refer to as ‘an intuitive feeling, a hunch’ (Koriat 2000:152).
It is this absence of cognitive transparency that makes it appropriate to apply the folk psychological label of feeling to these experiences. Dictionaries usually give many definitions of feelings, from the experience of physical sensation or emotion, to a sense of impression and intuitive understanding. Common to these various definitions is a phenomenology characterized by the relative absence of reasoned, analytical thought processes. For sensory feelings, the feelings may appear to stem directly from external perceptual receptors (e.g. a feeling of touch). For emotional feelings, they may seem to derive heavily from internal bodily sensations. But even if the dominant component of a conscious experience is some aspect of our thoughts rather than a perceptual or emotional sensation, the experience may still have the quality of a feeling if the antecedents of the mental state seem largely inaccessible. Hence we continue to talk metaphorically about a gut feeling that we prefer one picture over another without being able to express why. Feelings can therefore be cognitive, even if the term cognitive feeling seems initially self-contradictory, because feelings are often taken to be the antithesis of the rational and analytical cognitive end of the mental spectrum.
An additional criterion is useful in constraining the class of experiences that are regarded as cognitive feelings (Price 2002): the gap between the experience and its non-conscious antecedents is typically somewhat anomalous or unexpected. When we recognize a class of object, we do not find it curious that we are unaware of the antecedent information-processing stages of object recognition. Here we take it for granted that consciousness is the tip of the information-processing iceberg. However, when we intuitively prefer one painting over another, or experience the familiarity of a face despite no episodic recall of having met its owner before, our judgements have the character of feelings or intuitions to the extent that we notice the absence of conscious supporting evidence that could in principle be present.
This outline of cognitive feelings has much overlap with aspects of the Jamesian concept of fringe consciousness, as revived and elaborated by Mangan (2001). Fringe consciousness refers to all components of the stream of consciousness other than the nucleus of focally attended sensory information. The concept is particularly concerned with feelings of relation which provide a summary signal of the degree of fit or integration between this nucleus and its contextual background of largely non-conscious information processing. Although the range of experiences subsumed under the concept of fringe consciousness is considerably broader than just cognitive feelings, Mangan considers that the types of empirically studied cognitive feelings listed above are important examples of fringe consciousness, and suggests they are manifestations of a core relational feeling of rightness.
Mangan’s proposed function for fringe consciousness in the cognitive economy is also similar to Koriat’s (2000) suggested function for experience-based metacognitive feelings. Within both theoretical frameworks, the cognitive feeling has the monitoring role of summarizing aspects of currently unavailable non-conscious processing that are relevant to ongoing, conscious, mental tasks. By acting as an interface between non-conscious and conscious processing, this summary feeling then has a control role in directing behaviour—e.g. choosing the object which feels right or familiar or preferred—and in directing ongoing cognition—e.g. attempting to retrieve the non-conscious context information into consciousness, as during the ‘tip of the tongue’ state. The functional advantage of having a conscious feeling to do these jobs, rather than relying on non-conscious automatic processes, is that conscious processes are qualitatively more flexible. The owner of the feeling therefore benefits from a much higher level of behavioural choice.
According to Mangan, attempting to attend to a fringe feeling will often instantly retrieve its previously non-conscious antecedents into consciousness and the feeling will therefore have a very transient and fleeting nature. Norman (2002) argues that in situations where the background context information is relatively inaccessible, fringe consciousness has a ‘frozen’ sustained quality, allowing it to be attended and introspectively reported. Cognitive feelings, which are amenable to conscious introspection, correspond closely to this subclass of fringe consciousness.
More speculatively, the overlap between cognitive feelings and fringe consciousness may extend to other examples such as the general sense of understanding the text we are reading, which is argued by Mangan (2001) to be a variety of fringe feeling which expresses a global metacognitive impression of coherence and meaning.
There are several similarities between cognitive feelings and the affective feelings present during emotional experience. (1) Both may share phenomenological qualities such as descriptive vagueness or fleetingness. (2) As with cognitive feelings, emotions may be consciously experienced even if their antecedent causes are not consciously accessible, and act as a summary alarm system to warn us of the presence of the currently unavailable non-conscious information (Damasio 1994). (3) Cognitive feelings are by no means always affectively neutral, even if they are usually associated with less salient somatic responses than strong emotions. The positive or negative evaluative valence of feelings such as feelings of rightness, preference, familiarity, and so on has been commented on by many authors writing on fringe consciousness. (4) Just as theories of emotion such as Damasio’s (1994) somatic marker hypothesis stress the crucial role of autonomic bodily signals in shaping the gut feelings of emotional experience, so it has been argued that cognitive feelings are often grounded in bodily sensations. For example, sensorimotor phenomenology may be integral to experiences such as TOT or even to abstract relational fringe feelings such as ‘and-ness’ (Epstein 2000), and feelings of rightness or wrongness may be rooted in postural maintenance systems, as suggested by everyday language such as making a ‘solid’ vs a ‘shaky’ argument, and having ‘no leg to stand on’ (Woody and Szechtman 2002). Somatic markers have been implicated in implicit learning paradigms by showing that skin-conductance measures of autonomic activity correlate with the accuracy of intuitively made judgements about implicitly learned perceptual patterns (Bierman et al. 2005).
The distinction between cognitive and emotional feelings therefore seems to be one of gradation, with the metacognitive vs emotional contents of a feeling both varying in an at least semi-independent fashion.
The extent to which various cognitive feelings share common neurocognitive processes remains an open question, especially as there is already disagreement over the processes underlying specific feelings such as the feeling of knowing. Attempts have been made to map fringe feelings to aspects of global cognitive architectures such as the interactive cognitive subsystems (ICS) model, and to formally model metacognitive feelings as higher-order representations of the state of neural networks. It has also been speculated that these feelings may involve the associative memory networks of the mediotemporal hippocampal system, which originally evolved to support external spatial navigation, but then came to support internal cognitive navigation of a less explicitly topographical nature by providing a summary sketch of critical relationships between thoughts, and by pointing to the potential availability of further information (Epstein 2000). This idea that the feelings are embodied in spatial representation systems has some resonance with evidence that implicit learning can influence moment-to-moment visuospatial orienting by guiding attention to the spatial locations where useful information is implicitly anticipated to lie.
Abnormalities of cognitive feelings among clinically abnormal or brain-damaged patients may also provide a useful future source of evidence for underlying processes. Some groups of patients illustrate the behavioural problems associated with the absence of cognitive feelings. People with Capgras syndrome seem not to experience the usual feelings of familiarity when meeting significant people in their life, and have the delusional belief that these people have been replaced by impostors. In a gambling task which involves learning simple predictive patterns, patients with damage to prefrontal cortex fail to experience the usual hunch phase in their learning, which in normals is characterized by behavioural signs of having implicitly learned the predictive pattern at an intuitive level while still unable to verbalize the predictive rule (Bechara et al. 1997). Later, the subset of patients who manage to eventually verbalize the rule are still unable to use it to guide their behaviour.
Conversely, other groups of patients illustrate the residual usefulness of isolated feelings. One patient with severe encephalitis, resulting in profound *amnesia and agnosia, experienced an intuitive feeling of preference for sweet over saline drinks in the absence of the usual antecedent taste experiences that would justify the preference (Adolphs et al. 2005). Some of the shades of consciousness reported by so-called *blindsight patients may also provide examples of intuitive cognitive feelings that guide behaviour in the absence of usual visual experiences.
Cognitive feelings have a conscious phenomenology, even if sometimes vague, but are grounded in non-conscious processing. They can therefore seem to be a kind of halfway house between conscious and non-conscious mental representations, making people unsure which of these two categories to assign them to. This notion of a shady interface between the conscious and non-conscious is captured by the connotations of the folk psychological terms hunch or intuition. Confusion is further exacerbated in situations where there is dissociation between different operational criteria for whether mental representations are conscious, such as dissociation between subjective verbal report and more objective behavioural tests (Price 2002, Norman et al. 2006). However, by characterizing and measuring intermediate states of awareness, the empirical study of cognitive feelings can help resolve polarization in debates such as whether learning in so-called implicit learning paradigms is purely implicit or explicit. Some authors characterize such intermediate states in terms of the presence of some types of knowledge, such as metaknowledge (or judgement knowledge), in the absence of other qualitatively different types of knowledge, such as detailed structural knowledge about learned information (Dienes and Scott 2005). Others see these intermediate states as occupying points on a more continuous gradation of consciousness. In either case, studying the variables that modulate these intermediate states may provide future insights into the dynamic construction of consciousness.
MARK C. PRICE AND ELISABETH NORMAN
Adolphs, R., Tranel, D., Koenigs, M., and Damasio, A. R. (2005). ‘Preferring one taste over another without recognising either’. Nature Neuroscience, 8.
Bechara, A., Damasio, H., Tranel, D., and Damasio, A. (1997). ‘Deciding advantageously before knowing the advantageous strategy’. Science, 275.
Bierman, D., Destrebecqz, A., and Cleeremans, A. (2005). ‘Intuitive decision making in complex situations: somatic markers in an implicit artificial grammar learning task’. Cognitive, Affective, and Behavioral Neuroscience, 5.
Bornstein, R. F. (1992). ‘Subliminal mere exposure effects’. In Bornstein, R. F. and Pittman, T. S. (eds) Perception Without Awareness: Cognitive, Clinical, and Social Perspectives.
Damasio, A. R. (1994). Descartes’ Error: Emotion, Reason, and the Human Brain.
Dienes, Z. and Scott, R. (2005). ‘Measuring unconscious knowledge: distinguishing structural knowledge and judgment knowledge’. Psychological Research, 69.
Epstein, R. (2000). ‘The neural-cognitive basis of the Jamesian stream of thought’. Consciousness and Cognition, 9.
Gardiner, J. M. and Richardson-Klavehn, A. (2000). ‘Remembering and knowing’. In Tulving, E. and Craik, F. I. M. (eds) The Oxford Handbook of Memory.
Koriat, A. (2000). ‘The feeling of knowing: some metatheoretical implications for consciousness and control’. Consciousness and Cognition, 9.
Mangan, B. (2001). ‘Sensation’s ghost: the non-sensory “fringe” of consciousness’. Psyche, 7, http://psyche.cs.monash.edu.au/v7/psyche-7-18-mangan.html
Norman, E. (2002). ‘Subcategories of “fringe consciousness” and their related nonconscious contexts’. Psyche, 8, http://psyche.cs.monash.edu.au/v8/psyche-8-15-norman.html
——, Price, M. C., and Duff, S. C. (2006). ‘Fringe consciousness in sequence learning: the influence of individual differences’. Consciousness and Cognition, 15.
Price, M. C. (2002). ‘Measuring the fringes of consciousness’. Psyche, 8, http://psyche.cs.monash.edu.au/v8/psyche-8-16-price.html
Wilson, T. D., Lisle, D. J., Schooler, J. W., Hodges, S. D., Klaaren, K. J., and LaFleur, S. J. (1993). ‘Introspecting about reasons can reduce post-choice satisfaction’. Personality and Social Psychology Bulletin, 19.
Woody, E. and Szechtman, H. (2002). ‘The sensation of making sense: motivational properties of the “fringe”’. Psyche, 8(20), http://psyche.cs.monash.edu.au/v8/psyche-8-20-woody.html
colour, philosophical perspectives According to experience, tangerines are orange. According to science, tangerines are collections of colourless particles. There is some reason to think that the picture of the world provided by experience and the picture of the world provided by science are in conflict. This leads to perhaps the most central philosophical issue concerning colour, the issue of realism vs *eliminativism. Realists hold that the pictures may be reconciled. Tangerines really are orange. By contrast, eliminativists hold that the two pictures cannot be reconciled. Tangerines appear orange, but are not really orange: the appearances are misleading. This view, then, eliminates colours from the physical world. This was the view taken by Galileo, who held that colours are only ‘in the mind’. Notice that no analogous issue arises for other properties that we experience, e.g. shapes. There is no evident conflict between the picture of shape provided by experience and that provided by science.
Fig. C1. Philosophical views on colour.
There is a second philosophical issue concerning colour. Many philosophers wish to reductively explain all the properties of the common sense world in physical terms. This is due to the popularity of physicalism, the view that everything is explainable in physical terms (see PHYSICALISM and REDUCTIONISM). Typically, the issue of reduction is discussed in connection with the mind, but the same issue arises in connection with colour. There are two views, reductionism and primitivism. Reductionists hold that colours can be reduced to physical properties. As we shall see, reductionism comes in two different versions. Response-dependent reductionism explains colours in terms of how objects affect perceivers. Response-independent reductionism explains colours in terms of physical properties of objects that are independent of perceivers, such as properties concerning how objects reflect light. By contrast to reductionists of either stripe, primitivists hold that colours cannot be reduced to physical properties. Primitivism is so called because it maintains that colours are basic or primitive properties that cannot be explained in other terms, much like fundamental physical properties such as charge and mass. So if one combines realism and primitivism, one takes the view that objects have colours in addition to their physical properties. This view, then, rejects reductive physicalism. It bears an obvious analogy to dualism because it recognizes a dualism of physical and chromatic properties at the surfaces of physical objects (see DUALISM). Notice that no analogous issue arises for other properties that we experience, for instance shapes. Shapes are obviously physical properties. There is no *explanatory gap here. By contrast, many believe that, just as there is an explanatory gap between states of consciousness and physical properties, there is also an explanatory gap between colours and physical properties. So it is not obvious that colours are physical properties.
These two issues create a decision tree (see Fig. C1). If one accepts realism, one faces the choice between reductionism and primitivism. If one accepts reductionism, one faces the additional choice between response-dependent reductionism and response-independent reductionism. Alternatively, in view of the difficulties with realist theories, one might accept eliminativism, banishing colours from the external world. Let us now examine the four views at the end points in the decision tree, moving from left to right. We begin with views that combine realism and reductionism, which are popular among philosophers.
1. Response-dependent reductionism
2. Response-independent reductionism
3. Realist primitivism
4. Eliminativism
Response-dependent reductionism (McGinn 1983) maintains that the property of being orange is a secondary quality of external objects: it is defined in terms of the responses objects produce in human beings. In particular, the property of being orange is the property of being disposed to produce orange experiences in normal individuals under normal conditions. By an ‘orange experience’, I mean the kind of experience one has when one looks at orange objects. To put it crudely, on this view, if a tangerine is in the forest and no species exists to see it, the tangerine is not orange. On response-dependent reductionism, then, colour is a perceiver-dependent property, like being funny or being poisonous. Yet it is a form of realism, because it holds that tangerines really are orange: they are orange, because they are disposed to produce orange experiences in normal individuals under normal circumstances. It is also a form of reductionism, because the property of being disposed to produce orange experiences in normal individuals under normal conditions is a physical property of objects if orange experiences in turn may be identified with physical (e.g. neural) states of persons.
One argument for response-dependent reductionism derives from the possibility of biological variation in colour vision (McGinn 1983). Consider a hypothetical case (Pautz 2006). Maxwell and Mabel belong to different species. Owing to naturally evolved differences between their colour systems, a tangerine normally appears orange to the members of Maxwell’s species but pure red to the members of Mabel’s species. Who gets it right? One option is to say that both get it right. A second option is to say that one gets it right and the other gets it wrong: for instance, the tangerine is orange but not pure red. A third option is eliminativism: neither gets it right. The second option appears arbitrary, and the third flies in the face of experience and common sense. Therefore one might think that the first option— chromatic liberalism—is the best. Response-dependent reductionism secures this result. On this view, when Maxwell says ‘the tangerine is orange’, he attributes to the tangerine the disposition to normally produce orange experiences in members of his species. When Mabel says ‘the tangerine is pure red’, she attributes to the tangerine the disposition to produce pure red experiences in members of her species.
But there are problems with response-dependent reductionism. First, it is not clear that it has a sound motivation. Each of the above three options has a cost. True, the claim that only one individual gets it right appears arbitrary, and the claim that neither gets it right is contrary to common sense. But many would say that intuition goes against the verdict of response-dependent reductionism that both get it right. For Maxwell attributes the property of being orange to the tangerine and Mabel attributes the property of being pure red to the tangerine, and many have the intuition that a single object cannot be orange and pure red all over, contrary to response-dependent reductionism. So it is not obvious that this option is the best one. Indeed, in view of the problems with the various forms of realism, it may be that eliminativism is the best option. Second, many philosophers hold that response-dependent reductionism is phenomenologically implausible. Colours, they claim, do not look like dispositions to produce effects in us (Boghossian and Velleman 1989). Instead, they look like intrinsic, non-relational properties of objects on a par with shapes. Third, intuitively, to have an orange experience is to have an experience of the colour orange. The colour orange enters essentially into the specification of orange experiences. If so, then the response-dependent reductionist identifies the colour orange with the disposition to normally produce experiences of that very property, orange. This appears incoherent or circular (Boghossian and Velleman 1989).
By contrast to response-dependent reductionists, response-independent reductionists identify colours with response-independent properties of objects, that is, properties of objects that are completely independent of the responses objects produce in perceivers (Dretske 1995, Lycan 1996, Armstrong 1999, Tye 2000, Byrne and Hilbert 2003). On the most popular version of response-independent reductionism, colours are properties concerning the reflection of light, or reflectance properties for short. On this view, just as water is H2O, the colour orange is a certain reflectance property. Like response-dependent reductionism, this view is both realist and reductionist. Colours are real properties of physical objects, and they are physical properties of physical objects.
On response-independent reductionism, by contrast to response-dependent reductionism, if a tangerine is in a forest and no species exists to see it, the tangerine is still orange, since the tangerine has the reflectance property that is identical with orange. Likewise, on response-independent reductionism, if we evolved a new colour vision system, so that tangerines came to look pure red rather than orange to us, then the tangerines themselves would remain orange, because they would retain the reflectance property that is identical with orange. By contrast, on a simple form of response-dependent reductionism, the correct description of this scenario is that tangerines change from orange to pure red (somewhat as a substance could go from being poisonous to being non-poisonous as a result of a change in our neurophysiology). In short, response-independent reductionism differs from response-dependent reductionism because it holds colour is an objective property like shape, rather than a perceiver-dependent property like being poisonous.
Typically, response-independent reductionists about colours accept a *representational theory of our consciousness of colours. On this view, to be conscious of orange is simply to have an experience that represents or registers that something has the colour orange (which, on this view, is identical with a reflectance property). And, typically, they accept a tracking theory of sensory representation according to which the brain represents reflectance properties (on this view, colours) in the same way that a thermometer represents temperatures (see INTENTIONALITY). A pattern of neural firing represents or registers a certain reflectance property just in case it is caused by that reflectance property under optimal conditions (Tye 2000), or just in case it has the biological function of indicating that reflectance property (Dretske 1995). This philosophical view of colour fits well with the view in vision science that colour perception is a computational process whereby the reflectances and other properties of objects are recovered from the information arriving at the retina (Marr 1982).
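The computational picture just mentioned can be illustrated with a deliberately simple sketch. In each waveband, the signal reaching the eye is (roughly) the product of surface reflectance and illumination, so recovering reflectance requires discounting the illuminant. The fragment below uses the classic grey-world assumption (that the scene’s average reflectance is neutral) to estimate the illuminant; all numbers are invented, and real colour-constancy computations are far more sophisticated than this toy.

```python
# Toy illustration of reflectance recovery. Per waveband, the signal
# at the eye is reflectance x illumination, so the visual system must
# discount the illuminant. All numbers are invented; real
# colour-constancy algorithms are far more elaborate.

# Surface reflectances in three broad wavebands (long, medium, short).
surfaces = {
    'tangerine': (0.80, 0.45, 0.05),   # roughly 'orange'
    'leaf':      (0.20, 0.60, 0.10),
    'paper':     (0.90, 0.90, 0.90),
}
illuminant = (1.5, 1.0, 0.6)           # a warm, reddish light source

# The signal arriving at the retina from each surface.
signals = {name: tuple(r * e for r, e in zip(refl, illuminant))
           for name, refl in surfaces.items()}

# Grey-world assumption: the scene's average reflectance is neutral,
# so the average signal in each band estimates the illuminant
# (up to a scale factor).
est_illum = tuple(sum(sig[band] for sig in signals.values()) / len(signals)
                  for band in range(3))

# Recovered reflectance: divide the estimated illuminant back out.
# Within each band the surfaces' relative reflectances are preserved,
# which is the sense in which the computation 'recovers' reflectance.
for name, sig in signals.items():
    recovered = tuple(round(s / e, 2) for s, e in zip(sig, est_illum))
    print(name, recovered)
```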
What is the argument for response-independent reductionism? Like response-dependent reductionism, it is both realist and reductionist. So it agrees with experience and common sense, which have it that the world is coloured. And it agrees with physicalism, which seeks to explain everything in physical terms. At the same time, it avoids some of the difficulties with response-dependent reductionism. For instance, as noted above, many would say, against response-dependent reductionism, that colours do not look like dispositions to produce effects in us. Instead, they look like perceiver-independent properties of objects on a par with shapes. This is exactly what response-independent reductionism says colours are.
But there are also arguments against response-independent reductionism. First, what will the response-independent reductionist say about cases of biological variation, such as the case of Maxwell and Mabel? On a representational theory of colour experience, Maxwell represents the tangerine as orange and Mabel represents it as pure red. On response-independent reductionism, the represented properties orange and pure red are identical with different reflectance properties. Furthermore, response-independent reductionists hold that no surface can have both of these reflectance properties (Byrne and Tye 2006). Who then gets it right? One option for the response-independent reductionist is to say that the colour orange that Maxwell represents is identical with a reflectance property R that the tangerine does have, while the colour pure red that Mabel represents is identical with a different reflectance property X that the tangerine does not have (Byrne and Tye 2006). Call this asymmetrical misrepresentation. But there are two serious problems with this inegalitarian account of biological variation.
First, we may suppose that Maxwell and Mabel are alike at the receptoral level, so that their visual systems track exactly the same reflectance property R of the tangerine. They have different colour experiences because they naturally evolved different postreceptoral processing. So, when they view the tangerine and are put into different brain states, both of their visual systems are operating exactly as they were designed by evolution to operate. So, both brain states track R under optimal conditions, and both have the function of indicating R. So, given the stipulated basic physical facts of the situation, a tracking theory of representation predicts that both Maxwell and Mabel accurately represent the tangerine as having the reflectance property R. Thus, asymmetrical misrepresentation is incompatible with a tracking theory. In fact, no known theory of sensory representation supports this inegalitarian verdict. To underscore the problem, consider the following. According to asymmetrical misrepresentation, it is Maxwell who accurately represents the tangerine as having R, and it is Mabel who inaccurately represents it as having X. But if response-independent reductionism is correct, then another possibility is that it is Maxwell who inaccurately represents the tangerine as having X, and it is Mabel who accurately represents it as having R. (This option holds, contrary to the first option, that the orange colour that Maxwell represents is identical with X, and the pure red colour that Mabel represents is identical with R.) What could possibly make it the case that one of these possibilities obtains to the exclusion of the other? Apparently, the response-independent reductionist who favours asymmetrical misrepresentation must say that which of these possibilities actually obtains is a kind of primitive fact with no basis in the physical facts of the situation. In other words, he must give up on reduction. Yet achieving a reductive account was one of the motivations behind response-independent reductionism.
Second, the present account of biological variation may have the consequence that we cannot be said to know the colours of things. Maybe our own wiring, like Mabel’s, makes us normally represent objects as having reflectance properties (colours) that they do not possess. Then our colour beliefs are false. If, on the other hand, our wiring, like Maxwell’s, makes us normally represent objects as having reflectance properties that they do have, then this would seem to be a matter of luck. Either way, we cannot be said to know the colours of objects, which is contrary to common sense. So, given the account of biological variation under discussion, response-independent reductionism does not agree with the common-sense view of colour. Yet agreeing with common sense was one of its motivations.
A second argument against response-independent reductionism concerns colour structure (Hardin 1988). Blue resembles purple more than green does. Purple is a binary colour: every shade of purple is somewhat reddish and bluish. By contrast, green is a unitary colour. It does not contain a hint of any other colours. According to the opponent process theory of colour vision (see Hardin 1988 for an accessible account), we experience unitary and binary colours because of features of the colour vision system, although the neurobiological details remain poorly understood (see COLOUR, SCIENTIFIC PERSPECTIVES). But the belief that some colours are unitary while others are binary is justified on the basis of colour experience and experiments on colour naming. It does not stand or fall with any neurobiological theory of colour vision. Here now is the problem for response-independent reductionism. There is no obvious sense in which the blue-reflectance (the reflectance property the response-independent reductionist identifies with blue) resembles the purple-reflectance more than the green-reflectance. Nor is there any sense in which the purple-reflectance is binary, while the green-reflectance is unitary (Byrne and Hilbert 2003). So, the colours our colour experiences represent have structural features which are not possessed by the reflectance properties which normally cause those colour experiences. But then colours must be distinct from those reflectance properties.
A third argument against response-independent reductionism is based on the intuition that there is an explanatory gap: however closely the colour orange and the reflectance property R may be correlated, they are intuitively wholly distinct from each other, just as pain is intuitively wholly distinct from the correlated brain state.
Some philosophers are attracted to the realist view that external objects are coloured, because it agrees with experience and common sense. But, for the reasons we have discussed, they are dissatisfied with both response-dependent reductionism and response-independent reductionism. These philosophers accept realism but reject reductionism, and accept primitivism instead (Campbell 1993, McGinn 1996). This combination of views is called realist primitivism. It is realist because it holds that the tangerine has the property of being orange. It is primitivist because it holds that the property of being orange is an extra, primitive property of the tangerine that cannot be identified with its disposition to produce orange experiences or its reflectance property R. To highlight this feature of the view, we might call this property primitive orange. On this view, then, colours are fundamental properties of the world, like charge and spin. As noted in the introduction, this view bears an obvious analogy to dualism because it recognizes a dualism of physical and chromatic properties at the surfaces of physical objects.
Now, realist primitivists typically do not stop here. They typically say that the extra, primitive property of being orange ‘supervenes on’ or ‘emerges from’ some other properties of the tangerine (see EMERGENCE). On one version of this idea, the property of being orange emerges from the tangerine’s disposition to produce orange experiences in perceivers (McGinn 1996). So if a tangerine is in a forest and no species exists to see it, the tangerine does not have the emergent property of being orange, because it does not have such a disposition. This view is analogous to response-dependent reductionism. The difference is that it is a primitivist view, rather than a reductionist view: it holds that the property of being orange is an extra, emergent property of the tangerine, over and above its disposition to produce orange experiences in perceivers. Call it response-dependent primitivism. On another version, the property of being orange emerges from the tangerine’s reflectance property R. So if a tangerine is in a forest and no species exists to see it, the tangerine nevertheless has the emergent property of being orange, because it has the reflectance property R. This view is analogous to response-independent reductionism. Again, the difference is that it is a primitivist view, rather than a reductionist view: it holds that the property of being orange is an extra, emergent property of the tangerine, over and above its reflectance property R. Call it response-independent primitivism. So, although this is not represented in Fig. C4, primitivism as well as reductionism comes in response-dependent and response-independent versions.
What is the argument for realist primitivism of either variety? To begin with, it is a realist view, so it agrees with experience and common sense, which have it that external objects are coloured. At the same time, it avoids the problems with reductionism. For instance, it avoids Hardin’s (1988) problem about colour structure. Even though reflectance properties are not unitary or binary, the colour properties that emerge from reflectance properties might be unitary or binary. And realist primitivism avoids the explanatory gap problem, because it endorses the intuition that colours are wholly distinct from reflectance properties.
But there are also problems with realist primitivism. First, one motivation behind realist primitivism is to accommodate common sense, but it is unclear that either response-dependent primitivism or response-independent primitivism accommodates common sense in its entirety. In fact, unsurprisingly, they share some of the problems of their reductive cousins, response-dependent reductionism and response-independent reductionism. Response-dependent primitivism (McGinn 1996) entails that, in the case of Maxwell and Mabel, the tangerine instantiates both primitive orange and primitive pure red, since it normally produces experiences of orange in members of Maxwell’s species and it normally produces experiences of pure red in members of Mabel’s species. This goes against the intuition that nothing can be both orange and pure red all over. And response-independent primitivism (Campbell 1993) may have the consequence that we cannot be said to know the colours of things. If this view is correct, then objects had response-independent primitive colours prior to the evolution of colour vision. Now, what colour vision system evolved (and hence what primitive colours objects look to have) in any given species was independent of the actual primitive colours of objects. Instead, it was determined by the peculiar set of selection pressures that operated on its ancestors: their habits, dietary needs, predators, and environments. It follows that if a species happens to evolve a colour vision system that makes objects look to have the primitive colours that they actually do possess, this can only be an accident. This seems to imply that no species (including Homo sapiens) can be said to know the primitive colours of objects. What is the point of claiming that objects have primitive colours if we cannot be said to know what those primitive colours are?
A problem that attends both versions of realist primitivism is that they are complicated, as they are dualist views that hold that physical objects have ‘extra’ or ‘emergent’ colour properties over and above their physical properties. Therefore Occam’s razor counts against both versions of realist primitivism.
All forms of realism, then, face problems. These problems lend some support to eliminativism. On this view, a tangerine in the forest is not orange, even if someone is there to see it. On some versions, colours are ‘only in the mind’. We evolved to experience objects as coloured, not because they really are coloured, but because experiencing objects as coloured enhances adaptive fitness. Philosophers today generally favour realism. But in the past many favoured eliminativism, including Galileo, Newton, Descartes, and Locke. And many contemporary vision scientists favour eliminativism. Thus, Zeki writes ‘the nervous system … takes what information there is in the external environment, namely, the reflectance of different surfaces for different wavelengths of light, and transforms that information to construct colours, using its own algorithms to do so’ (Zeki 1983:746, emphasis original).
I have noted that realist theories come in reductionist and primitivist versions. The same is true of eliminativist theories, although this is not represented in Fig. C4. The eliminativist might hold that colours (or colour *qualia) reduce to neural properties of the brain, which we somehow mistakenly project onto external objects (Hardin 1988). Alternatively, he might hold that colours are primitive properties, which absolutely nothing has (Mackie 1976). On this view, colour properties only live in the contents of our experiences. Similarly, absolutely nothing has the property of being a winged horse: this property only lives in the contents of our thoughts.
The argument for eliminativism is that it provides the best overall account of the facts about colours. Consider, for instance, Maxwell and Mabel, who exhibit a case of biological variation. We have seen that some response-independent reductionists accept asymmetrical misrepresentation: the verdict that one gets it right and the other gets it wrong. The problem is that they cannot provide an explanation of why one gets it right and the other gets it wrong, rather than the other way around. On eliminativism, both get it wrong, so there is no need to decide. And, crucially, the eliminativist may provide an explanation of why both get it wrong, which appeals (among other things) to the claim that objects do not have the colours presented to us in colour experience. The eliminativist can also account for facts involving the unitary–binary character of the colours. If he holds that colours are neural properties of the brain, he can treat them as neural facts. If he holds that colours are primitive properties that nothing has, he can treat them as primitive facts about colours. Finally, because eliminativism banishes colours from the external world, it is much simpler than realist versions of primitivism.
An obvious argument against eliminativism is that it flies in the face of experience and common sense, which have it that the world is coloured. In reply, the eliminativist might point out that all of the realist theories we have examined depart considerably from common sense at some points. This illustrates a general feature of the philosophical debate concerning colour: here as elsewhere, there is no perfect theory. The best we can do is to try to draw up a balance sheet and see where the balance of considerations tilts.
ADAM PAUTZ
Armstrong, D. M. (1999). The Mind-Body Problem: an Opinionated Introduction.
Boghossian, P. and Velleman, D. (1989). ‘Colour as a secondary quality’. Mind, 98.
Byrne, A. and Hilbert, D. (2003). ‘Color realism and color science’. Behavioral and Brain Sciences, 26.
—— and Tye, M. (2006). ‘Qualia ain’t in the head’. Noûs, 40.
Campbell, J. (1993). ‘A simple view of colour’. In Haldane, J. and Wright, C. (eds) Reality, Representation, and Projection.
Dretske, F. (1995). Naturalizing the Mind.
Hardin, C. L. (1988). Color for Philosophers: Unweaving the Rainbow.
Lycan, W. (1996). Consciousness and Experience.
Mackie, J. L. (1976). Problems from Locke.
Marr, D. (1982). Vision: a Computational Investigation into the Human Representation and Processing of Visual Information.
McGinn, C. (1983). The Subjective View: Secondary Qualities and Indexical Thoughts.
—— (1996). ‘Another look at color’. Journal of Philosophy, 93.
Pautz, A. (2006). ‘Sensory awareness is not a wide physical relation’. Noûs, 40.
Tye, M. (2000). Consciousness, Color and Content.
Zeki, S. (1983). ‘Colour coding in the cerebral cortex: the reaction of cells in monkey visual cortex to wavelengths and colours’. Neuroscience, 9.
colour, scientific perspectives The human visual system is sensitive to a narrow band of electromagnetic radiation with wavelengths between 400 and 700 nm. These wavelengths are those which pass, largely unattenuated, through the Earth’s atmosphere and make up the visible spectrum, a term coined by Isaac Newton in 1671. Newton was also aware that colour sensation does not derive from a property of light; ‘For the rays to speak properly are not coloured. In them there is nothing else than a certain power and disposition to stir up a sensation of this or that colour’. Here Newton correctly asserts in Opticks (1704) that colour is a product of our nervous system. Yet three centuries later, it is apparent that the task of elucidating the neural processes responsible for our chromatic world is far from straightforward. A naive description of the neural basis of colour vision may be caricatured as follows: the human retina contains distinct types of light-sensitive receptors, the retinal cones, adapted to detect light of specific wavelengths; different wavelengths thereby elicit different colour experiences. Nearly everything in this caricature is wrong. Misconceptions about colour perception range from erroneous assumptions about the functions of colour vision as a whole to subtle misunderstandings about the responses of specialized cells in the visual system. Advances in our understanding of the neurobiology of colour vision, often in conjunction with psychophysical studies, have resolved many of these issues and led to a clearer picture of the determinants of colour experience. It is, perhaps, easiest to understand the neurobiology of colour vision in the context of its function. We will initially consider the functions of colour vision, then summarize the mechanisms through which these functions might be realized, and then assess how our current understanding of the anatomy and physiology of the visual system might instantiate those mechanisms. Along the way we note how different aspects of colour-processing are related to the conscious experience of colour *qualia.
1. What is colour vision for?
2. Disentangling wavelength from intensity; disentangling surface reflectance from illumination
3. Anatomy and physiology of wavelength processing
Colour perception is mediated by differential responses of the visual system to light of different wavelengths. It is, however, misleading to assume that colour perception has evolved for seeing the wavelength of light, or that different colour qualia simply reflect differences in the wavelength of light. Colour vision, in common with the rest of vision, appears adapted for seeing objects in the world, and objects are seen by virtue of the fact that they reflect light. The ability to respond differentially to the wavelength of light, over and above its intensity, provides a perceiver with additional information about the visual world. Different materials vary in the efficiency with which they reflect light of different wavelengths. Thus determining the relative reflectance of different wavelengths will specify the nature of the surface material of which an object is composed. Unfortunately, the light illuminating the world can vary considerably. The distribution of intensity at different wavelengths, a light’s spectrum, differs markedly as sunlight changes over the day, the spectrum of skylight differs from horizon to zenith, and the spectrum of diffusely reflected light in shadows is influenced by the objects that have reflected it. The spectrum of light reaching our eyes from an object is therefore not just determined by the reflectance properties of the object’s surface, but also by the spectrum of the light illuminating it. Our ability to see an object as having an invariant colour despite changes in its illuminant is called colour constancy and the visual system appears adapted to achieving it, albeit imperfectly (see e.g. Brainard 2003).
We might then propose that colour perception is really for establishing the reflectance properties of materials. Is there any evidence that this is the case? Are the reflectance properties of materials with particular evolutionary value to an organism especially well discriminated? Beginning with the seminal work of Allen (1879), a number of studies have addressed this question by comparing the spectral reflectance properties of materials with the sensitivities of the pigments within retinal photoreceptors, which are the first stage in permitting an animal to discriminate wavelength. Candidates have included colour variation signalling the ripeness or esculence of major items in animals’ diets (Regan et al. 2001) and sexual signals (Dixson 2000).
Is colour vision for anything other than perceiving the surface properties of objects? The ability to respond to lights of different wavelengths confers more potential abilities on an organism than estimating the reflectance properties of objects. If an object has a different reflectance function from its background, an organism can exploit this to segment the object from its background. Spectral differences may well provide better segmentation cues than light intensity (Sumner and Mollon 2000). Intensity variation resulting from direct lighting and shadows can break up the true contours of objects and provide less reliable information than spectral content. The use of wavelength information to segment an object from its background does not require any estimation of the nature of the light illuminating the visual scene and so does not depend on colour constancy. It is only necessary to signal that the reflectance properties of an object and its background are different. The neurological condition of cerebral *achromatopsia, where patients are rendered colour blind as a result of *brain damage, provides evidence that the perception of surface colour and the ability to use wavelength to segment objects from their backgrounds are independent of one another. Achromatopsic observers retain the latter but have no phenomenal experience of colour qualia.
Possessing multiple types of photoreceptors, sensitive to different wavelengths of light, may also be advantageous even if wavelength information is not extracted from their signals. Intensity information will be available over a wider range of wavelengths than would be possible using a single type of photoreceptor, and differently lit environments will be discriminable. The advantages multiple receptor types give for sight in differing lighting conditions provide a plausible mechanism for the earliest origins of colour vision as it does not depend upon the pre-existence of neural circuitry for disentangling wavelength from intensity signals (Pichaud et al. 1999).
Using multiple receptor types to extend the usable spectrum over which intensity variation can be seen requires little computation—the outputs of the receptors can simply be added together. Disentangling wavelength variation from intensity variation is a little more complex. Estimating the reflectance properties of surfaces is even more difficult.
If receptors are tuned to a specific wavelength to which they are most sensitive, but nevertheless respond less effectively over a range of wavelengths, then their responses will be inherently ambiguous. Changes in the response of a single receptor could equally well be caused by shifts in the wavelength or in the intensity of the light falling upon it. It is only by comparing changes in the responses of receptors which differ in their spectral tuning that these two possibilities can be distinguished. If the stimulating light is composed of a mixture of wavelengths then, although comparisons of receptor responses will distinguish between spectral and intensity changes, a number of different mixtures of lights will, nevertheless, produce identical responses in both types of receptors and hence be indistinguishable (such indistinguishable mixtures are known as metamers). In humans, the consequence of possessing fewer receptor types than normal is that colour mixtures that would otherwise be discriminable are seen as identical. The overwhelming majority of cases of colour blindness are caused by genetically based failures to produce one or more photoreceptor pigments, or by production of pigments which are abnormally close to one another in the wavelengths of their peak sensitivity. If the number of receptor types increases above the normal complement then otherwise indistinguishable mixtures become discriminable, provided relevant comparisons are computed (see COLOUR VISION, TETRACHROMATIC).
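To make this concrete, the following sketch contrasts a single receptor class with a pair of differently tuned classes. The Gaussian sensitivity curves, peak wavelengths, and widths are illustrative assumptions, not measured pigment data; the point is only that one class confounds wavelength with intensity, while the ratio of two classes does not.

```python
import numpy as np

wavelengths = np.arange(400.0, 701.0)  # visible range, nm

def sensitivity(peak, width=60.0):
    # Idealized Gaussian spectral sensitivity (a stand-in for a real pigment).
    return np.exp(-0.5 * ((wavelengths - peak) / width) ** 2)

L = sensitivity(560.0)  # long-wavelength-sensitive class
M = sensitivity(530.0)  # medium-wavelength-sensitive class

def monochromatic(peak_nm, intensity):
    light = np.zeros_like(wavelengths)
    light[wavelengths == peak_nm] = intensity
    return light

# A 560 nm light and a suitably brightened 600 nm light drive the L class
# identically, so a single receptor class cannot tell them apart...
light_a = monochromatic(560.0, 1.0)
light_b = monochromatic(600.0, 1.0 / L[wavelengths == 600.0][0])
print(L @ light_a, L @ light_b)       # equal L responses: ambiguous

# ...but the M/L response ratio varies with wavelength, not with intensity.
print((M @ light_a) / (L @ light_a))  # ~0.88
print((M @ light_b) / (L @ light_b))  # ~0.63: the ambiguity is resolved
```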
Simple comparisons of the responses of different receptor types therefore allow wavelength and intensity to be disentangled. There is, however, no means by which surface reflectance can be extracted from the information available to the visual system. The problem is that the wavelength composition of reflected light depends on the spectrum of the illuminant. Without knowing the latter, the visual system must rely on heuristics which provide good estimates in most circumstances.
One approach to determining the reflectance property of a surface, in the absence of information about the illuminant, is to select an anchor. Anchors are other surfaces in the scene about whose reflectance properties one can make educated guesses. Various heuristics may be applied to selecting anchors. An object whose surface colour is known can provide a good anchor (a memory colour). Alternatively, one might assume that very light surfaces are white and therefore reflect all wavelengths with equal efficiency. The reflectance of any other surface can then be estimated in comparison to one of these anchors simply by computing the chromatic contrast between it and the anchor. If the surface to be estimated and the anchor are not adjacent they can still be compared simply by taking the product of all of the contrasts at the surface boundaries lying on the path between the anchor and the target surface. This scheme forms the core of the retinex colour-constancy algorithm developed by Edwin Land (Land and McCann 1971). Its success or failure depends upon two key factors: the accuracy with which the reflectances of the anchors are estimated and, in the case of multiple illuminants, the ability of the observer to determine which parts of a scene are illuminated by lights with different spectra.
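The path-multiplication step at the heart of this scheme is simple enough to sketch. The toy function below (hypothetical values; only the anchoring-and-contrast core, not Land and McCann's full retinex algorithm) estimates a target surface's reflectance from an anchor and the response ratios at intervening boundaries; because each response is reflectance multiplied by a common unknown illuminant, the illuminant cancels in every ratio.

```python
def estimate_reflectance(anchor_reflectance, edge_ratios):
    """Estimate a target's reflectance from an anchor of assumed reflectance.

    edge_ratios[i] is the ratio of receptor responses (target side / anchor
    side) at the i-th surface boundary on the path from anchor to target.
    """
    estimate = anchor_reflectance
    for ratio in edge_ratios:
        estimate *= ratio  # a single unknown illuminant cancels in each ratio
    return estimate

# An anchor assumed to be white (reflectance 1.0), three boundaries en route:
print(estimate_reflectance(1.0, [0.8, 0.5, 1.5]))  # estimated reflectance 0.6
```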
There are many other cues which may be used heuristically to gauge the composition of an illuminant and so discount it in the computation of surface reflectances. These include specular highlights on shiny objects within a scene, the colouring of shadows between mutually illuminated objects, and covariations of wavelength and intensity statistics within a scene.
It should be clear that many aspects of colour constancy are likely to depend upon cognitive factors in the interpretation of a visual scene. Nevertheless, understanding the anatomy and physiology of wavelength processing provides us with substantial insights into the mechanisms of colour vision and the potential neural correlates of colour experience.
Three different types of cone-shaped photoreceptors in the retina convert lights of differing wavelengths into neural signals (Fig. C2). Recently it has become possible to image, determine the type of, and illuminate individual cones in the living human retina. One of the most remarkable findings of this work is that stimulating any type of cone can evoke any colour experience, demonstrating that colour experience is not determined by cone activations but by the way they are subsequently processed (Hofer et al. 2005). The three cone types have peak sensitivities to lights with wavelengths of 560 nm, 530 nm, and 430 nm, and are referred to as L, M, and S (long-, medium-, and short-wavelength sensitive) cones respectively (informally, red, green, and blue cones). The processes through which colour percepts are derived from wavelength information also begin in the retina but continue in the dorsal lateral geniculate nucleus (dLGN) of the thalamus, striate cortex, and extrastriate areas beyond it.
Fig. C2. The retina contains three types of daylight photoreceptors called cones. Each type contains a different light-sensitive pigment which absorbs light and produces neural signals in response. The three pigments are sensitive to short- (blueish), medium- (greenish), or long- (reddish) wavelength lights as shown above, and are hence referred to as S, M, and L cones (see Colour Plate 6).
In the retina three types of ganglion cells receive inputs from cones and extract information from them—they form the start of the M (luminance contrast), P (red–green), and K (blue–yellow) visual processing channels. All three ganglion cell types have centre–surround receptive field organization. The types of cones feeding each ganglion cell type, and the manner in which stimulation of the centre and surround components of their receptive fields interact, determine the nature of the information conveyed from them (Dacey 2000).
Parasol cells pit excitation from both L and M cones against inhibition, again from both (few if any S cones contribute). As a consequence of the symmetry between inhibitory and excitatory inputs, such cells fail to signal overall changes in the intensity of light filling their entire receptive field but are sensitive to differential illumination of centre and surround, and can thus signal the presence of a luminance-varying edge falling in the receptive field. These cells form the start of the M-channel and might be seen as implementing the broadening of sensitivity to luminance across the spectrum, discussed earlier, by integrating signals from different receptors (Fig. C3a).
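A toy version of such a unit may help fix ideas. In the sketch below (illustrative numbers; real receptive fields are two-dimensional and graded), summed L and M activations in the centre of a one-dimensional strip excite the unit while the same sum in the surround inhibits it, so uniform fields of any intensity leave it silent while a luminance edge drives it.

```python
import numpy as np

def parasol(L_img, M_img, centre, surround):
    # Luminance = summed L and M signals; centre excites, surround inhibits.
    lum = L_img + M_img
    return lum[centre].mean() - lum[surround].mean()

centre, surround = slice(4, 8), slice(0, 4)

# Uniform fields produce no response, whatever their intensity...
print(parasol(np.full(12, 0.3), np.full(12, 0.3), centre, surround))  # 0.0
print(parasol(np.full(12, 0.9), np.full(12, 0.9), centre, surround))  # 0.0

# ...but a luminance edge between surround and centre does.
edge = np.array([0.2] * 4 + [0.8] * 8)
print(parasol(edge, edge, centre, surround))  # 1.2: an edge is signalled
```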
Both midget ganglion cells and small bistratified cells pit signals from cones of one type against signals from other types and hence convey wavelength information. Midget ganglion cells (the start of the P-channel) pit signals from L and M cones against one another (Fig. C3b). Small bistratified cells (the start of the K-channel) pit signals from S cones against combined L and M cone signals. In the periphery of the retina this opponent receptive field structure breaks down for midget ganglion cells.
As discussed above, in principle, chromatic opponency permits the separation of wavelength and intensity signals. Changes in intensity will act equally on the inhibitory and excitatory fields and hence produce little response. A change in wavelength will, however, elicit a response as it differentially influences the excitatory and inhibitory fields. Such cells can be seen as disentangling wavelength from intensity. They do not, however, signal chromatic contrast—stimulating their centre and surround fields with lights of differing wavelengths produces weak or non-existent responses. There is evidence, nevertheless, that L/M cone opponent signals might help animals detect food sources by wavelength even if determining the shape of the food items requires additional processing (Lovell et al. 2005).
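These properties, including the failure to signal chromatic contrast, can be illustrated in the same toy framework as the luminance unit above (again with purely illustrative numbers): here L signals in the centre excite the unit and M signals in the surround inhibit it.

```python
import numpy as np

def single_opponent(L_img, M_img, centre, surround):
    return L_img[centre].mean() - M_img[surround].mean()

centre, surround = slice(4, 8), slice(0, 4)
flat = np.full(12, 0.5)

# Intensity changes act on excitation and inhibition alike: no response.
print(single_opponent(flat, flat, centre, surround))          # 0.0
print(single_opponent(2 * flat, 2 * flat, centre, surround))  # still 0.0

# A uniform wavelength shift (more L, less M everywhere) drives it strongly.
print(single_opponent(np.full(12, 0.8), np.full(12, 0.2), centre, surround))  # 0.6

# But a chromatic edge (reddish centre, greenish surround) silences it: the
# greenish surround inhibits as strongly as the reddish centre excites, so
# the unit does not signal chromatic contrast.
L_img = np.array([0.2] * 4 + [0.8] * 8)
M_img = np.array([0.8] * 4 + [0.2] * 8)
print(single_opponent(L_img, M_img, centre, surround))        # 0.0
```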
Fig. C3. Outputs from cones feed into cells further into the visual system with a ‘centre–surround’ spatial organization. (a) At the start of the M-channel combined L and M cone signals in one part of visual space inhibit a cell while the same combination in the surrounding part of space excite it (inhibition and excitation switch roles in other cells). The result is a cell which is sensitive to luminance contrast (edges). (b) At the start of the P-channel L and M cone signals are not combined but are themselves put into opposition. For example, L signals in one part of visual space excite a cell while the M signals in the surrounding part of space inhibit it (inhibition and excitation switch roles in other cells; there are also cells which have inputs from S cones). The result is a cell which is sensitive to variation in the wavelength of light but whose response does not change as luminance varies. (c) In primary visual cortex, combinations of cone signals with excitatory and inhibitory action are put into spatial opposition. Unlike the cells in (b) these ‘double-opponent’ cells are sensitive to chromatic contrast (see Colour Plate 7).
In both the P- and K-channels the receptive field organization found in the retina is replicated in the LGN—in the parvocellular layers and in cells between layers respectively (although K-cells identified by cell-membrane chemistry are also found in the parvocellular layer, with a smaller number being found in the magnocellular layer). Both P- and K-channels project on to cells in the cytochrome-oxidase blobs of striate cortex (groups of cells identified by their membrane chemistry). Some K-cells also project directly to V2 and some P-cells also innervate *V1 interblobs.
There is recent evidence for cells in V1 that respond selectively to specific chromatic contrasts (Conway 2001, Johnson et al. 2001). Unlike the colour-opponent cells in the retina and LGN, these cortical cells have a double-opponent organization—their centre fields are both excited by one class of cone input and inhibited by another, while in their surround fields the cone types exert the opposite influence (the class that inhibits the centre excites the surround, and vice versa; Fig. C3c). Activation of both the centre and surround fields of such cells will be modulated by changes in the wavelength, but not the intensity, of light. The optimal stimulus will be one with different wavelengths in the centre and surround regions of the receptive field. These cells may have inputs from all three cone types, so cells selective for a wide range of contrasts, and for combinations of luminance and colour contrast, could exist. Such cells have been shown to respond well to borders between differently coloured areas of an image. They could contribute both to the segmentation of the visual scene on the basis of colour borders and to early stages of some colour constancy processes (Fig. C4).
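Continuing the same toy framework (illustrative numbers only), a double-opponent unit gives its strongest response to exactly the chromatic edge that silences the single-opponent unit sketched earlier, and no response to a uniformly reddish field.

```python
import numpy as np

def double_opponent(L_img, M_img, centre, surround):
    # Centre: excited by L, inhibited by M; surround: signs reversed.
    centre_opp = L_img[centre].mean() - M_img[centre].mean()
    surround_opp = L_img[surround].mean() - M_img[surround].mean()
    return centre_opp - surround_opp

centre, surround = slice(4, 8), slice(0, 4)

# A uniform reddish field: centre and surround opponency cancel.
print(double_opponent(np.full(12, 0.8), np.full(12, 0.2), centre, surround))  # 0.0

# A chromatic edge (reddish centre, greenish surround): the optimal stimulus.
L_img = np.array([0.2] * 4 + [0.8] * 8)
M_img = np.array([0.8] * 4 + [0.2] * 8)
print(double_opponent(L_img, M_img, centre, surround))  # 1.2
```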
Fig. C4. Chromatic contrast can be more useful in detecting the edges of objects than brightness contrast, which is often due to lighting effects such as shadows. The cone activations elicited by the image (a) were used to compute areas of high brightness contrast (b) and high colour contrast (c) (from L and M cone signals). Brightness contrast is detected by luminance opponent cells in the retina and LGN. Colour contrast is detected only by double-opponent cells in striate cortex. The photograph and cone activation data were kindly supplied by Professor T. Troscianko and Dr P. G. Lovell, from Lovell et al. (2005). The image processing is by the author (RWK). (See Colour Plate 8).
Cells in V1 blobs project to the ‘thin stripes’ in V2. There is tentative evidence that the location of these cells within each stripe is organized in a systematic manner—in colour maps (Xiao et al. 2003). Combined with the likelihood that these cells are not responding to wavelength per se, but to the chromatic contrast of light in a particular location against its background, this raises the intriguing possibility that the location of neural activation in V2 thin stripes will correlate quite well with perceived colour.
The evidence for colour processing in V1 and V2 is quite recent; earlier work on cortical colour processing concentrated on extrastriate areas in ventromedial occipital cortex. The clinical condition of cerebral achromatopsia, in which patients lose the ability to perceive colour not as a result of retinal abnormalities but as a consequence of brain damage, provides strong evidence that brain areas specialized for colour perception exist beyond striate cortex. *Functional brain imaging studies have also shown increases in cerebral blood flow (implying increased brain activity) in this region when normal subjects observed coloured, as opposed to monochrome, images. The region was therefore dubbed the colour centre (Lueck et al. 1989). Responses of single neurons in monkey cortical area V4 to coloured stimuli suggested that the colour centre might correspond to V4. A number of problems arose with this interpretation. The selectivity of the response of neurons to particular characteristics of stimuli differs only in degree between brain areas: some neurons in nearly all visual areas respond selectively to wavelength, and the proportion in V4 is not comparatively large. In addition, damage to area V4 in monkeys does not result in deficits in discriminations based on wavelength, although deficits were induced by damage to areas anterior to V4 (Heywood et al. 1995).
Areas distinct from those involved in colour perception but associated with the storage of colour knowledge are activated when recalling objects’ colours, or differentially activated when seeing objects in typical and atypical colours (Zeki and Marini 1998, Chao and Martin 1999). In addition to imaging studies which isolate areas lateral and anterior to the colour centre, the wide variety of colour-related pathologies resulting from brain damage suggests that different aspects of colour information are represented in anatomically distinct regions. Patients have been found who are unable to retrieve colour information about objects, but retain their ability to discriminate, name, and sort colours—just the deficit one might expect to result from damage to an area storing object–colour associations (Miceli et al. 2001). Other neurological conditions in which colour discrimination is preserved include those where colour-naming (Oxbury et al. 1969), short-term memory for colours (Davidoff and Ostergaard 1984), or colour-sorting (Beauvois and Saillant 1985) are impaired. Although the functions lost in these disorders might contribute to colour experience through the role of cognitive factors in colour constancy, these functions cannot be necessary for the experience of colour qualia—patients with these deficits do not report any loss of colour experience. This can be contrasted with cerebral achromatopsia, in which colour experience is indeed lost. It has been suggested that the loss of colour experience in cerebral achromatopsia is related to failure of aspects of colour constancy and so, perhaps, the origin of colour experience arises out of the process of estimating the reflectance properties of surfaces (Kentridge et al. 2004).
Colour perception has been used as a favourite example in philosophical discussions of brain and consciousness (Jackson 1986). Although we now know many of the details of how, and even why, we see in colour, it is hard to see how this knowledge could help us understand what it is like to see in colour.
ROBERT W. KENTRIDGE AND CHARLES HEYWOOD
Allen, G. (1879). The Colour Sense: Its Origin and Development.
Beauvois, M. F. and Saillant, B. (1985). ‘Optic aphasia for colors and color agnosia—a distinction between visual and visuoverbal impairments in the processing of colors’. Cognitive Neuropsychology, 2.
Brainard, D. H. (2003). ‘Color constancy’. In Chalupa, L. M. and Werner, J. S. (eds) The Visual Neurosciences.
Chao, L. L. and Martin, A. (1999). ‘Cortical regions associated with perceiving, naming, and knowing about colors’. Journal of Cognitive Neuroscience, 11.
Conway, B. R. (2001). ‘Spatial structure of cone inputs to color cells in alert macaque primary visual cortex (V-1)’. Journal of Neuroscience, 21.
Dacey, D. M. (2000). ‘Parallel pathways for spectral coding in primate retina’. Annual Review of Neuroscience, 23.
Davidoff, J. B. and Ostergaard, A. L. (1984). ‘Colour anomia resulting from weakened short-term colour memory’. Brain, 107.
Dixson, A. F. (2000). Primate Sexuality.
Heywood, C. A., Gaffan, D., and Cowey, A. (1995). ‘Cerebral achromatopsia in monkeys’. European Journal of Neuroscience, 7.
Hofer, H., Singer, B., and Williams, D. R. (2005). ‘Different sensations from cones with the same photopigment’. Journal of Vision, 5.
Jackson, F. (1986). ‘What Mary didn’t know’. Journal of Philosophy, 83.
Johnson, E. N., Hawken, M. J., and Shapley, R. (2001). ‘The spatial transformation of color in the primary visual cortex of the macaque monkey’. Nature Neuroscience, 4.
Kentridge, R. W., Heywood, C. A., and Cowey, A. (2004). ‘Chromatic edges, surfaces and constancies in cerebral achromatopsia’. Neuropsychologia, 42.
Land, E. H. and McCann, J. J. (1971). ‘Lightness and retinex theory’. Journal of the Optical Society of America, 61.
Lovell, P. G., Tolhurst, D. J., Párraga, C. A. et al. (2005). ‘Stability of the color-opponent signals under changes of illuminant in natural scenes’. Journal of the Optical Society of America A, 22.
Lueck, C. J., Zeki, S., Friston, K. J. et al. (1989). ‘The colour centre in the cerebral cortex of man’. Nature, 340.
Miceli, G., Fouch, E., Capasso, R., Shelton, J. R., Tomaiuolo, F., and Caramazza, A. (2001). ‘The dissociation of color from form and function knowledge’. Nature Neuroscience, 4.
Oxbury, J. M., Oxbury, S. M., and Humphrey, N. K. (1969). ‘Varieties of colour anomia’. Brain, 92.
Pichaud, F., Briscoe, A., and Desplan, C. (1999). ‘Evolution of color vision’. Current Opinion in Neurobiology, 9.
Regan, B. C. et al. (2001). ‘Fruits, foliage and the evolution of primate colour vision’. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 356.
Sumner, P. and Mollon, J. D. (2000). ‘Catarrhine photopigments are optimised for detecting targets against a foliage background’. Journal of Experimental Biology, 203.
Xiao, Y., Wang, Y., and Felleman, D. J. (2003). ‘A spatially organized representation of colour in macaque cortical area V2’. Nature, 421.
Zeki, S. and Marini, L. (1998). ‘Three cortical stages of colour processing in the human brain’. Brain, 121.
colour vision, tetrachromatic The term tetrachromacy describes the physiological possession of four different classes of simultaneously functioning retinal photopigments (also called weak tetrachromacy). From an empirical standpoint, tetrachromatic colour vision (or strong tetrachromacy) additionally requires demonstrating that mixtures of four independent appropriately chosen primary lights will simulate all distinctions in appearance possible in visible colour space. Independence of the primary lights implies that no mixtures of any subset of these lights (or their intensity variants) will produce an identical match to any combination of mixtures of the remaining lights. By comparison, trichromacy empirically requires only three primaries to simulate all visible colours.
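The role of independence, and the counting of primaries, can be put in elementary linear-algebra terms. In the sketch below the response matrices are invented for illustration (they are not measured sensitivities): with three receptor classes, matching a test light means solving three equations in three primary intensities, which generically succeeds; a fourth receptor class adds a fourth equation, which three primaries generically cannot satisfy.

```python
import numpy as np

# Rows: receptor classes; columns: each class's response to one unit primary.
P3 = np.array([[0.90, 0.20, 0.05],   # class 1 responses to primaries 1-3
               [0.40, 0.80, 0.10],   # class 2
               [0.05, 0.10, 0.90]])  # class 3

test = np.array([0.5, 0.6, 0.3])     # a test light's receptor excitations
print(np.linalg.solve(P3, test))     # 3 equations, 3 unknowns: a match exists

# A fourth receptor class turns this into 4 equations in 3 unknowns,
# which generically have no exact solution: a fourth primary is required.
P4 = np.vstack([P3, [0.70, 0.50, 0.08]])  # fourth class's responses
test4 = np.append(test, 0.45)
w, residual, rank, sv = np.linalg.lstsq(P4, test4, rcond=None)
print(residual)  # non-zero residual: no exact three-primary match
```

(As in real colour matching, a negative solved intensity corresponds to moving that primary to the test side of the field rather than a physical impossibility.)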
Established theory states that humans with normal colour vision are trichromats (as, primarily, are Old World monkeys and apes). The first element of trichromacy is the output from three simultaneously functioning retinal cone classes: short-, medium-, and long-wavelength sensitive (SWS, MWS, LWS) cones. Three cone classes alone do not establish a trichromat colour code, however. A postreceptoral code for three categories of signal is also needed. A standard assumption in vision science is that the postreceptoral recoding of cone outputs initiates the neural trivariant (or trichromatic) property of human colour perception, and the need for only three primary lights to match any test light.
1. Animal tetrachromacy
2. Potential human tetrachromacy
3. Empirical studies of human tetrachromacy
4. Tetrachromacy controversies
Tetrachromacy is an early vertebrate characteristic, existing in fish and reptiles, and is evolutionarily more ancient than primate trichromacy. Essentially all diurnal birds have four retinal cone types (two SWS classes, plus a MWS and a LWS class) which neurally produce four-dimensional colour experience, or tetrachromatic colour vision. Such birds probably perceive a greater number of distinct colours than humans do, and many more colours than dichromat mammals. Generally, non-human Old World primates tend to be trichromatic and New World primates dichromatic. Recent studies have found that some New World monkeys—the squirrel monkey, spider monkey, marmoset, and dusky titi—are colour vision polymorphic species in which the base condition is dichromacy, although a considerable proportion of individuals are trichromats (Jacobs 1996, Jacobs and Deegan 2005). Many animal species (e.g. squirrels, rabbits, some fishes, cats, and dogs) are dichromatic (as are some colour-deficient humans); they possess only two functioning classes of cone photopigments and need only two primary lights to match the colour of any test light.
Physiological considerations of potential human tetrachromacy began in the 1940s with genetic studies of inherited colour vision deficiencies, or Daltonism. Approximately 8% of Caucasian males exhibit some degree of colour vision deficiency caused by inheriting altered LWS and MWS photopigment genes on the X chromosome. Males, possessing a single X chromosome, are less likely to express both LWS and MWS retinal photopigments than are females, who have two X chromosomes. Furthermore, a female carrying altered photopigment genes may not experience colour vision deficiency, although her male offspring will likely inherit it. Photopigment gene deletions during expression (due to intergenic non-homologous recombination), and alterations (due to missense mutations, coding sequence deletions, or intragenic crossover between different genes), underlie Daltonism. Failure to express either the LWS or MWS photopigment produces a Daltonian form of dichromacy, and expression of altered photopigment genes can lead to colour vision anomalies.
For many years scientists have known that some fraction of human females inherit the genetic potential to produce four cone photopigment variants, and actually express these variants as distinct retinal cone classes with four different spectral sensitivity distributions. Certain females of ‘heterozygous’ genotypes can express both altered and ‘normal’ forms of photopigment genes thought to underlie colour matching differences. Retinal expression of four distinct cone classes requires random X-inactivation during embryonic development so that genes from both altered and normal pigment genes are alternatively expressed as photopigments across the retina’s cone cell mosaic. The resulting mosaic may include a patchwork of usual SWS, MWS, and LWS cone types, plus, for example, a fourth long-wavelength class variant with peak sensitivity differing from the usual LWS class by 4–7 nm. Frequency estimates of Caucasian females who are potential tetrachromats range between 15% and 47% depending on the heterozygote genotypes considered. Less is known about the actual frequency of expressing four retinal cone classes.
While the potential for human tetrachromacy exists, the general theory suggests that humans process no better than a trivariant colour signal. Thus, four retinal cone classes are a necessary (but not a sufficient) condition for tetrachromatic colour perception; for true tetrachromacy, tetravariant colour signal processing is also needed.
Some scientists conjecture that humans with four retinal photopigment classes might experience a dimension of perceptual experience denied to trichromat individuals (Jordan and Mollon 1993), implying that cortically humans might process four colour channels, or otherwise learn to use the additional information. New World primate trichromacy suggests a parallel: female spider monkeys possessing extra photopigment gene variants are trichromats, while both males and females without such variants experience only dichromat colour vision. Gene variants thereby allow some female monkeys to experience a dimension of colour experience that other females and males do not (Jordan and Mollon 1993).
Anomaloscope investigations. Typically, psychophysical anomaloscope ‘colour-matching’ investigations are used to study human tetrachromacy. In an anomaloscope task observers monocularly view a bipartite field of primary mixtures and adjust the primaries in one half-field until a ‘colour match’ with a fixed test light in the other half-field is obtained. Nagy et al. (1981) examined potential tetrachromacy using such a task with chromatic annulus-surround stimuli and a large-field Rayleigh match task variant. Jordan and Mollon (1993) used both large-field Rayleigh matching and a ratio-matching task where ratios of pairs of primary lights are mixed to match a test light. For evaluating signal processing mechanisms most anomaloscope investigations distinguish ‘weak’ and ‘strong’ forms of tetrachromacy to interpret mixture settings of potential tetrachromats. Weak tetrachromacy occurs if an observer has four different cone classes but lacks the postreceptoral capacity to transmit four truly independent colour signals. Nagy et al. (1981) demonstrated this form in potential tetrachromats who accepted trichromatic colour matches made in a context-free (black annulus) background condition, but did not exhibit the stability of such matches under different chromatic background conditions (unlike trichromats). The observation that matched fields become distinguishable in a coloured background clearly indicates weak tetrachromacy, suggesting that the kind of stimulus additivity found in trichromats fails for some potential tetrachromats, or that signals from the extra cone class produce perceptual differences when viewing is contextualized. Nagy et al. (1981) also imply that tetrachromat retinal mosaicism may be a contributing factor in their study.
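A toy Rayleigh-style calculation shows why such matches can diagnose pigment variants. The Gaussian pigments and peak wavelengths below are assumptions rather than measured data, and the real procedure (adjusting a red/green mixture ratio plus test luminance) is simplified here to solving directly for two primary intensities; the point is only that an observer with a shifted pigment settles on a measurably different match.

```python
import numpy as np

WL = np.array([546.0, 589.0, 670.0])  # green primary, yellow test, red primary (nm)

def pigment(peak, width=60.0):
    # Idealized Gaussian pigment sensitivity evaluated at the three lights.
    return np.exp(-0.5 * ((WL - peak) / width) ** 2)

def match_settings(l_peak, m_peak):
    L, M = pigment(l_peak), pigment(m_peak)
    # Solve g*green + r*red == yellow in both pigments' responses.
    A = np.array([[L[0], L[2]],
                  [M[0], M[2]]])
    b = np.array([L[1], M[1]])
    return np.linalg.solve(A, b)  # (green, red) intensities at the match

print(match_settings(560.0, 530.0))  # one observer's match point
print(match_settings(556.0, 530.0))  # shifted L pigment: a different match
```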
Strong tetrachromacy arises from four different cone types plus the capacity to transmit four independent cone signals. Such observers would reject large-field trichromat colour matches and require four variables to match all colours. Jordan and Mollon (1993:1501) showed that 8 out of 14 candidate tetrachromats refused large-field Rayleigh matches, providing ‘preliminary evidence for [the strong form of] tetrachromacy’. They also identified two subjects with precise matches in a ratio-matching task (as would have been expected from a tetrachromat in their experiment), suggesting that one subject’s ‘tetrachromacy is not of the form we initially envisaged’ (1993:1503) although she ‘remains in play as a candidate tetrachromat in the strong sense’ (1993:1505). Jordan and Mollon (1993) nevertheless remain tentative concerning the existence of ‘strong’ human tetrachromacy.
Conservative interpretations of both Nagy et al. (1981) and Jordan and Mollon (1993) suggest weak tetrachromacy interferes with the ability of potential tetrachromats to repeat match mixture settings when producing mixtures with fewer than four variables. In this regard, at least, some potential tetrachromats differ from trichromats. Additional factors are likely to influence the empirical identification of human tetrachromats: complexity of colour experience will increase with scene, stimulus, and viewing complexity. Monocularly viewed stimuli used in anomaloscope investigations impose empirical constraints on the dimensionality of perceptual experience, whereas naturalistic binocular viewing of contextualized scenes is more likely to uncover tetrachromacy. Thus, the empirical detection of human tetrachromacy is more likely to occur under complex stimuli and viewing conditions (e.g. Bimler and Kirkland 2009).
Non-anomaloscope investigations. Some investigations have employed increased stimulus complexity, examined more natural processing conditions and behaviours, and obtained human observer genotype information (Jameson et al. 2001, 2006, Sayim et al. 2005). These investigations used molecular genetic methods to identify potential retinal tetrachromats, and found differences in perceptual behaviours when a genetic potential existed for more than three photopigment classes. Behaviours that differentiated these potential tetrachromats from trichromat controls included perceiving more colours in diffracted spectra (Jameson et al. 2001); performance variation on a standardized test for trichromacy that was correlated with indices of richer colour experience (Jameson et al. 2006); and colour similarity and colour naming patterns showing cognitive colour processing variation among potential tetrachromats (Sayim et al. 2005). Although such investigations were not designed to address colour vision neural mechanisms or specify forms of ‘weak’ or ‘strong’ tetrachromacy, the results show that using empirical conditions that approximate more naturalistic viewing circumstances (e.g. binocular viewing and contextualized stimuli) makes tetrachromacy more apparent, and that the genetic potential to express more than three cone classes correlates with differences in colour categorization, naming, and colour similarity judgements. These findings are among the first to suggest human tetrachromat differences for such colour processing behaviours.
Despite the norm of human trichromacy, empirical support for human tetrachromacy exists, and other terrestrial species have evolved the neural hardware for tetrachromacy. Because the evolution of human colour vision capacities is not static, cortical rewiring for tetrachromacy could occur, similar to the remapping seen in other visual processing types (e.g. *achromatopsia), suggesting that the assumed trivariant recoding of four retinal colour signals may be more a conservative theoretical constraint than an actual neural limitation. Other human sensory domains show specialization: gustatory ‘supertasters’ exhibit taste threshold differences linked to variation in taste sensor densities. Human colour vision abilities vary enormously across normal individuals and most of these differences have a genetic base, like the basis underlying tetrachromacy.
Anomaloscope results show that a few ‘strong’ and ‘weak’ human tetrachromats demonstrate subtle but reliable colour-processing differences; thus, even under an assumed neural trivariance constraint, it is reasonable to expect some tetrachromat perceptual difference. Also, no radical hypotheses are needed to make human tetrachromacy plausible, given the prevalence of tetrachromacy in non-primate species, the precedents from New World primate trichromacy (Jacobs 1996), and primate diversity (Jacobs and Deegan 2005).
Exactly how the human visual system processes retinal signals to produce colour experience remains unknown. However, the visual system can inductively reconstruct information from the environment (often inferring more than that which is present in the signal alone), and processing extra dimensions of colour experience could be within the computational power of visual system neural circuitry.
Clearly, human tetrachromacy requires further empirical demonstration and discussion. Regardless of the frequency of occurrence of strong or weak tetrachromacy, the potential presence of retinal tetrachromats within a normal trichromat population provides additional opportunities to analyse relations between individual perceptual colour experience and colour-processing behaviours. Trichromacy allows humans to distinguish an estimated 2 million different colours. Even if retinal tetrachromacy produces only minor discriminable differences in a small proportion of human observers, these phenomena remain important from both a perceptual and an evolutionary modelling perspective. Given findings suggesting the possibility of human tetrachromacy, future research should clarify the nature of this potential variation in human perceptual experience.
KIMBERLY A. JAMESON
Bimler, D. and Kirkland, J. (2009). ‘Colour-space distortion in women who are heterozygous for colour deficiency’. Vision Research, 49.
Jacobs, G. H. (1996). ‘Primate photopigments and primate color vision’. Proceedings of the National Academy of Sciences of the USA, 93.
—— and Deegan II, J. F. (2005). ‘Polymorphic new world monkeys with more than three M/L cone types’. Journal of the Optical Society of America A, 22.
Jameson, K. A., Highnote, S. M., and Wasserman, L. M. (2001). ‘Richer color experience in observers with multiple photopigment opsin genes’. Psychonomic Bulletin and Review, 8.
——, Bimler, D., and Wasserman, L. M. (2006). ‘Re-assessing perceptual diagnostics for observers with diverse retinal photopigment genotypes’. In Pitchford, N. J. and Biggam, C. P. (eds) Progress in Colour Studies 2: Cognition.
Jordan, G. and Mollon, J. D. (1993). ‘A study of women heterozygous for colour deficiencies’. Vision Research, 33.
Nagy, A. L., MacLeod, D. I. A., Heyneman, N. E., and Eisner, A. (1981). ‘Four cone pigments in women heterozygous for color deficiency’. Journal of the Optical Society of America, 71.
Sayim, B., Jameson, K. A., Alvarado, N., and Szeszel, M. K. (2005). ‘Semantic and perceptual representations of color: evidence of a shared color-naming function’. Journal of Cognition and Culture, 5.
coma See BRAIN DAMAGE
commissurotomy and consciousness Fifty years ago a surgical procedure for research with cats and monkeys initiated new research on cortical connections and consciousness. This commissurotomy or ‘split-brain’ procedure combined division of the commissures linking left and right cerebral cortices, including the massive corpus callosum, with an operation to the optic chiasma to separate the inputs from the two eyes.
1. Investigating the organ of consciousness
2. How animals act and know with a divided cortex
3. Manual dominance and antecedents of human cerebral asymmetry
4. Consciousness in the human split brain
5. Cerebral asymmetry, speaking, and knowing
6. Allocation of consciousness between the hemispheres
Brains of animals that swim, fly, or walk map directions for movements in a body-related behaviour space, and they have sensory fields that take up awareness of the world to guide them in the same space. They are bisymmetric, with left and right halves at every level. Larger, longer-lived, more agile, and more intelligent animals have bigger brains, the area of the cortex of the two cerebral hemispheres being in proportion to how long they live and learn. Humans have the largest cortex, and injury or malfunction in it disturbs both intentions of moving and the consciousness of perceiving.
Medical scientists using anatomical studies and clinical accounts of the effects of restricted *brain damage have long tried to fathom how the cortex mediates consciousness, and what the rest of the brain (the limbic system, thalamus, basal ganglia, brainstem, and cerebellum), with many more neurons, contributes. In the 17th century some proposed that the corpus callosum, the conspicuous bridge of fibres that connects the cortices of the two hemispheres, is the seat of the soul, conferring an essential unity to consciousness. The innate asymmetry of the human brain became an exciting topic in the 19th century with evidence that lesions confined to the left hemisphere could cause loss of speaking or comprehension of speech, as well as one-sided problems with intentions of the hands. To some, this seemed to indicate that only the language-aware hemisphere is conscious. The work of neurologists Carl Wernicke, Hugo Liepmann, Jules Dejerine, and Kurt Goldstein confirmed that left and right cortices are different, specifically in the cultivated functions of intelligence, and they concluded that the corpus callosum must contribute to normal integration of mental states, given that damage to it could render the left hand of a right-handed person incapable of obeying verbal instructions, though its manipulatory habits remained normal.
The commissurotomy procedure was further refined for medical use in the development of an operation that, by sectioning parts of the commissures, helped children and adults with life-threatening epilepsy to live more normal lives. Roger Sperry received the Nobel Prize in Physiology or Medicine in 1981 for his contributions to this research clarifying cerebral functions of consciousness in animals and humans. Now there is a very large literature on the effects of commissurotomy on perception, learning, and motor coordination. The cerebral asymmetry of human consciousness, and its great variety, is better appreciated, as well as its relation to motivations and emotions that involve the whole brain asymmetrically. Great popular interest has been excited concerning how, and why, individuals come to use left and right brains differently.
In 1952 Sperry published an essay entitled ‘Neurology and the mind–brain problem’ in which he argued that speculations about the nature of consciousness are best inferred from patterns of motor output—from actions of the body generated by expectant ‘motor images’ or ‘motor sets’—rather than from hypothetical ‘processing’ of sensory input by a disembodied intelligence. This theory guided Sperry’s investigation of the mechanisms of consciousness over the next 40 years. He believed that axonal connections spanning long distances in cortical white matter mediate intentions and awareness, and he proposed this idea should be tested by surgical experiments.
Also in 1952, Ronald Myers, working with Sperry in Chicago, divided the optic chiasma so each eye of a cat was connected only to the cortex on the same side, and then trained the cat to choose between visual stimuli with one eye at a time. Myers proved, first, that learning transferred between the separated inputs of the eyes, then that transfer was abolished when the forebrain commissures were divided in a second operation. The chiasma–commissure sectioned ‘split-brain’ cats could even be trained to make opposite choices with the two eyes. Apparently their visual consciousness was divided. Nevertheless, when free they walked normally, showing no obvious clumsiness or confusion about how to see the world. Presumably the freely moving cat could distribute its brain activity to see with one cortex at a time to access conflicting memories. Was its ‘will’ still one? Were the two hemispheres equal in potential for perception and memory? Could any information be shared through subcortical bridges? These questions remained unanswered.
In 1954 Sperry took the Hixon Chair of Psychobiology at the California Institute of Technology, where, with Myers and other graduate students and postdoctoral workers, his work with split-brain cats showed that touch perception of shapes or textures felt by left and right paws was also disconnected. Split-brain research was extended to rhesus monkeys, which were more likely to reveal the part that one-sided motor intentions of the hands could play in directing consciousness. In test boxes with fixed face masks to control which eye could see stimuli, and a sliding barrier to control which hand could push response levers or feel objects to obtain a food reward, it was demonstrated that monkeys with the optic chiasma and forebrain commissures divided had, like the operated cats, split awareness for objects seen or felt in the hand, and they, too, appeared normally coherent in movements and motivation when out of the test box, except for a loss of binocular depth perception in near central space and occasional confusions of bimanual control.
It became clear that the separated cerebral cortices of vision were not acting on their own and that other levels of seeing could perceive and learn. Experiments showed that certain stimuli could ‘leak’ learning between the eyes of a split-brain cat or monkey through a brainstem visuomotor system. These stimuli differed in intensity, complexity, size, colour, or brightness—like textures, shadows, and reflections in the world that have usefulness in guiding movement through the time/space gradients of self-related ‘ex-proprioceptive’ awareness. From my doctoral research with Sperry I concluded that monkeys have two visual systems, operating at different levels of consciousness to guide movement on different scales of moving. One is ambient, informing whole-body locomotion and posture change, orientation of the head, and reaching with the arms. The other is focal, engaging foveal attention for object identification, and for directing fast sequential manipulations of the hands. Ambient vision, involving subcortical visuomotor systems, is undivided by commissurotomy. The focal system, dependent on cortical analysis and cortico-cortical motor-perceptual links, is split.
Using stimuli projected in horizontally and vertically polarized light to test for double awareness, Sperry and I obtained evidence that subhemispheric motor processes could channel visual consciousness when a split-brain monkey was using one hand. The monkeys viewed orthogonally polarized and conflicting stimulus pairs on response panels. With the chiasma and corpus callosum divided they could learn two mutually contradictory visual discrimination habits in the right and left half brains, confirming that the split brain could keep apart two realms of awareness and memory that could both be ready to guide actions. But this learning was not just an automatic impression from stimuli; it depended on which hand–eye combination was active. After learning the task with one hand, the split-brain monkeys knew the task with the eye on the opposite side, and they could not immediately change hands. When required to make reaching and grasping movements with the ‘ipsilateral’ limb (on the same side as the seeing hemisphere, i.e. with the eye on the same side), they were unwilling, and became clumsy, as if blind. Crossed, ‘contralateral’, pathways linking each half of the cortex to the opposite hand were much more effective for guiding reaching to pick up objects and fine exploratory movements of the fingers. John Downer found the same effects in split-brain monkeys, and in the 1970s their anatomical basis was clarified by Jacoba Brinkman and Hans Kuypers. The split-brain monkeys had two eye–hand systems and showed shifts of consciousness between the separated cortices correlated with intentions to move with one or other hand.
Research with split-brain baboons in Jacques Paillard’s laboratory in Marseilles found that the animals showed a preference for using one hand for fine manipulations with visual guidance, as humans do, though the side of ‘hand dominance’ was not consistent for the baboons. Tests with a puzzle-box task requiring use of both hands proved that when the hands had learned complementary moves, the skill for timing and sequencing of the motor strategy was established in the hemisphere opposite the preferred hand, for both hands. These studies support the conclusion that cerebral dominance evolved in primates with manual skill. They relate to what Peter MacNeilage, discussing the regulation of speech, called the ‘frame and content’ strategy of complex motor articulations, which might be the basis for elaboration of the semantic and syntactic motor programmes for mimesis and language in the left hemisphere of predominantly right-handed humans. The baboons learned how to feel the components of the puzzle-box in a succession of complementary moves with the two hands, just as the organs of vocalization and the articulation of speech learn to feel and hear their different moving parts in the uttering of syllables, words, and phrases of speaking. Indeed, speech and manual skill had proved to be the functions most affected by commissurotomy in human subjects.
Sectioning of the cerebral commissures, of varying completeness, was performed on human patients in the early 20th century, to remove life-threatening tumours beneath the corpus callosum, and to prevent brain damage caused by epilepsy, but the function of the commissures remained unclear. A series of commissurotomies by Van Wagenen and Herren and tests by Akelaitis in the early 1940s showed equivocal psychological effects, partly because in many cases the sections were not complete and partly because visual testing was not sufficiently controlled, but there were signs that consciousness of the hands was partly divided, and that the right hemisphere could not speak.
The first conclusive demonstrations of profound effects of commissurotomy for human consciousness were made at Caltech in the 1960s. Los Angeles neurophysiologist and neurosurgeon Joseph Bogen, seeing that split-brain animals retained conscious control of their whole body, proposed to Sperry that selected *epileptic patients would benefit from this surgery without serious mental loss. Between 1962 and 1968, nine complete commissurotomies were performed by Philip Vogel and Bogen with success in reducing fits. Psychological tests performed by Sperry and Bogen, assisted by Michael Gazzaniga, soon revealed a profound separation in mental activities, even though, after a variable period in which speech was lost and the hands showed dissociated activities, the general psychological state and behaviour was in most cases little affected. Other studies with commissurotomy patients carried out since, in the USA, France, and Australia, have produced similar findings.
After the operation, immediate central awareness of what is being focused on by eyes or hands is divided in two. The shape of an object felt in the left hand out of sight cannot be matched to the same kind of object felt separately and unseen in the right hand. If the eyes are stationary, an object a few degrees to the left of the fixation point cannot be compared to one on the right side. Comparable divisions in olfactory and auditory awareness may be demonstrated. Furthermore, although sight and touch communicate normally on each side, left visual field to left hand or right visual field to right hand, the crossed two-hemisphere combinations fail, as if experiences of eye and hand were obtained by separate persons.
The division of sight for detail is sharp at the midline as long as the patient keeps the eyes fixated. When he or she is free to look to left and right and to see in both halves of vision what both hands are holding, the division of awareness ceases to be apparent. Moreover, outside the discriminating centre of awareness, division of consciousness is incomplete. With touch on arms, legs, face, or trunk, there is transfer of feeling between the sides. Large, long-lasting stimuli moving in the periphery of the left visual field can be described. Intuitive seeing of surroundings by ‘ambient vision’ (also called *blindsight to emphasize that it is a ‘less conscious’ level of awareness than categorical object awareness)—necessary for walking, for maintaining balance, and for locating off-centre targets of attention before the eyes move to fixate—is not divided by commissurotomy. Each cerebral hemisphere can initiate eye movements to left and right, and can reach to left and right with either hand. This ‘speculative’ or ‘expectant’ peripheral awareness, in the absence of clear contrary evidence, can generate ‘false’ or illusory notions, and can fail to register or ‘neglect’ stimuli.
The most significant finding of the early tests performed by Sperry, Bogen, and Gazzaniga was the failure of the right cerebral cortex to articulate words. When conscious of stimuli in the left visual field or left hand, the subjects were often speechless. If urged to reply, they reported some weak and ill-defined event, or else they *confabulated experiences, unable to apply a test of truth or falsity to spontaneously imagined answers to questions. With stimuli in the right field the subject could name, compare, and describe objects or the occurrence or non-occurrence of stimulus events.
Commissurotomy patients offered a direct approach to questions that have been debated in clinical neurology since the discovery, over a century ago, that muteness or disturbance of language comprehension can result from brain injury confined to the left hemisphere. Could the right hemisphere comprehend spoken or written language at all? Could it express itself in signs, by writing, or by gesture? Could it make any utterance? Could it reason and think? Was it really conscious?
In the past 40 years, Gazzaniga and his colleagues have made many tests of commissurotomy subjects, attempting to measure selective attention and perceptual information processing by cognitive modules. His ‘cognitive neuroscience’ approach popularizes the view that consciousness is a product of logical processes and dependent on ‘interpretation’ by language. It does not explain how consciousness evolved to guide animal movement with the benefit of affective regulations, or how it develops in a child. It leaves obscure how immediate sympathetic awareness of intentions and emotions in action is possible between animal subjects, and between humans, infant or adult, and how such awareness might lead to language.
The tests performed at Caltech proved that some comprehension of spoken and written language was present in the mute right side of the brain. Information about how this hemisphere should perform a test could be conveyed by telling it what to do, and if the name of a common object was projected to the right cortex only, the patient could retrieve a correct example by hand, or identify a picture of it by pointing. The right hemisphere could solve simple arithmetic problems by arranging plastic digits, out of sight, with the left hand. Nevertheless, vocabulary and sentence meaning as well as powers of calculation of the right hemisphere were greatly inferior to these abilities in the left hemisphere of the same patient. Rarely, a patient could start an utterance with the right hemisphere, but the vigilance of the more competent left hemisphere blocked the initiative after the first syllable or letter. In general only the left hemisphere could (or would) speak or calculate. Written responses to left field stimuli were more complete than spoken ones, which, as Bogen emphasized, may indicate that symbolic communication by hand activity is more fundamental or more ‘primitive’ than speech.
Jerre Levy found that the right hemisphere was superior on certain tasks that tested for non-verbal intelligence, such as visual or touch perception of configurations, and on judgements involving exploration of shapes by hand or manipulative construction of geometric assemblies or patterns. Robert Nebes confirmed that the right hemisphere was better able to recognize familiar objects with incomplete pictorial data, and better able to perceive whole shapes from parts seen or felt in the hand. Bogen described right-hemisphere thought as ‘appositional’ and left-hemisphere thought as ‘propositional’.
Perceptual confabulations were prominent. With stimuli (pictures, drawings, or words) crossing the vertical meridian, both hemispheres gave ‘false’ responses showing perceptual ‘completion’ or ‘neglect’ in the ipsilateral field. Occasionally, when the left hand attempted to touch an object perceived moving in the right field, the subject said it ‘disappeared’ just as the move was started. This ‘erasure’ from consciousness appeared to delete apperceptions in both hemispheres at the moment when one hand was starting a movement to reach a goal object on the opposite side of the body.
To further test intentional effects on consciousness, Levy, Trevarthen, and Sperry gave split-brain subjects a free choice of which hemisphere to use to respond in tests. Halves of two different pictures were joined together down the vertical midline to make a double picture or stimulus chimera. When presented to the split-brain patient with the join on the fixation point, information about each half is received in a different hemisphere. The tasks were designed so that in every trial a correct choice could be made using either left or right experience. Preference for one half of the chimera depends on one-sided intentions, or preparations to think, that arise in response to the test instructions. With this test, preferred modes of understanding of the hemispheres can be revealed, as well as functions that allocate interest and expectation between the two hemispheres. Tests showed that the right hemisphere cannot imagine the sound of a word for an object seen, so it cannot perform rhyming ‘in the head’ to match names for drawings (e.g. ‘eye’ matches ‘pie’; ‘key’ matches ‘bee’). Evidently the dominance of the left hemisphere for motor control of speaking involves a one-sided ability to predict how words will sound.
Preference for the right hemisphere in matching appearances becomes strong for unfamiliar complex shapes with no simple name, for colours, and for faces. With pictures restricted to the left hemisphere, face recognition by commissurotomy patients is poor and identification is achieved by a laborious checklist of details such as glasses, moustache, or hat that must be memorized and searched for. Dahlia Zaidel used comparison of pictures in tests of visual imagination and memory to confirm that there is a stark contrast in cognitive style, imagination, and memory strategies between the hemispheres.
Occasionally commissurotomy patients show activation of one or other side independently of task requirements. Sometimes the ‘wrong’ hemisphere attempts a task, and performance suffers. This metacontrol, biasing the link between expectations and intentions in disregard of the processing required, may contribute to habitual differences in the way normal individuals process cognitive problems. It may lie behind differences in mental abilities—e.g. leading one person to be skilled at, and prefer, visuo-constructive tasks while another is gifted at verbal rationalizations. Since the hemispheres of commissurotomy patients become progressively more alike by changes compensating for their separation, tests with them probably reveal only reduced forms of hemisphere specialization as these exist in intact brains where complementary activities cooperate.
Eran Zaidel developed a method for blocking off half of the visual field of one mobile eye of a commissurotomy patient, so that the patient could take part in much more natural ‘narratives’ of experience and share them with a researcher. A contact lens attached to the eye carries a small optical system and screen, so the patient can cast the eyes over a test array while picking up visual information with only one hemisphere. These tests prove that each of the hemispheres can elaborate awareness of the meanings of words and pictures, employing metaphor. Objects may be linked in awareness by their customary usefulness and social importance as well as by more obvious perceptual features. Names, colours, temperatures, and many other properties of things may be correctly identified from black and white pictures. The tests of Zaidel and Sperry have shown that both hemispheres of commissurotomy patients are capable of supporting personal consciousness. Each can exhibit a strong sense of the social and political meaning of pictures or objects that are being inspected while the other hemisphere is not seeing them. Comprehension of words, spoken or written, is surprisingly rich in the right hemisphere, but when words are combined in a proposition, the comprehension of the right hemisphere falls drastically. The linguistic abilities of the right hemisphere resemble those of a nursery-school child.
Fig. C5. Visual and haptic consciousness in the separated hemispheres of commissurotomy patients. (a) and (b) illustrations from Sperry’s publications: (a) the divided corpus callosum; (b) the separation of word recognition reported verbally for the right visual field from recognition of an object in the left hand to match a word in the left field. (c) How chimeric stimuli are reported, by speech (left hemisphere—right visual field) and by drawing (right hemisphere—left visual field). (d) Use of chimeric stimuli to demonstrate the different recognition systems of the left and right hemispheres. The left hemisphere matches by ‘function’ or ‘meaning’, which can be easily verbalized. The right hemisphere matches readily by appearance or form.
The human brain has inherent motives adapted to create and maintain a society and its culture by two complementary conscious systems, which differ not only in their cognitive achievements, the focus of most research studies, but also in their emotions and ‘personality’ or self–other regulation. As Donald Tucker has recorded, the two halves of the brain are adapted to guide the actions of the body with different emotions, the left being more ‘assertive’ or proactive and environment challenging, the right being more self-regulating and ‘apprehensive’. These innate differences guide the development of attachments and cooperative understanding in early childhood when the functions of the cerebral hemispheres are growing rapidly and changing in response to experience gained intersubjectively—by shared consciousness.
Research with normal subjects inspired by split-brain research proves that individuals vary greatly in asymmetric cerebral functions and consciousness. Such diverse factors as sex, age, handedness, education, and special training correlate with psychological and physiological measures of cerebral lateralization and hemispheric activation. It is not hard to perceive advantages of such physical and psychological diversity in the most highly cooperative of animal beings, in whom bodies, minds, and the actions and the relational experiences of society and culture become inseparable.
Commissurotomy patients have helped us understand how consciousness, intention, and feelings are generated in activity at different levels of brain function. Regulation of intentional activity and phenomenal experience is a multi-layered phenomenon. It does not appear necessary to imagine that the ‘self’, which must maintain a unity, is split when the forebrain commissures are cut, although some of its activities and memories are depleted or dissociated after the operation. Most importantly, the complementary emotions that regulate personal experience and moral and cooperative awareness with others are not dissociated by commissurotomy, though the efficiency of their control may be diminished.
COLWYN TREVARTHEN
Bogen, J. E. (1993). ‘The callosal syndromes’. In Heilman, K. M. and Valenstein, E. (eds) Clinical Neuropsychology.
Levy, J. and Trevarthen, C. (1976). ‘Metacontrol of hemispheric function in human split-brain patients’. Journal of Experimental Psychology: Human Perception and Performance, 2.
Marks, C. (1981). Commissurotomy, Consciousness and Unity of Mind.
Nagel, T. (1971). ‘Brain bisection and the unity of consciousness’. Synthèse, 22.
Sperry, R. W. (1974). ‘Lateral specialization in the surgically separated hemispheres’. In Schmitt, F. O. and Worden, F. G. (eds) The Neurosciences: Third Study Program.
—— (1982). ‘Some effects of disconnecting the cerebral hemispheres’ (Nobel lecture). Science, 217.
—— (1984). ‘Consciousness, personal identity, and the divided brain’. Neuropsychologia, 22.
Trevarthen, C. (1974). ‘Analysis of cerebral activities that generate and regulate consciousness in commissurotomy patients’. In Dimond, S. J. and Beaumont, J. G. (eds), Hemisphere Function in the Human Brain.
—— (1990). ‘Integrative functions of the cerebral commissures’. In Boller, F. and Grafman, J. (eds) Handbook of Neuropsychology, Vol. 4.
—— (ed.) (1990). Brain Circuits and Functions of the Mind: Essays in Honour of Roger W. Sperry.
—— and Reddy, V. (2007). ‘Consciousness in infants’. In Velmans, M. and Schneider, S. (eds) The Blackwell Companion to Consciousness.
Zaidel, D. (1994). ‘A view of the world from a split-brain perspective’. In Critchley, E. M. R. (ed.) The Neurological Boundaries of Reality.
Zaidel, E. and Iacoboni, M. (eds) (2003). The Parallel Brain: the Cognitive Neuroscience of the Corpus Callosum.
——, Iacoboni, M., Zaidel, D. W., and Bogen, J. E. (2003). ‘The callosal syndromes’. In Heilman, K. M. and Valenstein, E. (eds) Clinical Neuropsychology.
Zangwill, O. L. (1974). ‘Consciousness and the cerebral hemispheres’. In Dimond, S. J. and Beaumont, J. G. (eds) Hemisphere Function in the Human Brain.
complexity See INFORMATION INTEGRATION THEORY
concepts of consciousness Consciousness is a complex feature of our mental life, and the concepts we use to talk and think about it are correspondingly diverse. The terms ‘conscious’ and ‘consciousness’ are used in a variety of ways both in everyday speech and theoretical practice, none of which is specially privileged.
Indeed, the adjective ‘conscious’ varies not only in its meaning but also in the sorts of things to which it is applied. Sometimes it is used to attribute so-called creature consciousness to persons or organisms, and at other times to ascribe state consciousness to mental states or processes (Rosenthal 1986). Each in turn is interpreted in many ways. Thus it is crucial to explicate these various concepts clearly, and then determine what links, if any, there may be among them.
1. Creature consciousness
2. State consciousness
3. Conceptual links and relations
A person, organism or other relevant system (e.g. a suitable robot) might be described as conscious in a number of different, though perhaps interrelated, respects.
Sentience. At minimum, a conscious creature might simply be one that is sentient, i.e. capable of sensing and responding to its environment (Armstrong 1981). Organisms vary in the quantity and quality of information to which they are sensitive, and it is not clear where to draw the threshold for being conscious in the relevant sense. Indeed such sensitivity varies by degree, as does the speed and flexibility with which organisms can respond, and there may be no sharp dividing line. Plants respond adaptively to changes around them, as do protozoa, but few would regard them as conscious or sentient in the relevant sense. Mammals, birds, and even lizards seem to qualify, but what about shrimp, grasshoppers, sea slugs, or anemones? In part we may not know enough about their actual sensory and behavioural capacities, but our difficulties also reflect a certain vagueness in the concept itself. What counts as ‘sensing’ or ‘responding’ is itself far from clear in marginal cases.
Wakefulness. Most sentient creatures exhibit multiple states of alertness. They vary over time in how sensitive or responsive they are to the world around them as well as in their level of core activity. One might regard a creature as conscious only when it is at a relatively high level of alertness and thus using its sensory and response capacities in an active way. Merely having such capacities would not suffice: only when they were being actively used would a creature count as conscious in the relevant sense. It would not qualify as such when it was *sleeping deeply, in a coma, or sedated. However, just where to draw the boundaries is unclear. How alert or wakeful must a creature be to qualify? Should it count as conscious when it is dreaming, drowsing, emerging from *anaesthesia, or having an *epileptic absence? Wakefulness, like sentience, varies in degree along multiple dimensions, and there is no obviously right way to draw the boundaries for being conscious in the relevant sense.
Self-awareness. Some concepts of creature consciousness require a conscious creature to be aware not only of its surroundings but also of its own awareness of its world (Carruthers 2000). Such concepts treat consciousness as a form of self-awareness (see SELF-CONSCIOUSNESS). An automaton might be sentient and awake in so far as it actively responds to sensory inputs but not count as conscious in the self-aware sense because it lacked reflective inner-awareness of its own outer-awareness. That intuition is sometimes supported by appeal to cases such as that of the absent-minded long-distance highway driver (Armstrong 1981) or of the petit mal seizure patient who persists in an ongoing activity while ‘absent’ (Penfield 1975). They respond to their surroundings, but they are not conscious in the reflective sense because they are not aware of their own awareness.
One may interpret the self-aware requirement in multiple ways, and the relevant concept of consciousness will vary correspondingly. One might demand explicit, fully conceptualized self-awareness, in which case many complex creatures such as dogs, cats, or even young children would fail to qualify. Indeed, such a requirement might exclude all non-linguistic creatures. Alternatively, if basic forms of implicit and non-conceptual self-awareness sufficed, the concept would apply to a far wider range of cases. Creatures incapable of explicit I-thoughts might nonetheless be reflectively aware of their own minds in an implicit and non-conceptual but nonetheless very real way. A dog watching the preparation of its dinner would seem to be aware of its own hunger and desire, even if it lacks explicit I-thoughts.
*What it is like. Thomas Nagel (1974) famously defined a conscious creature as one that ‘there is something that it is like to be’. Bats and horses are conscious in the Nagel sense just if there is something that it is like to experience the world from the bat or horse point of view. A conscious creature is one that has an experiential perspective or subjective point of view.
A system might be quite sophisticated in its sensory and response abilities yet lack a point of view in the relevant subjective experiential sense. We feel confident that there is something that it is like to be a pig, a pigeon, or a lemur. But anti-computationalists have denied that there is anything it is like to be a digital electronic computer, no matter how powerful or intelligent its responses may be (Searle 1992). Having a subjective point of view seems to require something of another sort, something other than mere intelligence or response abilities. But what it is or how it might be produced remains unclear.
The notion of a point of view may get a bit fuzzy at the margins. Is there anything it is like to be an iguana or a honeybee? It may be hard to decide or even to grasp what the issue comes to in such cases. However, there are plenty of clear central cases. We seem to know it when we see it, even if we cannot say just what it is we know is there. We just know that wolves and beavers are conscious in the ‘what it is like’ sense when they smell each other on the wind.
Subject of conscious states. If the notion of state consciousness could be defined in a way that made no essential appeal to creature consciousness, then conscious creatures might be non-circularly analysed as the subjects of such states. A conscious creature would simply be one that had conscious states in the relevant sense. Understood in that way, the concept of a conscious creature is secondary and derivative from a more basic concept of a conscious state. Thus what being such a subject would involve turns crucially on how the concept of state consciousness is itself understood.
Transitive consciousness. A distinction is commonly drawn between transitive and intransitive senses of consciousness. As well as predicating consciousness of creatures in various senses, we also speak in a relational way of creatures being conscious of something. The ‘of’ here is *intentional, and it involves being directed at an object, the object of which the creature is conscious. It is that object-directedness which supposedly distinguishes transitive consciousness from the various predicative concepts above, which might be classed as types of intransitive consciousness.
As a grammatical or logical fact, the transitive and intransitive concepts differ. We speak both of a creature’s being conscious and of its being conscious of some x. The latter requires a direct object, some x, whether real or not, of which it is conscious. But one should be cautious about the psychological implications of grammatical distinctions. Many of the so-called intransitive concepts seem on analysis to involve intentional directedness as well. For example, a creature could not be sentient with respect to its surroundings nor self-aware without being intentionally directed at the relevant sorts of inner and outer objects. As a matter of grammar no direct object may be required, but as a matter of psychology an intentional object is entailed.
Concepts of state consciousness are equally diverse. Mental states and processes can be described or conceived of as conscious in a variety of distinct if interrelated senses.
State one is aware of. On one very common usage, a conscious mental state is simply a state one is aware of being in (Armstrong 1981). A conscious desire or memory is just a desire or memory that one is aware of having. An unconscious desire is transformed into a conscious one not by a change in the desire itself but by one’s becoming aware of having that desire. Conceived of in this way, the conscious–unconscious division among our mental states simply tracks the extent and limits of our self-awareness.
So-called *higher-order theories are concerned most directly with conscious states in this sense. According to higher-order theory, a mental state M of subject S is conscious just if S is also in a simultaneous higher-order state—either perception-like (Lycan 1996) or thought-like (Rosenthal 1986)—whose content is that S is in M.
Qualitative state. Conscious states might be distinguished as those that involve so-called *qualia or experiential feels, such as the red associated with the visual experience of a ripe tomato, the taste of a pineapple, or the pain in a stubbed toe (Chalmers 1996). On this concept, a conscious state must do more than merely represent; it must do so in a way that involves the presence of experiential qualities or so-called ‘raw feels’. The nature of such qualia is a matter of current controversy—as is their very existence. Thus one’s concept of qualitative state consciousness will depend upon one’s account of qualia.
Phenomenal state. The term phenomenal property is sometimes used interchangeably with qualitative property. However, if the two are distinguished, one might conceive of conscious states as states with a *phenomenal aspect that goes beyond the presence of experiential feels. Following a long tradition, going back at least to Immanuel Kant, the phenomenal has been understood to concern the overall structure of experience. It thus includes global organizing aspects such as the fact that our experience is structured as the unified continuous experience of an ongoing self set within a world of objects ordered in space, time and causality. Thus the presence of such aspects would be part of what is implied by the concept of a phenomenally conscious state (Husserl 1913/1931).
Subjective state. The term *subjective, like ‘phenomenal’, is often interchanged with ‘qualitative’, but again it may be useful to keep them distinct. The term ‘subjective’ can be understood as referring to important aspects of mind not captured by those other notions.
On an epistemic reading, a subjective state or feature might be one that can be known only in a certain way, perhaps only from the first-person experiential perspective (Jackson 1982, Van Gulick 1985). The subjective taste of a mango can be known or understood only from the perspective of those who have had that experience. On a more ontological reading, subjective features are those that can exist only as states of a subject, or only from the perspective of a subject. A pain that exists without being felt by anyone may be impossible. In this sense subjective states exist only as modes of experiential subjects. On either reading, a perspectival link is an essential element in the concept of a subjectively conscious state.
‘What it is like’ states. Thomas Nagel’s (1974) notion of ‘what it is like’ might be used to define a conscious state as one that there is something that it is like to be in. A visual perception or memory is a conscious one in this sense just if there is something that it is like to have that perception or that memory. Nagel’s concept has intuitive appeal, but what it involves is not so clear. It may be just a particularly first-person way of picking out the same features aimed at by the qualitative, phenomenal, and subjective concepts of a conscious state, or perhaps it is something more. But if so, it remains unclear what that extra element might be.
Access consciousness. From a more third-person perspective one might define conscious states in terms of the access they afford to other parts of the mind. Following Ned Block (1995), a state is *access conscious if its content is readily available for inference, application, and report. On this more functional concept, a visual perception’s being access conscious is not a matter of qualia or raw feels, but of the degree to which its visual information is readily and flexibly available for use by other mental systems in guiding personal behaviour and making reports. If one can report and act in all the relevant ways on the information in that state, then it counts as a conscious perception in the access sense.
Narrative consciousness. Normal experience exhibits a narrative structure; it is the ‘story’ of the self. It forms an ongoing and relatively coherent history from the perspective of a more or less unified self, whether actual or merely virtual (Dennett 1991). Only a subset of one’s states are included in that stream, and those that are might be counted as narratively conscious on that basis. The concept of a narratively conscious state is simply that of a state that appears in one’s stream, i.e. in one’s self-interpreted experiential narrative.
Given that there are so many concepts of creature consciousness and equally many or more of state consciousness, what relations might there be among them? What links are there, either among those in each family or between the two families of concepts? The cross-product of possibilities is far too large to survey in any comprehensive way, but a few notable relations deserve mention.
One might choose to treat either creature consciousness or state consciousness as primary, and analyse the other in terms of it. Conscious creatures might be conceived of derivatively as simply those that have conscious states. Or, working in the other direction, conscious states might be defined as those that occur in conscious creatures.
Alternatively, one might view the two conceptual families as interdependent and accord neither clear priority. The creature level and the state level might be two complementary ways of conceptualizing and viewing the overall nature and structure of conscious mentality. Our understanding of conscious states and processes may enhance our understanding of what it is to be a conscious creature. And, conversely, reflecting on the nature of conscious creatures, selves, and subjects may provide important insights for better conceptualizing conscious states and processes.
The division between creature and state concepts is cross-cut by that between so-called first-person and third-person concepts of consciousness. The former are concepts that are applied and understood from the ‘inside’, i.e. from within the experiential perspective. By contrast the latter are grounded in external factors such as behaviours, reports, and functional capacities.
Although the distinction between first-person and third-person concepts is ubiquitous in the literature, its application is less than clear in many cases. In general, it is not possible to sort the various concepts of consciousness into those that are first-person and those that are third. Most instead have a first-person and a third-person mode of use. For example, one might suppose that phenomenality is a paradigmatically first-person concept in so far as it concerns the structure and feel of experience. But one could also approach it from an external perspective, as by the method of *heterophenomenology (Dennett 1991), which aims to inferentially construct a conscious being’s ‘phenomenal world’ on the basis of its responses and reports. The same duality of use applies in general to our various concepts of consciousness. Each can be used in either a first-person or third-person mode, though one or the other method will dominate in many cases.
In terms of inter-family relations, there are obviously corresponding pairs among some state concepts and analogous creature concepts. The self-awareness concepts in each family involve a reflective aspect of inner-directed intentionality of the sort that higher-order theories aim to explain, and Nagel’s ‘what it is like’ test might be used as a criterion both for conscious creatures and for conscious states.
Within the families, specific clusters of concepts appear to overlap or interpenetrate each other. The qualitative, phenomenal, and subjective notions of state consciousness offer somewhat different slants on the experiential dimension of consciousness, which is the aspect most closely associated with the so-called *hard problem and the supposed resistance of consciousness to explanation (Chalmers 1996). Although those three concepts are often used interchangeably with each other and with the ‘what it is like’ concept, it is useful to keep them distinct. They focus on different aspects of experiential consciousness, and thus provide the opportunity to articulate and explain its various aspects and their interrelations.
The notion of a point of view—whether subjective, narrative, or phenomenal—figures as well in multiple concepts of both state and creature consciousness. Others, such as the notion of access consciousness, may seem to be orthogonal, but even in those cases important interlinks may be involved. For example, the unified integration associated with phenomenal structure seems apt to support the richness of inferential connection and application associated with access conscious states. The fact that phenomenality and access consciousness so often co-occur is not likely mere coincidence. More explanatory links appear to be involved.
We use the noun ‘consciousness’ as well as the adjective ‘conscious’ in its various senses. However, that need not imply any commitment to consciousness as a thing or substance over and above conscious states and creatures, at least not any more than we are committed to the existence of redness or squareness as distinct entities over and above red objects and square objects. Consciousness in that respect is like life. Contemporary biologists do not regard life itself as something over and above living organisms. Life is not an extra substance or force that gets added into organisms. Similarly, the existence of conscious creatures and states need not involve the presence of consciousness as an extra ingredient or substance. Conscious creatures are different from non-conscious ones just as living systems are different from non-living ones, but in neither case need that difference be a matter of an added basic force or substance.
Given that states and creatures can be conceived of as conscious in so many different ways, care must be taken to avoid confusion and merely verbal disputes. One needs to be clear about which sense one intends and not to conflate one concept with another. However, the multiplicity of concepts need be no embarrassment. Nor does it provide any reason to disparage consciousness as ill defined. Given the complex nature of conscious mentality, a pluralistic diversity of concepts is just what we need to understand it and explain it in all its many aspects.
ROBERT VAN GULICK
Armstrong, D. (1981). ‘What is consciousness?’ In The Nature of Mind.
Block, N. (1995). ‘On a confusion about the function of consciousness’. Behavioral and Brain Sciences, 18.
Carruthers, P. (2000). Phenomenal Consciousness.
Chalmers, D. (1996). The Conscious Mind.
Dennett, D. C. (1991). Consciousness Explained.
Husserl, E. (1913/1931). Ideas: General Introduction to Pure Phenomenology, transl. W. Boyce Gibson.
Jackson, F. (1982). ‘Epiphenomenal qualia’. Philosophical Quarterly, 32.
Lycan, W. (1996). Consciousness and Experience.
Nagel, T. (1974). ‘What is it like to be a bat?’ Philosophical Review, 83.
Penfield, W. (1975). The Mystery of the Mind: a Critical Study of Consciousness and the Human Brain.
Rosenthal, D. (1986). ‘Two concepts of consciousness’. Philosophical Studies, 49.
Searle, J. (1992). The Rediscovery of the Mind.
Van Gulick, R. (1985). ‘Physicalism and the subjectivity of the mental’. Philosophical Topics, 13.
conceptual thought A central question about consciousness turns on a distinction between sensory and conceptual aspects of mind: does consciousness in the phenomenal sense extend to both aspects, or is it exclusively a sensory affair? The question, though central, is hard to frame because of obscurities both in the relevant sensory/conceptual distinction, and in the relevant notion of consciousness. But briefly we may say this. The relevant sensory aspect includes appearances of colour, shape, location, movement, sound, taste, and odour; bodily feelings of pain, pleasure, pressure, and warmth; and modality-specific *imagery (e.g. visualization). And the conceptual aspect comprises the exercise of capacities for inference, classification, and analogy—what is thus involved in using concepts. Finally, phenomenal consciousness has to do with consciousness in the ‘*what it’s like’ sense. There is, for example, some way it feels to you to be in pain, there are ways colours look to you, and quite generally, there are ways in which it seems to you to have the experiences you do—and there is, in every such case, something that it is like for you to have such experiences. What it is like for you is the phenomenal character of the experience, and phenomenally conscious states are just those which have phenomenal character in this sense. Now, the issue is roughly this. Are occurrences of non-sensory conceptual thought conscious in the phenomenal sense? Should we say they have phenomenal character that is not exhausted by that of whatever sensory experiences and imagery coincide with them?
Foundational issues about consciousness depend on whether we say phenomenal character is exclusively sensory (what we might label the exclusive view), or whether we find it extends to conceptual activity more generally (the inclusive view). For example, only if we are exclusive can we accept approaches to phenomenal consciousness (like Michael Tye’s and Fred Dretske’s) that propose to explain it as a species of sensory representation. Exclusivists and inclusivists would also obviously differ on whether a search for the neural basis of consciousness should be confined to whatever aspects of the brain are dedicated to sense perception and imagery. Further: only inclusivists can propose (as do Alvin Goldman and David Pitt) that our ‘introspective’ knowledge of mind can be generally accounted for in terms of some uniquely first-personal sensitivity to the phenomenal character of our experience. And the two camps will divide on the special value that attaches to consciousness in this sense. Exclusivists will see it bound up only with a special consideration due the sentient, while inclusivists will take it to concern also whatever distinctive value accrues to ‘higher’ faculties. Exclusive views have been defended in, e.g., Jackendoff (1987), Lormand (1996), Robinson (2005), Georgalis (2003), and Wilson (2003). Inclusive views are advocated in Horgan and Tienson (2002), Pitt (2004), Siewert (1998), and Strawson (1994).
Care must be taken not to misrepresent the views of either camp. Inclusivists need not claim that thoughts have distinctive sensation-like ‘feels’. And they are not committed to saying that conscious episodes of non-imagistic thought and understanding could occur in isolation from the sort of general (e.g. inferential) abilities many would deem requisite for the possession of concepts. Exclusivists, for their part, need not simply deny that conceptual thought is phenomenally conscious—they may say, rather, that it is so only indirectly, through being accompanied by relevant sense experience or imagery (e.g. verbal imagery that expresses it). What, then, is the dispute? Most would accept that subjectively discernible differences occur in what and how we understand and think. And it will be granted that these are not just differences in how something appears to the senses, or in the modality or content of imagery. But consensus is more elusive on the question: do any such more-than-sensory differences count as *phenomenal? This question need not be addressed only with an appeal to ‘what it seems right to us to say’ directly in response to it. There is a place here for reasoned consideration. Further attempts to clarify what is meant by ‘phenomenal character’ should be part of this. Also, we should examine arguments for inclusivism (like Pitt’s) that hold we need to recognize differences in occurrent thought as phenomenal in nature to account for the immediacy with which we know our own minds. But we also need to consider arguments from critical first-person reflection on actual cases. At least two general kinds of cases seem pertinent: (1) unverbalized occurrences of non-imagistic thought; and (2) experiences of understanding.
Regarding (1): we should take note of occasions on which we have warrant for asserting: ‘It just occurred to me that …’; ‘I was just thinking of …’; or ‘I just realized that …’. If there was something it was like for a thought to occur to us on such occasions, but we cannot recall having either formed an image of or verbalized what we were thinking at that earlier time, and yet, we have no reason to attribute this to a deficit in memory, then we have prima facie reason to be inclusive. For then there is phenomenal character to thought even when we lack evidence of anything sensory with which we can plausibly identify it.
Type (2) cases—‘experiences of understanding’—are various. Consider the following. (1) You repeat a word or phrase until it becomes for you a ‘meaningless sound’. (2) You cease to follow the meaning of a text (in a language in which you are fluent) and then you re-read it, following the meaning. (3) You are faced with a syntactically odd or complex sentence—at first it does not make sense to you, and then, suddenly, it does. (4) You encounter a sentence in a language you understand poorly or not at all—initially it means nothing to you, but subsequently you ‘get’ it. Finally, (5) consider instances of ambiguous phrases—first taken (or understood) one way, then another: cases of ‘interpretive switch’. In each of (1)–(5), you may ask: (a) Is there a subjectively discernible change in what it is like for you to undergo the experiences involved? (b) If so, can you plausibly conceive of this change purely in sensory terms, without resort to what is thought, or understood by the utterance? Answering ‘yes’ to (a) and ‘no’ to (b) seems to give you reason to be inclusive.
An inclusive view must confront difficult questions that arise once we consider a distinction between what is thought (i.e. thought content) and how it is thought (i.e. its mode—as, e.g., a doubt, a supposition, or a judgement), and associated controversies over externalist and internalist views of content. Are the more-than-sensory differences in the phenomenal character of thought separable from thought-content and mode? It will seem not, if we have no way to conceive of what it is like for us to think, except in terms of thought-mode and content. But externalist views of thought content may seem to suggest an alternative. If, as externalism holds, having the right external causal connection to the world is essential to thought content, and the subjective character of experience could remain the same even when these connections were broken, might not a literally thought-less subject have experience with the very same phenomenal character as a conceptual thinker? Against this, one may argue that the thought experiments crucial to establishing externalism do not show this is possible, but, at most, only that some ways of identifying thought content would distinguish two subjects content-wise who were phenomenally the same. For example, in a standard externalist thought experiment, a (chemically uninformed) subject on Earth is thinking that ‘there is water in a glass’, while another, similarly ignorant and phenomenally type-identical subject (on Twin Earth) is thinking (not this, but) that ‘there is twater in a glass’. (‘Twater’, we assume, names a Twin Earth substance superficially similar to, but microstructurally quite different from, Earth’s H2O.) But this leaves it open that two such subjects may still be somehow cognitively similar, insofar as the conception one has of water is no different from that which the other has of twater. And if having what they share would be enough to make someone a thinker of thoughts, then externalism does not refute the idea that phenomenal character is sufficient for some kind of thought content.
CHARLES SIEWERT
Georgalis, N. (2003). ‘The fiction of phenomenal intentionality’. Consciousness and Emotion, 4.
Horgan, T. and Tienson, J. (2002). ‘The intentionality of phenomenology and the phenomenology of intentionality’. In Chalmers, D. (ed.) Philosophy of Mind: Classical and Contemporary Readings.
Jackendoff, R. (1987). Consciousness and the Computational Mind.
Lormand, E. (1996). ‘Nonphenomenal consciousness’. Noûs, 30.
Pitt, D. (2004). ‘The phenomenology of cognition, or, what is it like to think that P?’. Philosophy and Phenomenological Research, 69.
Robinson, W. (2005). ‘Thoughts without distinctive non-imagistic phenomenology’. Philosophy and Phenomenological Research, 70.
Siewert, C. (1998). The Significance of Consciousness.
Strawson, G. (1994). Mental Reality.
Wilson, R. (2003). ‘Intentionality and phenomenology’. Pacific Philosophical Quarterly, 84.
conditioning The word ‘conditioning’ has two different meanings, only one of which is uncontroversial. In common usage, even in the psychological literature, the word is used to refer to a kind (or kinds) of learning, but formally it refers only to two basic procedures that have been used to study learning: instrumental and Pavlovian conditioning. The question of what exactly is learnt under these procedures is very much a matter of ongoing debate. As will become apparent, this question is particularly important when we are considering the role that conscious awareness might play in ‘conditioning’.
1. Instrumental conditioning
2. Pavlovian conditioning
3. Evaluative conditioning
4. The role of awareness in conditioning
Instrumental, or operant, conditioning is a procedure in which behaviours increase or decrease in frequency as a consequence of being followed by reinforcing (rewarding) or punishing outcomes. For example, Thorndike, who discovered instrumental conditioning, noticed that cats escaping from his ‘puzzle boxes’ initially seemed to behave in a largely random manner until they inadvertently ‘solved’ the puzzle, pushing a lever and releasing themselves from the box. He concluded that many sophisticated, apparently goal-directed behaviours may depend on an essentially blind mechanism that simply favours successful behaviours—those that happen to lead to reward. An implicit assumption was that the strengthening of successful behaviours was completely automatic and independent of conscious thought. Pavlovian, or classical, conditioning is a procedure in which a neutral stimulus (most famously, a bell) repeatedly predicts the arrival of a behaviourally important stimulus (like food), resulting in the previously neutral stimulus acquiring the ability to produce responses relevant to the behaviourally important stimulus (like preparatory salivation).
Both these procedures reliably produce learning in a very wide range of species—from ants to elephants (and humans)—which is usually taken to mean that the learning mechanisms engaged under these procedures are in some sense fundamental properties of neural activity and that they have been conserved over evolutionary time. Indeed, a major rationale for studying learning in rats and pigeons—still the main experimental subjects for such research—is that the conditioning mechanisms themselves are thought to be essentially identical, at least across the vertebrates, and so choice of species is largely a matter of convenience. If this view is valid, it implies that the learning in such circumstances should be very simple, automatic and independent of conscious control or interference. Interestingly, in the case of human learning, exactly the opposite view now dominates—that is, that conditioning does not occur in humans in the absence of awareness of the stimulus contingencies.
It had been almost universally assumed that conscious awareness was unnecessary (indeed irrelevant) for any kind of conditioning until three separate reviews of the evidence for conditioning in the absence of awareness each concluded that the accumulated evidence was, in fact, unconvincing (Brewer 1974, Dawson and Schell 1985, Shanks and St John 1994). This conclusion was subsequently reinforced by two more critical reviews (Lovibond and Shanks 2002, Shanks 2005). The reviews differ in their details, and are directed at different bodies of empirical data, since they span nearly 30 years, but the main thrust of the arguments has remained the same. In common with most of the research conducted in this area, the reviews concentrated on the possible role played by awareness of the relationship to be learnt in a conditioning situation, while that learning is taking place. There are clearly other things that learners may or may not be aware of (e.g. the stimuli being learned about), but these have been considered less critical or less interesting questions (Boakes 1989). As might be imagined, the difficult part of demonstrating conditioning without awareness is in devising a measure of awareness that captures all of the learner’s relevant conscious knowledge without providing them with explicit information about the contingency they had been exposed to. The main criticisms of the awareness tests that have been used are that they were either too insensitive, and so simply failed to detect conscious knowledge possessed by the learners, or that they asked the wrong questions, failing to detect the conscious knowledge that actually helped in the learning task. As will be discussed below, these criticisms are important and frequently valid, but unless our default assumption is that conditioning does depend on awareness, it is equally important to be sure that awareness tests only measure conscious knowledge, without any contamination from unconsciously acquired information, and that claims of strong links between awareness and conditioned responding are held to the same high standards of evidence as alleged dissociations. In other words, reasons for doubting the evidence purporting to show conditioning without awareness are not, themselves, reasons for believing that conditioning depends on awareness. This is a separate claim, in need of independent empirical support.
Starting with Thorndike himself, a number of researchers have investigated whether instrumental conditioning can occur in the absence of awareness of the relationship between responding and its consequences. Many of those studies were criticized in Brewer’s (1974) review, mostly on the grounds that the awareness tests used were too insensitive, and in many early cases they clearly were, but there are at least a few studies that have produced reasonably compelling evidence of instrumental responding without awareness. For example, Hefferline et al. (1959) showed that when termination of a loud, unpleasant noise was made contingent on a small movement of the subject’s thumb (detected by an electrode), thumb twitches increased in frequency. Similarly, Liebermann et al. (1998) constructed a fake *extrasensory perception (ESP) task, in which feedback that the subject had selected the card the experimenter was ‘thinking of’ was actually contingent on the subject’s voice volume, and produced both conditioned increases and decreases in volume, without subjects becoming aware of the contingency. In both these tasks the intention of the experiment was disguised by a convincing cover story that participants seem to have believed.
Unfortunately, despite these apparent successes, it is difficult to evaluate the extent to which instrumental conditioning can occur without awareness because there is an essentially infinite number of ways in which a response can increase or decrease in frequency, and so no way to be sure that even a very sensitive awareness test actually measures awareness of the particular ‘response’ reinforced in any given situation. For example, although Hefferline et al.’s subjects may, in fact, have been completely unaware of the relationship between their thumb movement and the offset of the loud noise, they may have been aware of the relationship between some correlate of the thumb movement and noise offset, about which they were not asked a specific question. Presumably a general increase in muscular tension, certain kinds of gross bodily movements, or even thinking about flicking switches might each produce small, inadvertent or incidental movements of the thumb, and so it may have been any of these (or a host of others) that were actually reinforced—and no sensitive, specific measure of awareness could hope to capture all of them. Equally plausible correlated hypotheses can be postulated to underlie the voice level effects found by Liebermann et al. Of course this is not to say that instrumental conditioning does not happen without awareness—the fact that it occurs in headless cockroaches strongly suggests that it can—just that it is, in practice, difficult to be sure that it has. For this reason, most of the research designed to test whether conditioning can occur without awareness has focused on classical or Pavlovian conditioning.