Representational theories of consciousness attempt to reduce consciousness to “mental representations” rather than directly to neural states. Examples include first-order representationalism (FOR), which attempts to explain conscious experience primarily in terms of world-directed (or first-order) intentional states (Tye 1995); and higher-order representationalism (HOR), which holds that what makes a mental state M conscious is that it is the object of some kind of higher-order mental state directed at M (Rosenthal 2005, Gennaro 2012). The primary focus of this chapter is on HOR and animal consciousness.
In Section 1, I introduce the more general problem of other minds with respect to animals. In Section 2, I provide a brief sketch of representationalism, which is the theory of consciousness that the higher-order thought (HOT) theory falls under in the standard taxonomy. Section 3 motivates HOT theory and presents some of its details. In Section 4, I present evidence in favor of the view that HOT theory is consistent with animal consciousness. In Section 5, I briefly consider the potentially damaging claim that HOT theory requires neural activity in the prefrontal cortex in order for one to have conscious states.
Perhaps the most commonly used notion of ‘conscious’ is captured by Thomas Nagel’s “what it is like” sense (Nagel 1974). When I am in a conscious mental state, there is “something it is like” for me to be in that state from the first-person point of view. When I smell a rose or have a conscious visual experience, there is something it “seems like” from my perspective. This is primarily the sense of ‘conscious state’ that I use throughout this chapter.
We have come a long way from Descartes’ view that animals are mere “automata” and do not have conscious experience. In addition to the obvious behavioral similarities between humans and many other animals, much more is known today about physiological similarities such as brain and DNA structures. To be sure, there are also important differences and some genuinely difficult grey areas where one might legitimately doubt an animal’s consciousness. Nonetheless, the vast majority of philosophers today accept that a significant portion of the animal kingdom has conscious mental states. This is obviously not to say that most animals can have all of the sophisticated conscious states enjoyed by human beings, such as reflecting on philosophical problems, enjoying artworks, thinking about the vast universe or distant past, and so on. However, it seems reasonable to believe that most animals have some conscious states from rudimentary pains to perceptual states.
One way to approach this topic has been via the traditional “problem of other minds,” that is, how can one know that others have conscious mental states, given the comparatively indirect access we have to another’s mind? Although virtually everyone is willing to take for granted that other human beings have conscious states similar to our own, knowledge of animal minds does present some difficulties. Nonhuman animals cannot describe their mental states using our public language. Although there have been attempts to teach human-like languages to members of other species, none can do so in a way that would easily solve this problem. Nonetheless, a strong inductive rationale for animal consciousness seems sufficient to establish a reasonable belief that (most) animals have conscious mental states. This has traditionally taken the form of an argument by analogy: we know how we feel when we exhibit the behavior of someone in fear or in pain, and so it seems reasonable to think that the same conscious states are present when a dog or lion displays similar behavior. This is presumably because we think of such behavior as caused by the relevant conscious state.
Although many different criteria might be put forth (Baars 2005), most evidence of other minds falls under at least one of the following:
Tables and rocks display none of the above criteria, and so we don’t think they are conscious. Trees and plants are alive but also do not meet any of the above criteria. For example, they don’t jump away or scream when approached with a chainsaw or lawnmower. At the other extreme, humans normally seem to meet all four criteria. However, when we look at the animal kingdom, we find evidence that can be somewhat mixed. Some animals may meet only two or three criteria, whereas others might meet only one. At the least, we might suppose that the more criteria met, the more likely it is that an animal is conscious, and there seems to be a major difference between, say, a house fly and a chimp or dolphin. Matters may also be complicated by the degree to which a given animal meets a particular criterion.
Some theories attempt to reduce consciousness to mentalistic terms, such as ‘thoughts’ and ‘awareness,’ rather than directly to neurophysiological terms. One popular approach is to reduce consciousness to mental representations. The notion of a “representation” is, of course, very general and can be applied to pictures and various natural objects, such as the rings inside a tree. Much of what goes on in the brain might also be understood in a representational way. For example, mental events represent outer objects partly because they are caused by such objects in cases of veridical visual perception. Philosophers often call such mental states ‘intentional states,’ which have representational content; that is, mental states are “directed at” something, such as a thought about a horse or a perception of a tree. Although intentional states, such as beliefs and thoughts, are sometimes contrasted with ‘phenomenal states,’ such as pains and color experiences, it is clear that many conscious states, such as visual perceptions, have both phenomenal and intentional properties.
The general view that we can explain conscious mental states in terms of representational states is called ‘representationalism.’ Most representationalists believe that there is room for a second-step reduction to be filled in later by neuroscience. The idea, then, is that if consciousness can be explained in representational terms and representation can be understood in purely physical terms, then there is the promise of a naturalistic theory of consciousness. Most generally, however, representationalism can be defined as the view that the phenomenal properties of conscious experience (that is, the ‘qualia’) can be explained in terms of the experiences’ representational properties.
One question that should be answered by any theory of consciousness is: what makes a mental state a conscious mental state? There is a long tradition that has attempted to understand consciousness in terms of higher-order awareness, and this view has been vigorously defended by several contemporary philosophers (Rosenthal 1986, 1997, 2005, Lycan 1996, 2001, Gennaro 1996, 2012). The basic idea is that what makes a mental state M conscious is a higher-order representation (HOR) of M. A HOR is a “metapsychological” or “metacognitive” state, that is, a mental state directed at another mental state (“I am in mental state M”). So, for example, my desire to write a good chapter becomes conscious when I am (non-inferentially) “aware” of the desire. Intuitively, conscious states, as opposed to unconscious ones, are mental states that I am “aware of” being in. This overall idea is sometimes referred to as the Transitivity Principle (TP):

(TP) A conscious state is a mental state whose subject is, in some way, aware of being in it.
Conversely, the idea that I could be having a conscious state while totally unaware of being in that state seems like a contradiction. A mental state of which the subject is completely unaware is clearly an unconscious state. For example, I would not be aware of having a subliminal perception, and so it is unconscious.
There are various kinds of HOR theory, with the most common division between higher-order thought (HOT) theories and higher-order perception (HOP) theories. HOT theorists, such as Rosenthal (2004, 2005) and Gennaro (2012), think it is better to understand the HOR as a thought containing concepts. HOTs are treated as cognitive states involving some kind of conceptual component. HOP theorists urge that the HOR is a perceptual state which does not require the conceptual content invoked by HOT theorists (Lycan 1996, 2004). One can also find something like TP in premise 1 of Lycan’s (2001) more general argument for HOR:
The intuitive appeal of the first premise leads to the final conclusion – (5) – which is really just another way of stating HOR.
It might seem that HOT theory results in circularity by defining consciousness in terms of HOTs (since HOTs can be thought of as a kind of higher-order “awareness” of mental states, as in TP). It also might seem that an infinite regress results because a conscious mental state must be accompanied by a HOT which, in turn, must be accompanied by another HOT ad infinitum. However, the standard and widely accepted reply is that when a conscious mental state is a first-order world-directed conscious state, the higher-order thought (HOT) is not itself conscious. But when the HOT is itself conscious, there is a yet higher-order (or third-order) thought directed at the second-order state. In this case, we have introspection, which involves a conscious HOT directed at an inner mental state. When one introspects, one’s attention is directed back into one’s mind. For example, what makes my desire to write a good chapter a conscious first-order desire is that there is an unconscious HOT directed at the desire. In this case, my conscious focus is directed outward at the paper or computer screen, and so I am not consciously aware of having the HOT from the first-person point of view. When I introspect that desire, however, I then have a conscious HOT (accompanied by a yet higher, third-order, HOT) directed at the desire itself. It is thus crucial to distinguish first-order conscious states (with unconscious HOTs) from introspective states (with conscious HOTs). HOT theory can be illustrated by Figure 18.1 below.
A number of objections to higher-order theories (and counter-replies) can be found in the literature. One of these says that animals (and even infants) are not likely to have the conceptual sophistication required for HOTs, which would then render animal (and infant) consciousness very unlikely (e.g. Seager 2004). Are cats and pigs capable of having complex higher-order thoughts, such as “I am in mental state M”? Although most who raise this issue are not HOT theorists, Carruthers (1989, 2000) is one HOT theorist who actually embraces the normally unwelcome conclusion that (most) animals do not have phenomenal consciousness. I have replied that HOTs need not be as sophisticated as they might initially appear, and there is ample comparative neurophysiological evidence, such as the presence of certain shared cortical and even neocortical structures, supporting the conclusion that animals have conscious mental states and HOTs (Gennaro 1996).
In my view, numerous recent experiments also show that animals have “metacognitive” states which provide further evidence for HOTs. A number of key areas are under continuing investigation, including animal memory and uncertainty monitoring. The term ‘I-thoughts’ is also often used in the literature to mean “thoughts about one’s own mental states or oneself.” Thus, they are very similar to HOTs and closely linked to what psychologists call ‘metacognition,’ that is, mental states about mental states, or ‘cognitions’ about other mental representations (Koriat 2007). Although some reject the notion that most nonhuman animals have I-thoughts, the evidence seems to be growing that many animals do in fact have them, and may even be able to understand the mental states of others (Terrace and Metcalfe 2005, Hurley and Nudds 2006).
One area of inquiry has to do with episodic memory (EM), which is an explicitly conscious kind of remembering involving “mental time travel” (Tulving 1983, 2005). It is often contrasted with semantic memory, which need only involve knowing that a given fact is true or what a particular object is, and procedural memory, whereby memory of various learned skills is retained. Some notion of “I” or self-concept seems necessary to have a genuine EM. I recognize an EM as mine and as representing an event in my past. To give an example from animal cognition research, Clayton and Dickinson and their colleagues report convincing demonstrations of memory for time in scrub jays (Clayton, Bussey, and Dickinson 2003: 37). Scrub jays are food-caching birds, and when they have food they cannot eat, they hide it and recover it later. Because some of the food is preferred but perishable (such as crickets), it must be eaten within a few days, while other food (such as nuts) is less preferred but does not perish as quickly. In cleverly designed experiments using these facts, scrub jays are shown, even days after caching, to know not only what kind of food was where, but also when they had cached it (see also Clayton, Emery, and Dickinson 2006). Although still somewhat controversial, these experimental results at least seem to show that scrub jays have some episodic memory which involves a sense of self over time. This strongly suggests that the birds have some degree of metacognition with a self-concept (or “I-concept”) which can figure into HOTs. Further, many crows and scrub jays return alone to caches they had hidden in the presence of others and recache them in new places (Emery and Clayton 2001). This suggests that they know that others know where the food is cached, and thus, to avoid having their food stolen, they recache the food. 
These recaching behaviors indicate that such birds may even have some mental concepts directed at other minds, an ability sometimes called ‘mindreading.’ Of course, there are many different experiments aimed at determining the metacognitive abilities of various animals, so it can sometimes be difficult to generalize across species.
There is also the much-discussed work on uncertainty monitoring with animals such as monkeys and dolphins (Smith, Shields, and Washburn 2003, Smith 2005). For example, a dolphin is trained in a perceptual discrimination task, first learning to identify a particular sound at a fixed frequency (the “sample” sound). The dolphin later learns to match other sounds to the sample sound. When presented with a sound that is either the same in pitch as the sample sound or different, he has to respond in one way if the pitch is the same (such as by pressing one paddle) and in another way if it is different (pressing another paddle). Eventually the dolphin is introduced into a test environment and forced to make extremely difficult discriminations. To test for the capacity to take advantage of his own uncertainty, the dolphin is presented with a third “uncertain” response, the Escape paddle, which yields a greater reward than an incorrect response but a lesser reward than a correct response. The dolphin chooses the Escape paddle with a response pattern similar to that of humans and rhesus monkeys, which suggests that the dolphin is aware of his state of uncertainty; that is, he has some knowledge of his own mental state. This is clearly a metacognitive state: the dolphin is aware that he doesn’t know something, in this case, whether or not a sound matches (or is very close to) the sample sound.2
Some authors (e.g. Carruthers 2000, 2005, 2009), however, have cited experimental work suggesting that even chimps lack the ability to attribute mental states to others (Povinelli 2000). These experiments are designed to determine whether chimps notice when an experimenter is looking at something (say, food) or is unable to see it (for example, due to blindfolding). Chimps were just as likely to ask for food from an experimenter with a bucket over her head as from one who could see, which seems to indicate a lack of the mental concept ‘seeing’ or ‘visual perception.’ Carruthers further argues that animals with HOTs should also be able to have thoughts about the mental states of other creatures. However, it is not at all clear that having I-thoughts requires being able to read other minds. And in any case, the evidence seems to be growing that many animals can mind-read. For example, Laurie Santos and colleagues show that rhesus monkeys attribute visual and auditory perceptions to others in competitive paradigms (Flombaum and Santos 2005, Santos, Nissen, and Ferrugia 2006). Rhesus monkeys preferentially attempted to obtain food silently only in those conditions where silence was relevant to obtaining the food undetected. While a human competitor was looking away, monkeys would take grapes from a silent container, thus apparently understanding that hearing leads to knowing on the part of human competitors. Subjects reliably picked the container that did not alert the experimenter that a grape was being removed. This suggests that monkeys take into account how auditory information can change the knowledge state of the experimenter.3
One interesting development in recent years has been the attempt to identify how HOT theory might be realized in the brain. The issue is sometimes framed in terms of the question: How global is HOT theory? That is, do conscious mental states require widespread brain activation, or can at least some of them be fairly localized in narrower areas of the brain? Perhaps most interesting is whether or not the prefrontal cortex (PFC) is required for having conscious states (Gennaro 2012: chapter nine). I disagree with those who think that, according to HOT theory and related views, the PFC is required for most conscious states (Kriegel 2009, Block 2007, Lau and Rosenthal 2011). It may very well be that the PFC is required for the more sophisticated introspective states, but this isn’t a problem for HOT theory because it does not require introspection to have first-order conscious states.
There seems to be significant evidence for conscious states without PFC activity. For example, Rafael Malach and colleagues show that when subjects are engaged in a perceptual task or absorbed in watching a movie, there is widespread neural activation but little PFC activity (Grill-Spector and Malach 2004, Goldberg, Harel, and Malach 2006). Although some other studies do show PFC activation, this is mainly because of the need for subjects to report their experiences. Zeki (2007) also cites evidence that the “frontal cortex is engaged only when reportability is part of the conscious experience” (2007: 587), and “all human color [brain] imaging experiments have been unanimous in not showing any particular activation of the frontal lobes” (2007: 582). Similar results are found for other sensory modalities, such as auditory perception (Baars and Gage 2010: chapter seven). Also, basic conscious experience is certainly not eliminated entirely even when there is extensive bilateral PFC damage or lobotomies (Pollen 2008). This line of evidence, it seems to me, works to the advantage of HOT theory with regard to animal and infant consciousness. If HOT theory does not require PFC activity for all conscious states, then it is in a better position to account for animal and infant consciousness since it is doubtful that infants and most animals have the requisite PFC activity.
One might still ask: why think that unconscious HOTs can occur outside the PFC? If we grant that unconscious HOTs can be regarded as a kind of “pre-reflective” self-consciousness, we can, for example, look to Newen and Vogeley (2003) for some answers. They distinguish five levels of self-consciousness ranging from “phenomenal self-acquaintance” and “conceptual self-consciousness” up to “iterative meta-representational self-consciousness.” The majority of their paper is explicitly about the neural correlates of what they call the “first-person perspective” (1PP) and the “egocentric reference frame.” Citing numerous experiments, they point to various neural signatures of self-consciousness. The PFC is rarely mentioned, and then usually only with regard to more sophisticated forms of self-consciousness. Other brain areas are much more prominently identified, such as the medial and inferior parietal cortices, the temporoparietal cortex, the posterior cingulate cortex, and the anterior cingulate cortex (ACC). Even when considering the neural signatures of “theory of mind” and “mindreading,” Newen and Vogeley point to replicated experiments indicating that such meta-representation is best located in the ACC. In addition, “the capacity for taking 1PP in such [theory of mind] contexts showed differential activation in the right temporo-parietal junction and the medial aspects of the superior parietal lobe” (Newen and Vogeley 2003: 538). Once again, even if the PFC is essential for having some HOTs and conscious states, this poses no threat to HOT theory provided that the HOTs in question are of the more sophisticated introspective variety.
This neurophysiological issue is certainly not yet fully settled, but I think it is a mistake to hold that HOT theory should treat first-order conscious states as essentially including PFC activity. I would make the following concession, however: if I ever became convinced that animal consciousness is really inconsistent with HOT theory, then I would be much more inclined to give up HOT theory rather than the view that most animals have conscious states.
Kozuch (2014) presents a nice overall discussion of the PFC in relation to higher-order theories, but he argues that the lack of dramatic deficits in visual consciousness in patients with PFC lesions presents a compelling case against higher-order theories. I agree with much of Kozuch’s analysis, especially with respect to the notion that some (visual) conscious states do not require PFC activity (sometimes focused more on the dorsolateral PFC, or dlPFC). However, Kozuch rightly notes that my view is left undamaged, at least to some extent, since I do not require that the PFC is where HOTs must be neurally realized. I would add that we must also keep in mind the distinction between unconscious HOTs and conscious HOTs (= introspection). Perhaps the latter require PFC activity, given the more sophisticated executive functions associated with introspection, but having first-order conscious states does not require introspection.
In closing, then, HOT theory is a viable theory of consciousness which is consistent with the presence of consciousness in at least most animals. Evidence from studies on animal memory, uncertainty monitoring, and competitive paradigms supports the notion that most animals are capable of having some HOTs. Further, there is little reason to suppose that HOTs must essentially involve activity in the PFC.
1 I list this separately because some of this evidence may be more sophisticated than the more basic behavioral or communicative evidence in the first two criteria.
2 Nonetheless, some authors (Carruthers 2008) argue that these and other experiments do not force us to infer the presence of metacognition. But see Gennaro 2012, chapter eight (especially section 8.3), for further counter-reply on this point.
3 I lack the space here to delve further into this massive literature, but see, for example, the essays in Part IV, Mindreading, of this volume and in Terrace and Metcalfe 2005. For much more on the overall issue of mindreading and metacognition in animals and infants, see Carruthers 2009 (and the peer commentary which follows), as well as Nichols and Stich (2003), Goldman (2006), and Gennaro (2009, 2012, chapters seven and eight). For further defense of the view that self-attribution of mental states (metacognition) is prior to our capacity to attribute mental states to others (mindreading), see Goldman (2006). A more modest view, offered by Nichols and Stich (2003), is that the two capacities are independent and dissociable. Carruthers (2009) argues that mindreading is actually prior to metacognition. I am not convinced that the evidence supports his view better, say, than Nichols and Stich’s position. Two often-discussed views are simulation theory (ST) and theory-theory (TT). ST holds that mindreading involves the ability to imaginatively take the perspective of another. TT holds that metacognition results from one’s “theory of mind” being directed at oneself.
For more on metacognition, see D. DeGrazia, “Self-Awareness in Animals,” in R. Lurz (ed.), The Philosophy of Animal Minds (New York: Cambridge University Press, 2009); J. Proust, The Philosophy of Metacognition (New York: Oxford University Press, 2013); and M. Beran, J. Brandl, J. Perner, and J. Proust (eds.), The Foundations of Metacognition (New York: Oxford University Press, 2012). For more on mindreading, see R. Lurz, Mindreading Animals (Cambridge, MA: MIT Press, 2011). For a nice overview of some of the themes in this chapter and related topics, see K. Andrews, “Animal Cognition”, in Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Summer 2016 Edition), http://plato.stanford.edu/archives/sum2016/entries/cognition-animal/.
Baars, B. (2005) “Subjective Experience Is Probably Not Limited to Humans: The Evidence From Neurobiology and Behavior,” Consciousness and Cognition 14, pp. 7–21.
Baars, B., and Gage, N. (2010) Cognition, Brain, and Consciousness: Introduction to Cognitive Neuroscience, second edition, Oxford: Elsevier.
Block, N. (2007) “Consciousness, Accessibility, and the Mesh Between Psychology and Neuroscience,” Behavioral and Brain Sciences 30, pp. 481–499.
Carruthers, P. (1989) “Brute Experience,” Journal of Philosophy 86, pp. 258–269.
Carruthers, P. (2000) Phenomenal Consciousness, Cambridge: Cambridge University Press.
Carruthers, P. (2005) Consciousness: Essays From a Higher-Order Perspective, New York: Oxford University Press.
Carruthers, P. (2008) “Meta-Cognition in Animals: A Skeptical Look,” Mind and Language 23, pp. 58–89.
Carruthers, P. (2009) “How We Know Our Own Minds: The Relationship Between Mindreading and Metacognition,” Behavioral and Brain Sciences 32, pp. 121–138.
Clayton, N., Bussey, T., and Dickinson, A. (2003) “Can Animals Recall the Past and Plan for the Future?” Nature Reviews Neuroscience 4, pp. 685–691.
Clayton, N., Emery, N., and Dickinson, A. (2006) “The Rationality of Animal Memory: Complex Caching Strategies of Western Scrub Jays,” In Hurley and Nudds (2006).
Emery, N., and Clayton, N. (2001) “Effects of Experience and Social Context on Prospective Caching Strategies in Scrub Jays,” Nature 414, pp. 443–446.
Flombaum, J., and Santos, L. (2005) “Rhesus Monkeys Attribute Perceptions to Others,” Current Biology 15, pp. 447–452.
Gennaro, R. (1996) Consciousness and Self-Consciousness: A Defense of the Higher-Order Thought Theory of Consciousness, Amsterdam and Philadelphia: John Benjamins.
Gennaro, R. (2004) “Higher-Order Thoughts, Animal Consciousness, and Misrepresentation: A Reply to Carruthers and Levine,” In R. Gennaro (ed.) Higher-Order Theories of Consciousness: An Anthology, Amsterdam and Philadelphia: John Benjamins.
Gennaro, R. (2009) “Animals, Consciousness, and I-thoughts,” In R. Lurz (ed.) The Philosophy of Animal Minds, New York: Cambridge University Press.
Gennaro, R. (2012) The Consciousness Paradox: Consciousness, Concepts, and Higher-Order Thoughts, Cambridge, MA: The MIT Press.
Goldberg, I., Harel, M., and Malach, R. (2006) “When the Brain Loses its Self: Prefrontal Inactivation During Sensorimotor Processing,” Neuron 50, pp. 329–339.
Goldman, A. (2006) Simulating Minds, New York: Oxford University Press.
Grill-Spector, K., and Malach, R. (2004) “The Human Visual Cortex,” Annual Review of Neuroscience 27, pp. 649–677.
Hurley, S., and Nudds, M. eds. (2006) Rational Animals? New York: Oxford University Press.
Koriat, A. (2007) “Metacognition and Consciousness,” In P. Zelazo, M. Moscovitch, and E. Thompson (eds.) The Cambridge Handbook of Consciousness, New York: Cambridge University Press.
Kozuch, B. (2014) “Prefrontal Lesion Evidence Against Higher-Order Theories of Consciousness,” Philosophical Studies 167, pp. 721–746.
Kriegel, U. (2009) Subjective Consciousness, New York: Oxford University Press.
Lau, H., and Rosenthal, D. (2011) “Empirical Support for Higher-Order Theories of Conscious Awareness,” Trends in Cognitive Sciences 15, pp. 365–373.
Lurz, R. ed. (2009) The Philosophy of Animal Minds, New York: Cambridge University Press.
Lycan, W. (1996) Consciousness and Experience, Cambridge, MA: MIT Press.
Lycan, W. (2001) “A Simple Argument for a Higher-Order Representation Theory of Consciousness,” Analysis 61, pp. 3–4.
Lycan, W. (2004) “The Superiority of HOP to HOT,” In R. Gennaro (ed.) Higher-Order Theories of Consciousness: An Anthology, Amsterdam: John Benjamins.
Nagel, T. (1974) “What Is It Like to Be a Bat?” Philosophical Review 83, pp. 435–450.
Newen, A., and Vogeley, K. (2003) “Self-Representation: Searching for a Neural Signature of Self-Consciousness,” Consciousness and Cognition 12, pp. 529–543.
Nichols, S., and Stich, S. (2003) Mindreading, New York: Oxford University Press.
Pollen, D. (2008) “Fundamental Requirements for Primary Visual Perception,” Cerebral Cortex 18, pp. 1991–1998.
Povinelli, D. (2000) Folk Physics for Apes, New York: Oxford University Press.
Rosenthal, D. M. (1986) “Two Concepts of Consciousness,” Philosophical Studies 49, pp. 329–359.
Rosenthal, D. M. (1997) “A Theory of Consciousness,” In N. Block, O. Flanagan, and G. Güzeldere (eds.) The Nature of Consciousness, Cambridge, MA: MIT Press.
Rosenthal, D. M. (2004) “Varieties of Higher-Order Theory,” In R. Gennaro (ed.) Higher-Order Theories of Consciousness: An Anthology, Philadelphia and Amsterdam: John Benjamins.
Rosenthal, D. M. (2005) Consciousness and Mind, New York: Oxford University Press.
Santos, L., Nissen, A., and Ferrugia, J. (2006) “Rhesus monkeys, Macaca mulatta, Know What Others Can and Cannot Hear,” Animal Behaviour 71, pp. 1175–1181.
Seager, W. (2004) “A Cold Look at HOT Theory,” In R. Gennaro (ed.) Higher-Order Theories of Consciousness: An Anthology, Amsterdam: John Benjamins.
Smith, J. D. (2005) “Studies of Uncertainty Monitoring and Metacognition in Animals,” In Terrace and Metcalfe (2005).
Smith, J. D., Shields, W., and Washburn, D. (2003) “The Comparative Psychology of Uncertainty Monitoring and Metacognition,” Behavioral and Brain Sciences 26, pp. 317–373.
Terrace, H., and Metcalfe, J. eds. (2005) The Missing Link in Cognition: Origins of Self-Reflective Consciousness, New York: Oxford University Press.
Tulving, E. (1983) Elements of Episodic Memory, Oxford: Oxford University Press.
Tulving, E. (2005) “Episodic Memory and Autonoesis: Uniquely Human?” In Terrace and Metcalfe (2005).
Tye, M. (1995) Ten Problems of Consciousness, Cambridge, MA: MIT Press.
Zeki, S. (2007) “A Theory of Micro-Consciousness,” In M. Velmans and S. Schneider (eds.) The Blackwell Companion to Consciousness, Malden, MA: Blackwell.