It is beyond serious doubt that nonlinguistic creatures are capable of thinking and reasoning about the physical environment in highly sophisticated ways. But can animals think about thinking? Alternatively put, are animals capable of metarepresentation? This question is at the heart of how we think about comparative psychology, animal cognition, and human cognitive development.
In Bermúdez 2003b and 2009, I proposed that certain types of thinking about thinking are only available to language-using creatures. My argument generated interesting debate and useful criticisms that helped me to refine and develop it. In this entry, I review the state of play, offering a revised version of the argument that addresses some of the principal objections.
As the entries to this companion amply attest, there is a rich experimental literature exploring the representational capacities of nonlinguistic creatures. A number of experimental paradigms directly address the question of whether animals can think about thinking. In order to get the issues clearly in view, it is important to make some basic distinctions between different types of thinking about thinking.
The first distinction has to do with whether the putative thinking about thinking is self-directed or other-directed. Discussions of metarepresentation in language-using humans typically distinguish between metacognition, on the one hand, and mindreading, on the other. Thinkers have metacognitive abilities to the extent that they are capable of monitoring and evaluating their own mental states. Mindreading, in contrast, is a matter of a creature’s ability to think about another creature’s mental states. We can ask, therefore, whether nonlinguistic creatures are capable of metacognition, of mindreading, or of both.
The second distinction has to do with the type of thinking that might be the object of metarepresentational thinking. Thinking about thinking might involve, on the one hand, thinking about another creature’s perceptual states. So, for example, a primate might think that a conspecific can hear a predator or see a food source. Alternatively, the objects of thinking about thinking might be what philosophers typically call propositional attitudes, such as beliefs, hopes, fears, and so on. Perceptual states and propositional attitudes differ in important respects.
Combining these two distinctions yields four distinct types of thinking about thinking, all of which are plainly available and widespread in language-using creatures. In the following, I will be focusing primarily on propositional attitude mindreading.
As will emerge in Section 4, thinking about propositional attitudes is a much richer and more demanding cognitive phenomenon than thinking about perceptual states. It is only for propositional attitudes that the question of language-dependence arises. Moreover, the most widely discussed evidence for metacognition in the animal kingdom comes from studies of animals’ degrees of “uncertainty” about their perceptual judgments. In a typical metacognition experiment, for example, animals learn to perform a perceptual discrimination task and then are trained to use a “don’t know” button in conditions of subjective uncertainty (see Smith 2005, for example). Even leaving aside the first-order (non-metarepresentational) interpretation of such experiments proposed by Carruthers 2008, the most that such experiments can show is the existence of perceptual metacognition (a nonlinguistic animal monitoring its own perceptual states). In contrast, psychologists, philosophers, and ethologists have made much stronger claims about nonlinguistic creatures being able to engage in propositional attitude mindreading.
My view of the extent of thinking about thinking in nonhuman animals is represented in Table 11.1.
|  | Perceptual | Propositional attitude |
| --- | --- | --- |
| Metacognition | Maybe | No |
| Mindreading | Yes | No |
There are some methodological issues to tackle before discussing my negative claims about propositional attitude mindreading. One response to the discussion so far would be to say that whether nonlinguistic animals can think about thinking is simply an empirical question, to be resolved by suitably designed experiments. There is no room for philosophical arguments or other theoretical speculations.
Robert Lurz has given eloquent expression to a view along these lines. After describing two important experimental paradigms in this area, he writes:
The underlying assumption of the above research is that existence of nonlinguistic higher-order PAs [propositional attitudes] is an empirical question, not to be ruled out a priori but to be decided by running well-designed experiments and examining competing hypotheses against the data. If Bermúdez’s theory is correct, however, this assumption is seriously mistaken: Whether nonlinguistic subjects can have higher-order PAs can be answered from the armchair, and the answer is, in principle, no. The point of mentioning the empirical research is not to make a positive empirical case for the existence of nonlinguistic higher-order PAs. I leave that to the researchers. It is to show that Bermúdez’s theory denies an underlying assumption of a number of lines of current empirical research. The issue, to repeat, is whether we can know in advance of empirical investigation whether actual nonlinguistic subjects have or can have higher-order PAs, not whether the results of current empirical studies demonstrate the existence of nonlinguistic higher-order PAs. A significant consequence of Bermúdez’s theory, then, is that continued empirical research into the existence of nonlinguistic higher-order PAs is known a priori to be misconceived and pointless.
(Lurz 2007, p. 272)
I am sympathetic to Lurz’s animadversions against the proverbial armchair. However, it seems to me that Lurz is setting up a false dichotomy. The significance of all experiments in this area depends upon how the crucial notions are operationalized, and that process of operationalization in turn depends upon a broader theoretical conception of the nature of thought and reasoning. One of the reasons this area is so exciting from an experimental point of view is that there is no standardly agreed conceptual framework for designing experiments and interpreting the results of those experiments.
The task of developing such a conceptual framework is one in which experimentalists are just as engaged as philosophers and theoretical cognitive scientists (see, for example, Heyes 1998 and Povinelli and Vonk 2006). Quine’s well-known metaphorical description of science as a force field is particularly appropriate in this context. Most often Quine’s metaphor is interpreted as showing the impossibility of purely a priori inquiry, which of course it does. But the very same picture of scientific inquiry shows also the naïveté of thinking that any interesting and theoretically loaded question has a straightforward empirical solution. As Quine puts it, “no particular experiences are linked with any particular statements in the interior of the field, except indirectly through considerations of equilibrium affecting the field as a whole” (Quine 1951, p. 40). The discussion in the remainder of this entry should be read as a contribution to the multidisciplinary task of determining just such an equilibrium between the different theoretical and practical pressures at play in this area (see Bermúdez 2011 for further discussion of Quine’s force-field analogy in this context).
The distinction between propositional attitude mindreading and perceptual mindreading is important because they make very different cognitive demands. It is the additional demands imposed by propositional attitude mindreading that, I claim, require linguistic abilities. The difference can best be appreciated initially through diagrams.
Figure 11.1 shows a diagram illustrating perceptual mindreading. Subjects engaged in perceptual mindreading have to be able to represent three things. First, they need to be able to represent the perceiving agent. Second, they need to be able to represent the state of affairs that the other agent is perceiving. Third, they need to be able to represent the fact that the agent is perceiving that state of affairs. The first two do not introduce any additional representational demands, since any subject likely to be engaged in perceptual mindreading will already be perfectly capable of representing other agents (conspecifics and predators, for example). And, I claim, representing perception itself in this context need not be very demanding. It certainly involves being able to represent the other agent’s sensitivity to the perceived state of affairs – that the agent is modifying, or is about to modify, their behavior in response to information about their environment. Relatedly, it involves being able to represent that the agent is suitably placed to be sensitive to their environment – that their gaze is directed in the right direction, for example, or that the perceived state of affairs is in earshot.
Representing perception in this way certainly does not require language. It is beyond dispute that even cognitively unsophisticated nonlinguistic creatures are highly sensitive to contingencies between eye gaze and behavior (in conspecifics and in potential predators). Moreover, and most importantly, no metarepresentation is required. The perceptual mindreader does not need to be able to represent representations, in addition to representing objects and features of the world. Everything takes place at the same level as ordinary thought about the environment and what it contains. There is no need for what I have termed intentional ascent – the shift from thinking about the world to thinking about thinking.
Propositional attitude mindreading is very different because it does require metarepresentation, as the diagram in Figure 11.2 brings out. The key point is that propositional attitude mindreading does not involve thinking about a direct relation between a subject and their environment in the way that perceptual mindreading does. That is the whole point of the Sally-Anne task and the various other false-belief tests. Beliefs can be false and false beliefs are just as powerful in bringing about behavior as true ones. It is what subjects believe about the world that explains and predicts their behavior. This means that representing another subject’s belief state requires representing them as having representations of their environment – representations that can be either true or false. Philosophers typically analyze belief (and other propositional attitudes) as an attitude to a proposition or thought. The terminology is inessential, however. What matters is that understanding what another subject believes requires metarepresentation in a way that understanding what they are seeing or hearing does not.
So, any creature engaging in propositional attitude mindreading must be able to represent representations. In order to establish the language-dependence of propositional attitude mindreading, we need to establish two things. The first is that belief-representations must be represented linguistically (as opposed to being represented imagistically, for example). The second is that these representations must take place in a natural language (as opposed, for example, to a language of thought or some other subpersonal computational medium). In the remainder of this section, I sketch my case for these two claims. We will return to them in more detail below in the context of responding to objections.
To see why linguistic representation is required, we need to go back to the initial discussion of propositional attitudes. An important contrast with perception is that propositional attitudes such as belief do not typically impact upon behavior directly. How one behaves in virtue of what one believes depends upon what else one believes and what one wants to achieve. This means that propositional attitude mindreaders have to be able to represent propositions or thoughts in a way that allows them to work out how the relevant beliefs or other propositional attitudes will feed into action. Since the path from belief to action typically involves some sort of reasoning process, either implicit or explicit, a belief must be represented in a way that reveals its inferential relations to other beliefs and to other relevant mental states, such as desires.
Most of the logical/inferential relations between beliefs hold in virtue of structure. This is certainly true of the logical relations codified in the propositional and predicate calculus. Take conditional beliefs, for example. A bird might have a conditional belief about its environment (if it rains, then there will be more insects on the leaves, for example), or about its own actions (e.g. if there are more insects on the leaves, then I should switch from foraging on the ground to foraging on the leaves). If the creature comes to believe that it is raining, then the combination of beliefs will lead it to switch from foraging on the ground to foraging on leaves. Another creature trying to predict the bird’s behavior will need to recreate the obvious reasoning from the belief that it is raining via the two conditional beliefs to the decision to switch foraging strategies. That, in turn, requires representing the conditional beliefs as attitudes to a complex proposition that relates two other propositions, so that the entire reasoning process has the form: p; if p then q; if q then r; therefore q; therefore r (where p is the proposition that it is raining, q the proposition that there are more insects on the leaves, and r the proposition that the bird should switch to foraging on the leaves).
Only in language, I claim, can the structure of these beliefs be represented in the right way for their logical relations to emerge.
Language is a mechanism for creating complex representational structures from simple representations through combinatorial rules, in addition to possessing markers (such as logical connectives and quantifiers) that reveal the basic inferential connections between the propositions that sentences express. Because of this, linguistic representations have a canonical structure, which allows a conditional belief to be represented linguistically in the form “If A then B”, for example. The only alternative to a language-like representational structure is a pictorial/imagistic structure. But pictorial/imagistic representations do not have a canonical structure and so cannot reveal inferential connections. We will return to the relation between language and inference in the next two sections.
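To make the point about canonical structure concrete, here is a toy sketch. Everything in it is an expository assumption of my own (the tuple encoding, the names, the chaining procedure), not part of the argument itself: when conditionals are represented with explicit structure, the bird’s reasoning can be recovered mechanically by repeated modus ponens.

```python
# Toy illustration: beliefs as structured expressions.
# The encoding and all names are illustrative assumptions only.

# Atomic propositions are strings; a conditional is a tagged tuple.
RAIN = "it-is-raining"
INSECTS = "more-insects-on-leaves"
FORAGE_LEAVES = "switch-to-foraging-on-leaves"

beliefs = {
    ("if", RAIN, INSECTS),           # if it rains, more insects on the leaves
    ("if", INSECTS, FORAGE_LEAVES),  # if more insects, switch foraging
    RAIN,                            # it is raining
}

def forward_chain(beliefs):
    """Repeatedly apply modus ponens until no new beliefs emerge."""
    derived = set(beliefs)
    changed = True
    while changed:
        changed = False
        for b in list(derived):
            # A conditional whose antecedent is believed yields its consequent.
            if isinstance(b, tuple) and b[0] == "if" and b[1] in derived:
                if b[2] not in derived:
                    derived.add(b[2])
                    changed = True
    return derived

print(FORAGE_LEAVES in forward_chain(beliefs))  # True
```

The sketch mirrors the bird example: because the conditionals wear their structure on their sleeve, the path from “it is raining” to the foraging decision falls out mechanically. A representation that did not expose that internal structure would give the inference nothing to work with.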
Suppose, then, that beliefs and other propositional attitudes must be represented linguistically. It does not follow immediately that they must be represented in a natural (public) language. Many cognitive scientists and philosophers claim that there is a language of thought – a language-like computational medium for subpersonal information-processing. Why could the language of thought not do the job? My answer, in brief, is that propositional attitude mindreading is part of a creature’s conscious mental life. This is because, for those creatures capable of it, propositional attitude mindreading is integrated with conscious practical decision-making. Creatures who understand and can think about the beliefs and desires of others typically do so in the context of working out what they themselves would do. We decide what to do in the light of what we predict others will do and the reasons that we can identify for their observed actions. This means that the representations exploited in propositional attitude mindreading must be consciously accessible elements of a creature’s psychological life – and so cannot be sentences in the language of thought. More on this in Section 7.
The argument in Section 4 plainly depends upon the cogency of the distinction between language-like and imagistic representation. That there is such a distinction has been denied by John Heil in a thoughtful and probing discussion of the relation between thought and language. According to Heil, “where cognition is concerned there is nothing special about language” (Heil 2012, p. 263). His reason is that what I am calling language-like representations are really just images. Here are two illustrative passages.
Just as you make use of sentences – written or spoken – to articulate ideas to others, you can use inner utterances in the articulation of ideas to yourself. You can talk through a problem, recall the details of an earlier conversation, or plan a course of action by listing steps to its completion in your head. In these cases, inner utterances are not manifestations or copies of thoughts; you are thinking with language just as you might open a can with a can-opener…
Inner utterances (I say, siding with Bermúdez) are a species of mental imagery, where the images are images of what their audible, visual, or tactile counterparts sound, look, or feel like. There is no logical or conceptual gulf between linguistic (“propositional”) imagery and imagery of other sorts, “pictorial” imagery. Conscious thought quite generally is imagistic.
(Heil 2012, pp. 265–6)
This is a very useful reminder. Heil is absolutely right to emphasize that language is a tool and that we think with or through language, rather than in language. We do not have “wordless” thoughts that we then translate into language.
I have no quarrel, moreover, with his claim that thinking through language is ultimately a matter of entertaining and manipulating linguistic imagery, so that in a sense, all thought comes out as imagistic/pictorial. I am happy to reformulate my central claim in terms of linguistic imagery (thinking about thoughts as requiring linguistic imagery). I think that it will be very profitable to explore the mechanics of how we think through inner speech. This is a relatively unstudied area (see Vicente and Martínez-Manrique 2011 for a literature review and Bermúdez 2018 for further discussion of inner speech in the context of thinking about thinking). It promises to shed considerable light on cognition in general and metarepresentation in particular.
However, I cannot share Heil’s confidence that “just as it seems unlikely that any tool is irreplaceable, so it seems unlikely that language is, for any particular task, irreplaceable” (2012, p. 265). As we saw in the previous section, the issue has to do with canonical structure and inference. My (reformulated) claim is that thinking about thoughts requires a special kind of imagery – linguistic imagery – to do a job that I claim nonlinguistic imagery cannot do, namely, represent the canonical structure of thinking in a way that will make inferential connections perspicuous. In order to defeat that claim, we need to have reasons for thinking that nonlinguistic imagery is an appropriate tool for that task. Heil does not provide such reasons, but in the next section we will look at an intriguing analysis of map-like representations that seems to be the most compelling proposal in this area.
The contrast between language-like representational formats and imagistic ones is often mapped onto the distinction between digital and analog representations. There are two salient points of difference. The first has to do with how they respectively represent. Complex digital representations are built up in a rule-governed way from basic symbolic units that have a purely arbitrary connection with their objects, whereas analog representations represent through relations of similarity and/or isomorphism. The second has to do with their structure. Analog representations typically exploit continuously variable magnitudes (such as volume or color), while digital representations have a discrete structure.
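The contrast can be roughly illustrated as follows. The encodings below are my own expository assumptions, not a claim about how any creature represents anything: a digital representation of a distance uses discrete, conventionally assigned symbols, while an analog one uses a continuously variable magnitude that is isomorphic to what it represents.

```python
# Illustrative contrast between digital and analog representation.
# All encodings here are expository assumptions.

distance_km = 3.2

# Digital: a discrete string of numerals whose connection to the
# quantity is fixed by convention, not by resemblance.
digital = str(distance_km) + " km"

# Analog: a magnitude (here, the length of a bar of characters) that
# co-varies continuously with the represented quantity -- halve the
# distance and you halve the bar.
SCALE = 10  # characters per km; an arbitrary scaling factor
analog = "=" * round(distance_km * SCALE)

print(digital)      # "3.2 km"
print(len(analog))  # 32 -- the bar's length is isomorphic to the distance
```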
Given the representational requirements laid out in Section 5, it seems highly plausible that purely analog representations will not suffice for propositional attitude mindreading, since that requires representing beliefs in a way that brings out their internal structure and inferential connections. However, in Bermúdez 2003a, I did not pay sufficient attention to the possibility of hybrid representational formats that might be sufficiently structured to meet the requirements of propositional attitude mindreading without being linguistic. The most promising candidates are cartographic (map-like) representations, which have been illuminatingly studied by Elisabeth Camp.
The following two passages give the flavor of Camp’s rich discussion.
Cartographic systems are a little like pictures and a little like sentences. Like pictures, maps represent by exploiting isomorphisms between the physical properties of vehicle and content. But maps abstract away from much of the detail that encumbers pictorial systems. Where pictures are isomorphic to their represented contents along multiple dimensions, maps only exploit an isomorphism of spatial structure: on most maps, distance in the vehicle corresponds, up to a scaling factor, to distance in the world. Further, typically this spatial isomorphism itself only captures functionally salient features of the represented domain: for a road map, say, only streets and buildings and not trees and benches. Maps also depart from the direct replication of visual appearance by employing a disengaged, “God’s eye” perspective instead of an embedded point of view.
(Camp 2007, pp. 158–9)
In principle, it’s not hard to extend maps to represent negative information. Most crudely, we could introduce a higher-order icon with the force of a “contrary operator”: say, putting a slashed circle over the “Bob” icon to indicate that Bob is not at the represented location. Because we are already employing symbolic icons as constituents, this doesn’t itself fundamentally change the sort of representational system we’re employing. However, this technique would quickly lead to massive clutter. A more elegant solution would color icons and background regions to reflect positive and negative information. For instance, the default state could be a grey background, expressing neutrality about the presence and absence of every potentially representable object and property. A black (or other fully-saturated) icon would represent certainty that the relevant object/property is at that location, while a white (or anti-colored) icon would represent certainty of its absence; a white background could then represent certainty that there were no other, unrepresented objects or properties in that region besides those explicitly represented on the map.
(Camp 2007, p. 163)
The question, then, is why can’t some sort of map-like representation be deployed nonlinguistically in propositional attitude mindreading?
The quick response, as Camp herself notes, would be to say that “diagrams and maps just are sentences written in a funny notation” (Camp 2007, p. 155). She responds, surely correctly, that maps and languages have very different combinatorial principles, so that thinking in maps is very different from thinking in words. As observed earlier, linguistic combinatorial principles are conventional and domain-general, whereas maps and pictures represent through isomorphism and similarity.
Nonetheless, in virtue of their hybrid nature, maps exploit representational devices much richer than direct isomorphism. This is central to Camp’s argument that maps can function as nonlinguistic combinatorial representational systems. She analyzes what might be termed structured maps. Consider the following:
In particular, because maps exploit discrete, recurrent syntactic constituents with stable, at least partly conventionalized, semantic properties, one can achieve something close to the effect of sentential structure within a cartographic system by manipulating the basic icons in ways that don’t affect their spatial structure. In effect, we’ve introduced rules for generating syntactically complex icons which represent semantically complex objects and properties: not-Bob, past-Bob, etc. So long as these icons still function as labels placing objects and properties at locations, one might argue, and so long as their mode of combination sets up an isomorphism between their spatial structures and those of the analogous features in the world, we’re still operating within a fundamentally cartographic system.
(Camp 2007, p. 166)
Structured maps involve the addition of symbols, so that representation is not purely pictorial. This does not beg the question against the “sentences in funny notation” view, because of the very significant differences between symbol systems and languages proper. Researchers have successfully trained various species of nonlinguistic animal to communicate through symbol systems (see the papers in Part V on Communication in this volume for more details), but none of these symbol systems has the properties of a full-fledged language – recursive embedding and arbitrary combination, for example.
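Camp’s colored-icon proposal can be given a toy rendering as a data structure. The encoding below (grid locations, icon names, three-valued polarity) is my own illustrative sketch, not Camp’s formalism: each location on the map carries icons tagged for presence or absence, so that negative information is represented without clutter.

```python
# Toy sketch of a "structured map" in roughly Camp's sense.
# The encoding is an illustrative assumption, not Camp's own system.

# Polarity of an icon: True = present ("black" icon), False = absent
# ("white"/anti-colored icon, the "contrary operator"), and a missing
# entry = neutrality (the "grey" background default).
structured_map = {
    (0, 0): {"tree": True},
    (1, 2): {"Bob": False},  # negative information: Bob is NOT here
    (3, 1): {"Bob": True},
}

def query(m, location, icon):
    """Return True/False/None for presence, absence, or neutrality."""
    return m.get(location, {}).get(icon)  # missing entries are neutral

print(query(structured_map, (1, 2), "Bob"))  # False
print(query(structured_map, (3, 1), "Bob"))  # True
print(query(structured_map, (2, 2), "Bob"))  # None
```

The spatial keys preserve the cartographic isomorphism, while the polarity tags play the role of Camp’s higher-order icons. Note that nothing in merely using such a map requires being able to state these conventions.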
The real issue, I think, has to do with the ability to use maps as representational devices. Here we need to make a distinction between implicit and explicit mastery. Explicit mastery of a structured map would involve being able to spell out the representational conventions governing the syntactically complex icons – being able to articulate, for example, how the “contrary operator” conveys the information that an object is not there. It would be hard to deny that such articulation requires language. But in order to think with a map (to echo John Heil’s phrase), implicit mastery is all that is required. A competent map-user does not need to be able to articulate the conventions governing the map. They need simply to be guided by those conventions and to navigate in conformity with them.
For these reasons, I think that Camp is absolutely correct that maps can have more structure than I gave them credit for without being simply notationally different ways of writing down sentences. Her aim is to undermine standard arguments for the language of thought by showing that maps can function as combinatorial representational systems. However, the question that I am addressing in this chapter is somewhat different (and not one that she directly addresses). What I am interested in is whether thoughts can be represented nonlinguistically through structured maps, and here it is very unclear that structured maps can do the job.
Propositional attitude mindreading is metarepresentational because it involves spelling out how another agent represents the world. On almost all understandings of mindreading, this requires the mindreader to be able to think about the structured map as an articulation of how the other agent believes the world to be.1 There is a fundamental difference between thinking with a structured map, on the one hand, and thinking about a structured map as a way of representing the world, on the other. We saw that implicit mastery of the map’s representational conventions is all that’s required for the first of these. But it is not sufficient for the second. Using a structured map to represent another agent’s beliefs requires the mindreader to think directly about the map’s representational properties. It is not enough just to be guided by them, or to act in accordance with them. Explicit rather than implicit mastery is required. But, as pointed out above, that sort of explicit mastery brings language back into the picture. Using a structured map to represent the world need not be a linguistic achievement. But thinking about how another creature might represent the world by means of a structured map requires a level of explicit grasp of how the map is functioning as a tool for representing the world. And that, I claim, is language-dependent in a way that simply using a structured map need not be.
It is essential to my argument for the language-dependence of thinking about thinking that the crucial metarepresentational work cannot be done by a subpersonal language of thought. As discussed in Section 4, my argument against the language of thought rested upon the claim that propositional attitude mindreading requires beliefs to be represented in a way that makes them consciously accessible. Robert Lurz has taken issue with that part of my argument. He writes:
Bermúdez’s reasoning here appears to rest upon the dubious assumption that if the vehicles of thought are subpersonal, the thoughts themselves (i.e. propositional contents) those vehicles represent are as well. But what needs to be at the personal level in bouts of second-order cognitive dynamics are thoughts (i.e. propositional contents), not their representational vehicles. It is thoughts, after all, that we hold in mind, and it is the relations among thoughts that we consider and evaluate during second-order cognitive dynamics. We needn’t have any conscious accessibility to the representational vehicles of these thoughts in order to have conscious accessibility to the thoughts themselves.
(Lurz 2007, p. 288)
Lurz’s basic point is well taken, and I agree with him that a thought can be cognitively accessible without its vehicle being cognitively accessible. If that were not true, then there would be no room for discussion and argument about how thoughts are in fact vehicled – we could simply introspect the answer. Nonetheless, I am unconvinced by how he applies that point to my argument.
The problem is that “consciously accessible” is an equivocal expression. “Having a consciously accessible thought” can mean either “having a conscious thought” or “being conscious of a thought”, and these are two very different things. Eliding them runs the risk of collapsing the basic distinction between first-order thought (which is about the world) and second-order thought (which is about thoughts).
It is certainly true that one can have a conscious thought without being conscious of the vehicle of that thought. This seems almost always to be the case. But that is because having a conscious thought does not involve being conscious of a thought at all. The consciousness of a conscious thought is, as it were, directed outwards. To have a conscious thought is to be conscious of whatever it is that one’s thought is about. To have a conscious thought about the cat on the lawn is to be conscious of the cat on the lawn.2 This is a paradigm case of first-order thought about the world – the cat is the object of one’s thinking.
In contrast, being conscious of the thought that the cat is on the lawn is not an episode of first-order thought. Being conscious of a thought about the cat on the lawn is thinking about the thought, not thinking about the cat. The object of one’s thinking is not the cat on the lawn, but rather the thought that the cat is on the lawn.
Lurz’s point applies to first-order thought, but not (I claim) to second-order thought. We can think consciously about a cat without being conscious of the vehicle of our thinking. But we can only think consciously about the thought that the cat is on the lawn if the thought is vehicled in a certain way. A good analogy here is thinking about a sentence. We can only think about a sentence if it is written down or uttered. The sentence needs to be represented in a way that brings out its structure and composition. Thoughts are exactly the same. A thought is the thought that it is in virtue of its composition and structure. Thinking about a given thought, therefore, requires representing its composition and structure. So, the vehicle of second-order thought must make the structure and composition of the target thought perspicuous. But, by the argument of Sections 4 and 6, the vehicles of such second-order thinking must be linguistic.
In sum, objections to my argument in Bermúdez 2003a that thinking about thinking requires language have brought a number of interesting and important points into focus. These include a more nuanced picture of the relation between theory and experiment in discussing nonlinguistic cognition (Lurz); the role of linguistic imagery in thought (Heil); an insightful analysis of how cartographic representations can serve as vehicles for first-order thought (Camp); and the relation between content and vehicle in conscious thought (Lurz). These contributions have certainly helped me refine and develop the original argument. But, I submit, the basic claim that thinking about thinking requires language still stands.
1 The exception here is the radical version of simulationism initially proposed in Gordon 1986. On this view, mindreading requires only deploying one’s own propositional attitudes “off-line” in order to predict how others will behave, so that no metarepresentation is involved. If this is the correct model of propositional attitude mindreading, then it would undermine the distinction between implicit and explicit mastery that I am drawing. This is not the place to evaluate radical simulationism, but I have suggested elsewhere that off-line simulation is best viewed, not as a complete account of propositional attitude mindreading, but rather as one of a range of cognitive shortcuts that creatures employ to avoid the computational complexities of full-fledged metarepresentation (Bermúdez 2003b and 2006).
2 This only holds, of course, when there actually is a cat on the lawn. I know of no fully satisfying account of what goes on when one consciously thinks about the cat on the lawn and there is no cat on the lawn, but I see no plausibility in the suggestion that what one is conscious of in such cases is the thought that there is a cat on the lawn.
The arguments explored in this chapter were first presented in Bermúdez 2003a, with critical commentary in Lurz 2007, Heil 2012, and Camp 2007. For further discussion of related issues in animal cognition and metarepresentation, see the papers in Hurley and Nudds 2006 and Lurz 2009.
Baron-Cohen, S., A. M. Leslie, and U. Frith. 1985. Does the autistic child have a ‘theory of mind’? Cognition 21: 37–46.
Bermúdez, J. L. 2003a. Thinking Without Words. New York: Oxford University Press.
Bermúdez, J. L. 2003b. The domain of folk psychology. In A. O’Hear (Ed.), Minds and Persons. Cambridge: Cambridge University Press.
Bermúdez, J. L. 2006. A plausible eliminativism. In B. L. Keeley (Ed.), Paul Churchland. Cambridge: Cambridge University Press.
Bermúdez, J. L. 2009. Mindreading in the animal kingdom. In R. W. Lurz (Ed.), The Philosophy of Animal Minds. Cambridge: Cambridge University Press.
Bermúdez, J. L. 2011. The forcefield puzzle and mindreading in nonhuman primates. Review of Philosophy and Psychology 2: 397–410.
Bermúdez, J. L. 2014. Cognitive Science: An Introduction to the Science of the Mind (2nd edition). Cambridge: Cambridge University Press.
Bermúdez, J. L. 2018. Inner speech, determinacy, and thinking about thoughts. In P. Langland-Hassan and A. Vicente (Eds.), Inner Speech: Nature, Functions, and Pathology. Oxford: Oxford University Press.
Camp, E. 2007. Thinking with maps. Philosophical Perspectives 21: 145–82.
Carruthers, P. 2008. Meta-cognition in animals: A skeptical look. Mind and Language 23: 58–89.
Gordon, R. 1986. Folk psychology as simulation. Mind and Language 1: 158–71.
Heil, J. 2012. The Universe as We Find It. Oxford: Oxford University Press.
Heyes, C. M. 1998. Theory of mind in nonhuman primates. Behavioral and Brain Sciences 21: 101–34.
Hurley, S. L., and M. Nudds. 2006. Rational Animals? Oxford: Oxford University Press.
Lurz, R. W. 2007. In defense of wordless thoughts about thoughts. Mind and Language 22: 270–96.
Lurz, R. W. 2009. The Philosophy of Animal Minds. Cambridge: Cambridge University Press.
Povinelli, D., and J. Vonk. 2006. We don’t need a microscope to explore the chimpanzee’s mind. In S. Hurley and M. Nudds (Eds.), Rational Animals? Oxford: Oxford University Press.
Quine, W. V. O. 1951. Two dogmas of empiricism. The Philosophical Review 60: 20–43.
Smith, J. D., and D. A. Washburn. 2005. Uncertainty monitoring and metacognition by animals. Current Directions in Psychological Science 14: 19–24.
Vicente, A., and F. Martínez-Manrique. 2011. Inner speech: Nature and functions. Philosophy Compass 6: 209–19.