25
Tracking and representing others’ mental states

Stephen A. Butterfill

1 Introduction

Few things matter more than the mental states of those nearby. Their ignorance defines limits on cooperation and presents opportunities to exploit in competition. (If she’s seen where you stashed those mealworms she’ll pilfer them when you’re gone, leaving you without breakfast. And you won’t get that grape if he hears you sneaking past.) What others feel, see and know can also provide information about events otherwise beyond your ken. It’s no surprise, then, that abilities to track others’ mental states are widespread. Many animals, including scrub jays (Clayton, Dally and Emery 2007), ravens (Bugnyar, Reber and Buckner 2016), goats (Kaminski, Call and Tomasello 2006), dogs (Kaminski et al. 2009), ring-tailed lemurs (Sandel, MacLean and Hare 2011), monkeys (Hattori, Kuroshima and Fujita 2009) and chimpanzees (Karg et al. 2015), reliably vary their actions in ways that are appropriate given facts about another’s mental states. What underpins such abilities to track others’ mental states?

There is a quite widely accepted answer. As in humans, so in other animals: abilities to track others’ mental states are underpinned by representations of those mental states. Some people seem less confident about lemurs or monkeys than chimpanzees, perhaps in part because these animals’ abilities to track others’ mental states appear less flexible (e.g. Burkart and Heschl 2007). Others caution that there is currently insufficient evidence to accept that any nonhuman animals ever represent others’ mental states (e.g. Whiten 2013). But overall, the view that abilities to track others’ mental states are underpinned by representations of those mental states is endorsed by many of those cited above for at least some nonhuman animals.

The simple answer will appear inescapable if we assume that tracking others’ mental states must, as a matter of logic, involve representing others’ mental states. But this assumption is incorrect. Contrast representing a mental state with tracking one. For you to track someone’s mental state (such as a belief that there is food behind that rock) is for there to be a process in you which nonaccidentally depends in some way on whether she has that mental state. Representing mental states is one way, but not the only way, of tracking them. In principle, it is possible to track mental states without representing them. For example, it is possible, within limits, to track what another visually represents by representing her line of sight only. More sophisticated illustrations of how you could, in principle, track mental states without representing them abound (e.g. Buckner 2014, p. 571f). What many experiments actually measure is whether certain subjects can track mental states: the question is whether changes in what another sees, believes or desires are reflected in subjects’ choices of route, caching behaviours, or anticipatory looking (say). It is surely possible to infer what is represented by observing what is tracked. But such inferences are never merely logical. To learn what underpins abilities to track others’ mental states, we would therefore need to evaluate competing hypotheses. In recognising this, we immediately face two requirements. The first requirement is a theoretically coherent, empirically motivated and readily testable hypothesis on which tracking mental states does not involve representing mental states. This requirement is currently unmet (Halina 2015, p. 486; Heyes 2015, p. 322) and, as the next section argues, surprisingly difficult to meet.
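The line-of-sight example can be made concrete with a minimal sketch. Everything here is an illustrative assumption of ours (the function names, the 2-D cone geometry, the 60° field of view); nothing this simple is proposed in the literature. The point is only that such a tracker nonaccidentally varies its behaviour with what a competitor sees while representing geometry, not mental states.

```python
import math

def in_line_of_sight(observer_pos, observer_heading_deg, target_pos,
                     field_of_view_deg=60.0):
    """True if the target falls within the observer's visual cone (2-D sketch)."""
    dx = target_pos[0] - observer_pos[0]
    dy = target_pos[1] - observer_pos[1]
    angle_to_target = math.degrees(math.atan2(dy, dx))
    # Smallest absolute difference between the two angles, in [0, 180].
    offset = abs((angle_to_target - observer_heading_deg + 180) % 360 - 180)
    return offset <= field_of_view_deg / 2

# A cache is at risk if any competitor has a line of sight to it; no
# representation of what the competitor sees or believes is involved.
cache = (5, 0)
competitors = [((0, 0), 0.0)]  # (position, heading in degrees)
print(any(in_line_of_sight(pos, hdg, cache) for pos, hdg in competitors))  # True
```

Because the test is purely geometric, it also exhibits the stated limits: it tracks visual access only in situations where line of sight and seeing happen to coincide.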

2 Pure behaviour reading: cast the demon out

Pure behaviour reading is the process of tracking others’ behaviours, including their future behaviours, independently of any knowledge of, or beliefs about, their mental states. Can research on pure behaviour reading supply hypotheses on which tracking mental states does not involve representing mental states?

Contrast two approaches to theorising about behaviour reading. One focusses on the behaviourist counterpart of Laplace’s demon. The behaviour-reading demon has unlimited cognitive capacities, perfect knowledge of history and can conceptualise behaviours in any way imaginable. Although blind to mental states, it can predict others’ future behaviours at least as well as any mindreader (Andrews 2005, p. 528; Halina 2015, p. 483f). Invoking the behaviour-reading demon makes vivid the point that the existence of abilities to track others’ mental states does not logically entail representations of mental states. But the behaviour-reading demon is little use when it comes to generating testable hypotheses. Not even the most exacting rigour requires excluding the possibility that an animal is a behaviour-reading demon before accepting that it can represent mental states.

The other approach to theorising about behaviour reading concerns actual animals rather than imaginary demons. Byrne (2003) studied a particularly sophisticated case of behaviour reading in Rwandan mountain gorillas. The procedure for preparing a nettle to eat while avoiding contact with its stings is shown in Figure 25.1. It involves multiple steps. Some steps may be repeated varying numbers of times, and not all steps occur in every case. The fact that gorillas can learn this and other procedures for acquiring and preparing food by observing others’ behaviour suggests that they have sophisticated behaviour-reading abilities (Byrne 2003, p. 513). If we understood these behaviour-reading abilities and their limits, we might be better able to understand their abilities to track mental states too.

We seek an account of pure behaviour reading to generate testable hypotheses about tracking mental states without representing them. This will involve at least three components: segmentation, categorisation and structure extraction.

First, it is necessary to segment continuous streams of bodily movements into units of interest. Humans can readily impose boundaries on continuous sequences of behaviour even as infants (Baldwin and Baird 2001). How could such segmentation be achieved? Commencement and completion of a goal or subgoal typically coincide with dramatic changes in physical features of the movements, such as velocity (Zacks, Tversky and Iyer 2001). Baldwin and Baird express this idea graphically with the notion of a ‘ballistic trajectory’ which provides an ‘envelope’ for a unit of action (Baldwin and Baird 2001, p. 174). Research using schematic animations has shown that adults can use a variety of movement features to group behavioural chunks into units (Hard, Tversky and Lang 2006).
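The segmentation idea can be pictured with a toy sketch. The function name, threshold and local-minimum rule are assumptions made here for illustration, not a model from the cited studies; the sketch only captures the thought that unit boundaries coincide with dramatic changes in physical features of movement such as velocity.

```python
def segment_by_velocity(speeds, threshold=0.2):
    """Posit unit boundaries where speed drops to a local minimum below
    `threshold` -- a crude stand-in for the idea that the commencement and
    completion of actions coincide with marked changes in movement."""
    return [i for i in range(1, len(speeds) - 1)
            if speeds[i] < threshold
            and speeds[i] <= speeds[i - 1]
            and speeds[i] <= speeds[i + 1]]

# Two 'ballistic' bursts of movement separated by a near-stop at index 4.
speeds = [0.1, 0.8, 1.2, 0.7, 0.05, 0.6, 1.1, 0.9, 0.1]
print(segment_by_velocity(speeds))  # [4]
```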

Figure 25.1 An analysis of the steps performed by the left and right hands in preparing nettles to eat without getting stung.

Source: Byrne (2003), figure 1 (p. 531).

A second component of behaviour reading is categorisation. Adult humans spontaneously label units of behaviour as ‘running’, ‘grasping’, or ‘searching’ (say). This is categorisation: two units which may involve quite different bodily configurations and joint displacements and which may occur in quite different contexts are nevertheless treated as equivalent. How are categories identified in pure behaviour reading? One possibility is that some categorisation processes involve mirroring motor cognition. When a monkey or a human observes another’s action, there are often motor representations in her that would normally occur if it were her, the observer, who was performing the action (see Rizzolatti and Sinigaglia 2010 for a review). Further, in preparing, performing and monitoring actions, units of action are represented motorically in ways that abstract from particular patterns of joint displacements and bodily configurations (e.g. Koch et al. 2010). These findings indicate that one process by which units of action are categorised is the process by which, in other contexts, your own actions are prepared.

A third component of behaviour reading is structure extraction. Many actions can be analysed as a structure of goals hierarchically ordered by the means-ends relation (see Figure 25.2 for an illustration). A behaviour reader should be able to extract some or all of this structure. But how? Units of behaviour that are all involved in bringing about a single outcome are more likely to occur in succession than chunks not so related. This suggests that transitional probabilities in the sequence of units could, in principle, be used to identify larger structures of units, much as phonemes can be grouped into words by means of tracking transitional probabilities (Gómez and Gerken 2000). We know that human adults can learn to group small chunks of behaviour into larger word-like units on the basis of statistical features alone (Baldwin et al. 2008). A statistical learning mechanism required for discerning such units is automatic (Fiser and Aslin 2001), domain-general (Kirkham, Slemmer and Johnson 2002) and probably present in human infants (Saffran et al. 2007) as well as other species, including songbirds (Abe and Watanabe 2011) and rats (Murphy, Mondragón and Murphy 2008). It is therefore plausible that some animals use statistical learning to extract some of the hierarchical structure of actions.
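The transitional-probability idea can be sketched as follows. The behavioural labels, the cutoff value and the boundary rule are illustrative assumptions of this sketch, not parameters from any cited study; the sketch shows only how dips in predictability between units could mark the edges of larger chunks.

```python
from collections import Counter

def transitional_probabilities(sequence):
    """P(next unit | current unit) for each adjacent pair in a behaviour stream."""
    pair_counts = Counter(zip(sequence, sequence[1:]))
    unit_counts = Counter(sequence[:-1])
    return {pair: n / unit_counts[pair[0]] for pair, n in pair_counts.items()}

def boundaries(sequence, probs, cutoff=0.5):
    """Posit a chunk boundary wherever a transition is no more probable
    than the cutoff -- dips in predictability mark edges of larger units."""
    return [i + 1 for i, pair in enumerate(zip(sequence, sequence[1:]))
            if probs[pair] <= cutoff]

# 'grasp strip fold' is a recurring chunk; what follows the chunk varies.
chunk = ['grasp', 'strip', 'fold']
stream = chunk + ['eat'] + chunk + ['discard'] + chunk + ['eat'] + chunk + ['drop']
probs = transitional_probabilities(stream)
print(boundaries(stream, probs))  # [3, 7, 11, 15]: a boundary after each chunk
```

Within-chunk transitions (‘grasp’ to ‘strip’, ‘strip’ to ‘fold’) have probability 1, so only the variable transitions out of a chunk fall below the cutoff.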

Figure 25.2 A routine action with a complex, hierarchical structure

Our primary concern here with behaviour reading is as a potential basis for abilities to track others’ mental states without representing them. But behaviour reading matters in other ways too. In mindreaders, behaviour reading enables mental state ascriptions (Newtson, Engquist and Bois 1977, p. 861; Baldwin et al. 2001, p. 708). Behaviour reading may also matter for efficiently representing events (Kurby and Zacks 2008), identifying the likely effects of actions (Byrne 1999), predicting when an event of interest will occur (Swallow and Zacks 2008, p. 121), and learning through observation how to do things (Byrne 2003). And of course a special case of pure behaviour reading, ‘speech perception’, underpins communication by language in humans.

What are the limits of pure behaviour reading? It is perhaps reasonable to assume that structure extraction depends on domain-general learning mechanisms. After all, such mechanisms appear sufficient, and there is currently little evidence for domain-specific mechanisms. This assumption allows us to make conjectures about the limits of pure behaviour reading. One limit concerns non-adjacent dependencies. There is a non-adjacent dependency in my behaviour when, for example, my now having a line of sight to an object that is currently unobtainable because of a competitor’s presence results in me retrieving the object at some arbitrary later time when the competitor is absent. In this case, my retrieving the object depends on my having had it in my line of sight, but there is an arbitrary interval between these events. The hypothesis is that structures involving non-adjacent dependencies are relatively difficult to learn and identify, and that difficulty increases as the number of non-adjacent dependencies increases.1 More generally, since birdsongs are discriminable and involve diverse behavioural structures (Berwick et al. 2011), we might take the Birdsong Limit as a rough working hypothesis: structures not found in birdsong cannot be extracted in pure behaviour reading.

Although not designed to test such limits, some existing experimental designs involve features which plausibly exclude explaining subjects’ performance in terms of pure behaviour reading only. To illustrate, consider the sequence of events in the ‘misinformed’ condition of Hare, Call and Tomasello (2001, Experiment 1). A competitor observes food being placed [A], the competitor’s access is blocked [B], stuff happens [Xⁿ], food is moved [C], more stuff happens [Yⁿ], and the competitor’s access is restored [D]. Finding evidence that chimpanzees can learn to identify patterns of this form [A B Xⁿ C Yⁿ D] and use them to predict conspecifics’ behaviours would represent a major discovery.
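Purely as an illustration, the schema can be written as a pattern over event sequences. The event codes are our own shorthand, and the Xⁿ and Yⁿ runs are treated as independent and of arbitrary length; the point is that the dependency a behaviour reader would have to learn spans the whole sequence.

```python
import re

# Hypothetical event codes: A = food placed in the competitor's view,
# B = access blocked, C = food moved, D = access restored;
# X and Y = irrelevant intervening events of arbitrary number.
MISINFORMED = re.compile(r'ABX*CY*D')

def matches_misinformed(events):
    """True if a sequence of event codes fits the schema. The dependency
    between A (the competitor seeing the food placed) and the behaviour
    to predict at D is non-adjacent: any amount of noise may intervene."""
    return MISINFORMED.fullmatch(''.join(events)) is not None

print(matches_misinformed(list('ABXXCYD')))  # True
print(matches_misinformed(list('ABCD')))     # True: zero intervening events
print(matches_misinformed(list('AXBCD')))    # False: noise before the block
```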

While it is probably impossible and certainly unnecessary to exclude the possibility that an animal is a behaviour-reading demon, it turns out to be quite straightforward (in theory, at least) to exclude the possibility that its actual behaviour-reading abilities are what underpin its abilities to track others’ mental states. Even in advance of knowing much about the processes and representations involved in pure behaviour reading, the assumption that structure extraction depends on domain-general learning mechanisms makes it unlikely that the relatively sophisticated abilities of corvids and great apes (say) to track others’ mental states could be underpinned by pure behaviour reading only.

3 End false belief about false belief

In the absence of an alternative, should we accept, provisionally, that in at least some nonhumans, tracking mental states does, after all, involve representing them? There are at least two obstacles to accepting this.

The first is a false belief about false belief. The false belief task (Wimmer and Perner 1983) is sometimes regarded as an acid test of mental state representations (see Bennett’s, Dennett’s and Harman’s influential responses to Premack and Woodruff 1978). Awkwardly, chimpanzees and other nonhuman animals have so far mostly thwarted efforts to show that they can track others’ false beliefs (e.g. Marticorena et al. 2011). False belief tasks continue to yield many important discoveries concerning humans (e.g. Milligan, Astington and Dack 2007; Devine and Hughes 2014). But there are reasons to doubt that the false belief task, despite its enormous value, is an acid test of mindreading. First, it is possible to track others’ false beliefs without actually representing them (Butterfill and Apperly 2013). Second, there is evidence that typically developing humans can represent incompatible desires before they can represent false beliefs (Rakoczy, Warneken and Tomasello 2007). Having an ability to track false beliefs is therefore not sufficient for being able to represent beliefs, and probably not necessary for being able to represent mental states. So whether we accept that any nonhumans can represent others’ mental states should not hinge on whether they can track false beliefs. As Premack and Woodruff (1978, p. 622) suggest, a false belief task is ‘another arrow worth having in one’s quiver rather than the assured bullseye that the philosophers suggest it is.’

There is a second, more challenging obstacle to accepting that some nonhumans can represent mental states. After claiming that ‘chimpanzees understand … intentions … perception and knowledge’, Call and Tomasello (2008) qualify their claim by adding that ‘chimpanzees probably do not understand others in terms of a fully human-like belief-desire psychology’ (p. 191). This is true. The emergence in human development of the most sophisticated abilities to represent mental states probably depends on rich social interactions involving conversation about the mental (e.g. Moeller and Schick 2006), on linguistic abilities (Milligan, Astington and Dack 2007), and on capacities to attend to, hold in mind and inhibit things (Devine and Hughes 2014). These are all scarce or absent in chimpanzees and other nonhumans. So it seems unlikely that the ways humans at their most reflective represent mental states will match the ways nonhumans represent mental states. Reflecting on how adult humans talk about mental states is no way to understand how others represent them. But then what could enable us to understand how nonhuman animals represent mental states?

The view that tracking mental states involves representing them leaves too many options open, as Call and Tomasello’s nuanced discussion shows. It is not a hypothesis that generates readily testable predictions. We need a theoretically coherent, empirically motivated and readily testable hypothesis on which tracking mental states does involve representing mental states (compare Heyes 2015, p. 321). Identifying such a hypothesis is the second requirement we would have to meet in order to evaluate competing hypotheses about what underpins abilities to track others’ mental states. And to meet this second requirement, we must first reject a dogma.

4 Reject the dogma of mindreading

Representing physical states, such as the masses or temperatures of things, requires having some model of the physical. Little follows directly from the fact that an individual can represent weight or other physical properties: everything depends on which model of the physical underlies her capacities. And if we ask, ‘What model of the physical characterises her thinking?’, we find that there are multiple, experimentally distinguishable candidate answers (e.g. Kozhevnikov and Hegarty 2001). Her physical cognition might be characterised by a Newtonian model of the physical, or perhaps by an Aristotelian model. And it might involve one or another measurement scheme. Perhaps, for example, she distinguishes the weights of things relative to her abilities to move them. Or maybe she relies on a system of comparisons. Different models of the physical and different systems of measurement generate different predictions about the limits of her abilities to track physical events.

Likewise for mental properties. The conjecture that someone can represent mental states – that she is a mindreader, or that she has a ‘theory of mind’ – does not by itself generate readily testable predictions. Everything depends on which model of the mental characterises her mindreading.

In asking which model of the mental – or of the physical – characterises a capacity, we are seeking to understand not how the mental or physical in fact are, but how they appear from the point of view of an individual or system. Specifying a model is a key part of providing what Marr calls a ‘computational description’ of a system (Marr 1982). The model need not be something used by the system: it is a tool the theorist uses in describing what the system is for and broadly how it works.

When an animal is suspected of mindreading, we must ask, ‘How does she model the mental?’ But it will make no sense to ask this question as long as we are in the grip of a dogma. The dogma is that all models of the mental comprise a family in which one of the models, the best and most sophisticated model, contains everything contained in any of the models.

This dogma implies that only animals whose capacities approximate those that humans exhibit in talking about mental states can be mindreaders. In rejecting the dogma, we also remove any reason to make this assumption. Different mindreaders may rely on different, incommensurable models of the mental and different schemes for distinguishing mental states. Mindreading in other animals need not be an approximate version of mindreading in adult humans any more than medieval physical thought approximates contemporary physical theories.

To see how strange endorsing the dogma would be, contrast the mental with the physical. The briefest encounter with the history of science reveals that there are several models of physical phenomena like movement, mass and temperature. Some models are more accurate but also relatively costly to apply, while others are easier to apply but less accurate. And there appear to be different kinds of physical cognition which involve different – and incommensurable – models of the physical (e.g. Helme et al. 2006). It would be astonishing to discover that there is one privileged model such that all physical cognition can be understood by reference to that particular model. The dogma of mindreading tacitly guides discussion only because, by contrast with the rich array of flawed theories of the physical, there is a scarcity of scientific theories of the sort of mental states which animals can track. But this scarcity can be alleviated.

5 Construct models of the mental

What is a model of the mental? On a widely accepted view, mental states involve subjects having attitudes toward contents (see Figure 25.3). Possible attitudes include believing, wanting, intending and knowing. (Not every model of the mental need include these attitudes.) The content is what distinguishes one belief from all others, or one desire from all others. The content is also what determines whether a belief is true or false, and whether a desire is satisfied or unsatisfied.

There are two main tasks in constructing a model of mental states. The first task is to characterise some attitudes. This typically involves specifying their distinctive functional and normative roles by developing a theory of the mental. The second task is to find a scheme for specifying the contents of mental states. This typically involves one or another kind of proposition.

One model of the mental is specified by minimal theory of mind (Butterfill and Apperly 2013, 2016), which repurposes a theory offered by Bennett (1976), building on insights from Gómez (2009) and Whiten (1994), among others. This theory – or, rather, series of nested theories – specifies states with stripped-down functional roles whose contents are distinguished by tuples of objects and properties rather than by propositions. These features ensure that, although minimal theory of mind is capable of underpinning abilities to track mental states, including false beliefs, in a limited but useful range of situations, realising minimal theory of mind need involve little conceptual sophistication and only modest cognitive resources.
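Purely as an illustrative sketch of the contrast between tuple-like and propositional contents (the class and function names are invented here, and nothing this simple appears in Butterfill and Apperly’s papers), a minimal, belief-like state might be encoded as a relation between an agent, an object and a location rather than as an attitude toward a proposition:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Registration:
    """A minimal, belief-like state: the agent 'registers' an object at a
    location. Its content is a tuple of particulars, not a proposition."""
    agent: str
    obj: str
    location: str

def predict_search(registrations, agent, obj):
    """A minimal mindreader expects agents to direct actions on an object
    at the location they registered it at -- even after it has moved, which
    is why registration supports tracking false beliefs within limits."""
    for r in registrations:
        if r.agent == agent and r.obj == obj:
            return r.location
    return None

# The competitor saw food cached at the rock; it has since moved to the log.
states = [Registration('competitor', 'food', 'rock')]
print(predict_search(states, 'competitor', 'food'))  # 'rock', not 'log'
```

Because the content is just a tuple, nothing here distinguishes how the food is presented to the competitor, which is one source of the model’s signature limits.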

The construction of minimal theory of mind enables us to specify some simple models of the mental, and to generate testable hypotheses about how mindreaders model minds. One such hypothesis concerns infant humans. The hypothesis is that a minimal theory of mind describes the model of the mental which underpins mindreading processes in these subjects. A key prediction of this hypothesis has so far mostly been confirmed (see Low et al. 2016). A minimal model of the mental might capture how minds appear from the point of view of some mindreading processes in some humans.

Figure 25.3 Mental states involve subjects having attitudes toward contents

Consider a related hypothesis about nonhuman animals: abilities to track mental states in some nonhumans are underpinned by capacities to represent mental states which involve a minimal model of the mental. (This hypothesis was suggested by Apperly and Butterfill 2009.) This hypothesis avoids objections arising from views on which nonhumans have representational powers whose emergence in human development involves linguistic abilities and communicative exchanges. It also generates testable predictions about the limits of mindreading in nonhumans, including predictions which distinguish hypotheses about minimal theory of mind from hypotheses about pure behaviour reading. And there is already a hint that one such prediction is correct (see Karg et al. 2016; although they do not put it this way, a signature limit of minimal mindreading is an inability, in general, to engage in level-2 perspective-taking).

Constructing models of the mental enables us to identify theoretically coherent, empirically motivated and readily testable hypotheses on which representations of mental states underpin abilities to track them. But of course this is just a first step towards understanding varieties of animal mindreading, one that opens the way for further theorising about the kinds of processes that underpin mindreading.

6 Conclusion

What underpins abilities to track others’ mental states? To answer this question, we would need to evaluate at least two competing hypotheses. First, we would need a theoretically coherent, empirically motivated and readily testable hypothesis on which tracking mental states does not involve representing mental states. Although no such hypothesis currently exists (Halina 2015, p. 486; Heyes 2015, p. 322), there is a body of research on behaviour reading from which a theory capable of generating readily testable predictions might be derived (see Section 2). Second, we would need a readily testable hypothesis on which representations of mental states underpin abilities to track them. The construction of minimal theory of mind enables us to generate one such hypothesis (Section 5).

How plausible are these hypotheses? Even in advance of having a theory of behaviour reading, we might assume that extracting structure in behaviour reading depends on domain-general learning mechanisms only. Given this assumption, it seems unlikely that nonhumans’ most flexible mental-state tracking abilities are underpinned by behaviour reading only (Section 2). This may motivate the search for alternative theories on which tracking others’ mental states does not involve representing them. It may even justify accepting, provisionally at least, that some animals other than humans represent mental states.

To accept this is not yet to have a theory about mindreading capable of generating readily testable predictions, however (Section 3). Understanding abilities to track others’ mental states is not simply a matter of categorising them as involving or not involving representations of mental states. Instead, we need to understand how different mindreaders model the mental.

Because different mindreaders may rely on different, incommensurable models of the mental and different schemes for distinguishing mental states, we need to identify models of the mental that we can use to generate readily testable hypotheses about different mindreaders’ capacities (Section 4). The construction of minimal theory of mind illustrates how to do this.

The hypothesis that abilities to track mental states in some nonhumans, including great apes and corvids, are underpinned by capacities to represent mental states which involve a minimal model of the mental, has three things going for it. It makes precise what researchers should care about in asserting that animals other than humans can represent others’ mental states. It isn’t already known to be false, and there is even a hint that its predictions are correct (Section 5). And it has no theoretically coherent, empirically motivated and readily testable competitors – at least not yet. So if a minimal model of the mental doesn’t characterise any nonhuman animals’ abilities to track others’ mental states, what does?

Note

1 Compare de Vries et al. (2012). Of course, whether non-adjacent dependencies are intrinsically difficult depends on the cognitive architecture (Uddén et al. 2012). There is evidence that monkeys (Ravignani et al. 2013) and chimpanzees (Sonnweber, Ravignani and Fitch 2015) can learn patterns involving one non-adjacent dependency.

References

Abe, Kentaro, and Dai Watanabe. 2011. ‘Songbirds possess the spontaneous ability to discriminate syntactic rules’. Nature Neuroscience 14 (8): 1067–1074, August. doi:10.1038/nn.2869.

Andrews, Kristin. 2005. ‘Chimpanzee theory of mind: Looking in all the wrong places?’ Mind & Language 20 (5): 521–536, November. doi:10.1111/j.0268-1064.2005.00298.x.

Apperly, Ian A., and Stephen A. Butterfill. 2009. ‘Do humans have two systems to track beliefs and belief-like states?’ Psychological Review 116 (4): 953–970.

Baldwin, Dare, Annika Andersson, Jenny R. Saffran, and Meredith Meyer. 2008. ‘Segmenting dynamic human action via statistical structure’. Cognition 106 (3): 1382–1407.

Baldwin, Dare, and Jodie A. Baird. 2001. ‘Discerning intentions in dynamic human action’. Trends in Cognitive Sciences 5 (4): 171–178.

Baldwin, Dare, Jodie A. Baird, Megan M. Saylor, and M. Angela Clark. 2001. ‘Infants parse dynamic action’. Child Development 72 (3): 708–717.

Bennett, Jonathan. 1976. Linguistic Behaviour. Cambridge: Cambridge University Press.

Berwick, Robert C., Kazuo Okanoya, Gabriel J. L. Beckers, and Johan J. Bolhuis. 2011. ‘Songs to syntax: The linguistics of birdsong’. Trends in Cognitive Sciences 15 (3): 113–121, March. doi:10.1016/j.tics.2011.01.002.

Buckner, Cameron. 2014. ‘The semantic problem(s) with research on animal mind-reading’. Mind & Language 29 (5): 566–589, November. doi:10.1111/mila.12066.

Bugnyar, Thomas, Stephan A. Reber, and Cameron Buckner. 2016. ‘Ravens attribute visual access to unseen competitors’. Nature Communications 7: 10506, February. doi:10.1038/ncomms10506.

Burkart, Judith Maria, and Adolf Heschl. 2007. ‘Understanding visual access in common marmosets, Callithrix jacchus: Perspective taking or behaviour reading?’ Animal Behaviour 73 (3): 457–469, March. doi:10.1016/j.anbehav.2006.05.019.

Butterfill, Stephen A., and Ian A. Apperly. 2013. ‘How to construct a minimal theory of mind’. Mind and Language 28 (5): 606–637.

Butterfill, Stephen A., and Ian A. Apperly. 2016. ‘Is goal ascription possible in minimal mindreading?’ Psychological Review 123 (2): 228–233, March. doi:10.1037/rev0000022.

Byrne, Richard W. 1999. ‘Imitation without intentionality. Using string parsing to copy the organization of behaviour’. Animal Cognition 2 (2): 63–72.

Byrne, Richard W. 2003. ‘Imitation as behaviour parsing’. Philosophical Transactions: Biological Sciences 358 (1431): 529–536.

Call, Josep, and Michael Tomasello. 2008. ‘Does the chimpanzee have a theory of mind? 30 years later’. Trends in Cognitive Sciences 12 (5): 187–192.

Clayton, Nicola S., Joanna M. Dally, and Nathan J. Emery. 2007. ‘Social cognition by food-caching corvids: The western scrub-jay as a natural psychologist’. Philosophical Transactions of the Royal Society B 362: 507–552.

de Vries, Meinou H., Karl Magnus Petersson, Sebastian Geukes, Pienie Zwitserlood, and Morten H. Christiansen. 2012. ‘Processing multiple non-adjacent dependencies: Evidence from sequence learning’. Philosophical Transactions of the Royal Society B: Biological Sciences 367 (1598): 2065–2076, July. doi:10.1098/rstb.2011.0414.

Devine, Rory T., and Claire Hughes. 2014. ‘Relations between false belief understanding and executive function in early childhood: A meta-analysis’. Child Development 85 (5): 1777–1794, September. doi:10.1111/cdev.12237.

Fiser, József, and Richard N. Aslin. 2001. ‘Unsupervised statistical learning of higher-order spatial structures from visual scenes’. Psychological Science 12 (6): 499–504.

Gómez, Juan Carlos. 2009. ‘Embodying meaning: Insights from primates, autism, and Brentano’. Neural Networks, What It Means to Communicate, 22 (2): 190–196, March. doi:10.1016/j.neunet.2009.01.010.

Gómez, Rebecca, and LouAnn Gerken. 2000. ‘Infant artificial language learning and language acquisition’. Trends in Cognitive Sciences 4 (5): 178–186.

Halina, Marta. 2015. ‘There is no special problem of mindreading in non-human animals’. Philosophy of Science 82 (3): 473–490. doi:10.1086/681627.

Hard, Bridgette Martin, Barbara Tversky, and David S. Lang. 2006. ‘Making sense of abstract events: Building event schemas’. Memory & Cognition 34: 1221–1235.

Hare, Brian, Josep Call, and Michael Tomasello. 2001. ‘Do chimpanzees know what conspecifics know?’ Animal Behaviour 61 (1): 139–151.

Hattori, Yuko, Hika Kuroshima, and Kazuo Fujita. 2009. ‘Tufted capuchin monkeys (Cebus apella)’. Animal Cognition 13 (1): 87–92, June. doi:10.1007/s10071-009-0248-6.

Helme, Anne E., Josep Call, Nicola S. Clayton, and Nathan J. Emery. 2006. ‘What do bonobos (Pan paniscus) understand about physical contact?’ Journal of Comparative Psychology 120 (3): 294–302, August. doi:10.1037/0735-7036.120.3.294.

Heyes, Cecilia. 2015. ‘Animal mindreading: What’s the problem?’ Psychonomic Bulletin & Review 22 (2): 313–327, August. doi:10.3758/s13423-014-0704-4.

Kaminski, Juliane, Juliane Bräuer, Josep Call, and Michael Tomasello. 2009. ‘Domestic dogs are sensitive to a human’s perspective’. Behaviour 146 (7): 979–998. doi:10.1163/156853908X395530.

Kaminski, Juliane, Josep Call, and Michael Tomasello. 2006. ‘Goats’ behaviour in a competitive food paradigm: Evidence for perspective taking?’ Behaviour 143: 1341–1356, November. doi:10.1163/156853906778987542.

Karg, Katja, Martin Schmelz, Josep Call, and Michael Tomasello. 2015. ‘Chimpanzees strategically manipulate what others can see’. Animal Cognition 18 (5): 1069–1076, May. doi:10.1007/s10071-015-0875-z.

Karg, Katja, Martin Schmelz, Josep Call, and Michael Tomasello. 2016. ‘Differing views: Can chimpanzees do Level 2 perspective-taking?’ Animal Cognition 19 (3): 555–564, February. doi:10.1007/s10071-016-0956-7.

Kirkham, Natasha Z., Jonathan A. Slemmer, and Scott P. Johnson. 2002. ‘Visual statistical learning in infancy: Evidence for a domain general learning mechanism’. Cognition 83 (2): B35–B42.

Koch, Giacomo, Viviana Versace, Sonia Bonnì, Federica Lupo, Emanuele Lo Gerfo, Massimiliano Oliveri, and Carlo Caltagirone. 2010. ‘Resonance of cortico-cortical connections of the motor system with the observation of goal directed grasping movements’. Neuropsychologia 48 (12): 3513–3520, October. doi:10.1016/j.neuropsychologia.2010.07.037.

Kozhevnikov, Maria, and Mary Hegarty. 2001. ‘Impetus beliefs as default heuristics: Dissociation between explicit and implicit knowledge about motion’. Psychonomic Bulletin & Review 8 (3): 439–453. doi:10.3758/BF03196179.

Kurby, Christopher A., and Jeffrey M. Zacks. 2008. ‘Segmentation in the perception and memory of events’. Trends in Cognitive Sciences 12 (2): 72–79.

Low, Jason, Ian A. Apperly, Stephen A. Butterfill, and Hannes Rakoczy. 2016. ‘Cognitive architecture of belief reasoning in children and adults: A primer on the two-systems account’. Child Development Perspectives, May. doi:10.1111/cdep.12183.

Marr, David. 1982. Vision: A Computational Investigation Into the Human Representation and Processing of Visual Information. San Francisco, CA: W.H. Freeman.

Marticorena, Drew C. W., April M. Ruiz, Cora Mukerji, Anna Goddu, and Laurie R. Santos. 2011. ‘Monkeys represent others’ knowledge but not their beliefs’. Developmental Science 14 (6): 1406–1416, November. doi:10.1111/j.1467-7687.2011.01085.x.

Milligan, Karen, Janet Wilde Astington, and Lisa Ain Dack. 2007. ‘Language and theory of mind: Meta-analysis of the relation between language ability and false-belief understanding’. Child Development 78 (2): 622–646, March. doi:10.1111/j.1467-8624.2007.01018.x.

Moeller, Mary Pat, and Brenda Schick. 2006. ‘Relations between maternal input and theory of mind understanding in deaf children’. Child Development 77 (3): 751–766, May. doi:10.1111/j.1467-8624.2006.00901.x.

Murphy, Robin A., Esther Mondragón, and Victoria A. Murphy. 2008. ‘Rule learning by rats’. Science 319 (5871): 1849–1851, March. doi:10.1126/science.1151564.

Newtson, Darren, Gretchen A. Engquist, and Joyce Bois. 1977. ‘The objective basis of behavior units’. Journal of Personality and Social Psychology 35 (12): 847–862.

Premack, David, and Guy Woodruff. 1978. ‘Does the chimpanzee have a theory of mind?’ Behavioral and Brain Sciences 1 (4): 515–526. doi:10.1017/S0140525X00076512.

Rakoczy, Hannes, Felix Warneken, and Michael Tomasello. 2007. ‘“This way!”, “No! That way!” – 3-year olds know that two people can have mutually incompatible desires’. Cognitive Development 22 (1): 47–68. doi:10.1016/j.cogdev.2006.08.002.

Ravignani, Andrea, Ruth-Sophie Sonnweber, Nina Stobbe, and W. Tecumseh Fitch. 2013. ‘Action at a distance: Dependency sensitivity in a New World primate’. Biology Letters 9 (6): 20130852, December. doi:10.1098/rsbl.2013.0852.

Rizzolatti, Giacomo, and Corrado Sinigaglia. 2010. ‘The functional role of the parieto-frontal mirror circuit: Interpretations and misinterpretations’. Nature Reviews: Neuroscience 11 (4): 264–274. doi:10.1038/nrn2805.

Saffran, Jenny R., Seth D. Pollak, Rebecca L. Seibel, and Anna Shkolnik. 2007. ‘Dog is a dog is a dog: Infant rule learning is not specific to language’. Cognition 105 (3): 669–680, December. doi:10.1016/j.cognition.2006.11.004.

Sandel, Aaron A., Evan L. MacLean, and Brian Hare. 2011. ‘Evidence from four lemur species that ringtailed lemur social cognition converges with that of haplorhine primates’. Animal Behaviour 81 (5): 925–931, May. doi:10.1016/j.anbehav.2011.01.020.

Sonnweber, Ruth, Andrea Ravignani, and W. Tecumseh Fitch. 2015. ‘Non-adjacent visual dependency learning in chimpanzees’. Animal Cognition 18 (3): 733–745, January. doi:10.1007/s10071-015-0840-x.

Swallow, Khena M., and Jeffrey M. Zacks. 2008. ‘Sequences learned without awareness can orient attention during the perception of human activity’. Psychonomic Bulletin & Review 15: 116–122.

Uddén, Julia, Martin Ingvar, Peter Hagoort, and Karl M. Petersson. 2012. ‘Implicit acquisition of grammars with crossed and nested non-adjacent dependencies: Investigating the push-down stack model’. Cognitive Science 36 (6): 1078–1101, August. doi:10.1111/j.1551-6709.2012.01235.x.

Whiten, Andrew. 1994. ‘Grades of mindreading’. In Children’s Early Understanding of Mind, edited by Charlie Lewis and Peter Mitchell, 47–70. Hove: Erlbaum.

Whiten, Andrew. 2013. ‘Humans are not alone in computing how others see the world’. Animal Behaviour 86 (2): 213–221, August. doi:10.1016/j.anbehav.2013.04.021.

Wimmer, Heinz, and Josef Perner. 1983. ‘Beliefs about beliefs: Representation and constraining function of wrong beliefs in young children’s understanding of deception’. Cognition 13: 103–128.

Zacks, Jeffrey M., Barbara Tversky, and Gowri Iyer. 2001. ‘Perceiving, remembering, and communicating structure in events’. Journal of Experimental Psychology: General 130 (1): 29–58.