People engage in instrumental reasoning all the time, in fluid and complex ways. Thus: you’re at the office and want to use your new headphones, currently encased in non-frustration-free packaging. You could ask your colleague for scissors, but that would entail a lengthy conversation about Westworld. You wonder: might your keys be sharp enough to cut the plastic, if you could get a grip on the upper corner? Would an unfurled paper clip pierce it? Or: you’re tired of looking for parking, and want a house with a garage. How much would a three-bedroom house in the closest town with good schools cost? To afford a down payment within two years, you’d need to save at least $1,000 a month. Should you cancel cable and forgo all restaurants? Sell your car and commute by train? Reconcile with your rich but racist uncle?
In these cases and endless others, people form the intention to achieve a goal, G, by identifying a state of affairs M, which is neither inherently desirable nor currently actual, as to-be-done because it will centrally contribute to actualizing G. The ability to reason instrumentally is enormously practical, of course. But it is also theoretically interesting: it stands as a landmark on a trajectory from simple stimulus-response association to purely theoretical deliberation. A creature who can reason instrumentally doesn’t just respond directly to its immediate environment, as a mouse fleeing the scent of a cat does. Nor does it act directly to satisfy a need, like a hungry bird flying off to a cache of nuts. Instrumental reason severs the direct connection between representation and action by interposing a cognitive representation of a possible state.
By giving an agent a wider range of means to achieve its goals, instrumental reason also begins to connect an agent’s thoughts in richer, more flexible ways. As an agent’s goals become increasingly independent of particular means, and as its representations of the world become increasingly independent of particular responses, those cognitive states become more recognizable as desires and beliefs, as opposed to the mixed “pushmi-pullyu” representations – food-to-eat, predator-to-avoid – that are characteristic of simpler creatures (Millikan 1996; Papineau 2001).
Who can reason instrumentally? On the one hand, it seems that nonhuman animals (hereafter simply animals) sometimes solve problems in ways similar to the package-opening and house-buying deliberations above. Thus, Comins et al. (2011) observe that, from a troop of approximately 1,000 rhesus monkeys, one individual, ‘84J,’ can open the abundant local coconuts: by carrying them to a concrete dock (the island’s hardest, flattest surface) and performing a distinctive toss, 84J is able to enjoy a delicious, otherwise inaccessible food. On the other hand, since we lack the access to 84J’s mind that we have to our own minds (via introspection) and to other people’s minds (by verbal report), it’s not obvious that the underlying cognitive process is relevantly similar – in particular, it might have arisen “through trial and error association without reinforcement” (Comins et al. 2011: 2). More fundamentally, it’s not obvious what does need to be true of 84J’s cognitive abilities and mechanisms for his behavior to be justifiably described as a result of instrumental reasoning.
A venerable tradition holds that 84J can’t be engaged in instrumental reasoning (henceforth IR), because it is a distinctively human capacity. One route to this conclusion identifies genuine reason as a flexible, open-ended capacity to connect thoughts, of which IR is the most obvious practical instance. Thus, Descartes (1637/1985: 140) describes reason as a “universal instrument,” and defends the conclusion “not merely that the beasts have less reason than men, but … no reason at all” by appealing to the observation that even “madmen” and “the stupidest child,” but no animals, spontaneously “invent their own signs” “to make themselves understood.”
An alternative route focuses on reason’s depth rather than its breadth. Thus, Kant holds that reason is constituted by its availability for self-reflection: “the very existence of reason,” he claims, rests on “the freedom of critique” (1781/1999: A738f/B766f). Because only humans can interrogate the basis of and relations between their beliefs and desires, only they can take the responsibility required for genuine belief. By contrast, because animals merely respond to a world and a set of needs that are given to them, they lack the intellectual and moral status of rational agency.
Against such exclusionary views, Hume (1748/1999: 80) defends the continuity of reason by pointing out that animals also learn from experience to “infer some fact beyond what immediately strikes [their] senses”; by exploiting such inferences, he claims, they can “be taught any course of action, and most contrary to their natural instincts and propensities.” In particular, Hume rejects the hypothesis that the way ordinary humans make such inferences is by appeal to general causal laws, since these “may well employ the utmost care and attention of a philosophic genius to discover and observe.” Instead, he concludes, both humans and animals cognize entirely through associative habit (1748/1999: 80).
Contemporary philosophers and psychologists are less prone to treat reason as an all-or-nothing faculty, and more likely to identify clusters of distinct abilities and processes. But the debate between proponents of qualitative difference and continuity persists. And in that debate, instrumental reasoning serves as a central proving ground for distinguishing rational from merely associative cognitive processes, and for identifying the boundary of ‘distinctively human thought.’ Thus, Papineau (2001), Millikan (2006), and Korsgaard (2009) have all recently argued that animals’ cognitive activities are tethered to perception in a way that rules out IR.
Defenders of the human distinctiveness of IR deploy two converging lines of argument. On the one hand, they argue that IR entails an interlocking package of complex capacities that are only found in humans; on the other, they argue that putative instances of nonhuman IR can be explained by simpler associative mechanisms. In Section 2, we argue that each of these complex capacities can be implemented in a way that some other animals do plausibly instantiate. In Section 3, we outline some constraints on the production of IR, and argue that the conditions for goal-directed action, and IR in particular, do outstrip the resources of purely associative explanation. We conclude in Section 4 by sketching some key further differences between human and animal reason.
The core of instrumental reasoning is the identification of a merely possible, not inherently valuable state M as helping to actualize a goal state G. But not just any transition from a represented actual state of affairs A to G via M counts: the agent must act to actualize M because they recognize that M is appropriately connected to G. Thus, we can identify an epistemological and a metaphysical dimension to IR: the transition from representing A to G via M must be justified rather than accidental, in virtue of a grasp of the connection between M and G.
A common defense of the human distinctiveness of IR argues that only language has the expressive power to represent the connection between M and G in the right way. To be justified rather than accidental, the thought goes, the agent’s actualization of M must be motivated by a representation of a general connection between M and G, as in a causal law; thus, Papineau (2001: 153) holds that IR involves “the use of … explicit general information to guide action.” We know how this works with language: via syllogistic reasoning with sentences containing an operator embedding M and G and a modal operator on M. How else, one might wonder, could it go? As Devitt (2005: 147) puts it, “We understand inference in formal terms – in terms of rules that operate on representations in virtue of their structure. But we have no theory at all of formal inferential transitions between thoughts that do not have linguistic vehicles” (cf. also Bermúdez 2003).
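In sentential terms the inference might be sketched as follows; this rendering is ours rather than Papineau’s or Devitt’s own formalization, and ‘Goal’ and ‘Intend’ are simply informal labels for the relevant attitudes:

```latex
% A schematic practical syllogism: explicit general information linking M to G,
% plus the mere possibility of M, plus the goal G, yields the intention to
% actualize M.
\begin{align*}
&\text{(P1)}\quad \forall s\,\bigl(M(s) \rightarrow G(s)\bigr)
  && \text{general information: bringing about } M \text{ brings about } G\\
&\text{(P2)}\quad \Diamond M
  && M \text{ is an available, merely possible course of action}\\
&\text{(P3)}\quad \mathrm{Goal}(G)
  && G \text{ is desired}\\
&\;\therefore\;\; \mathrm{Intend}(M)
  && \text{so: act so as to actualize } M
\end{align*}
```

The question is whether anything with this inferential shape requires a specifically linguistic vehicle.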
An initial worry about a restriction of IR to formal inference over explicit representations of general information concerns the requirement of generality. For many cases of IR, the salient contingency which the agent recognizes is grounded in an intricate cluster of interacting forces; the generality of any represented causal ‘law’ would be highly gerrymandered, and the underlying physical mechanisms would indeed require “the utmost care and attention of a philosophic genius to discover” (Hume 1748/1999: 80). Instead of a general law, it seems that an agent merely needs to represent the presence of a token causal connection between M and G, such that M is a ‘difference-maker’ for G (Woodward 2003).
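The token requirement can be given a rough interventionist gloss; the following formulation is ours, adapted from the spirit of Woodward (2003) and Pearl (2000) rather than quoted from either. In circumstances A, M is a difference-maker for G just in case

```latex
P\bigl(G \mid \mathrm{do}(M),\, A\bigr) \;>\; P\bigl(G \mid \mathrm{do}(\neg M),\, A\bigr)
```

where do(·) marks an intervention rather than a mere observation. Nothing here requires representing a general law connecting M-type and G-type events.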
A more promising way to capture the requirement of generality holds that the agent must grasp the connection as one that obtains between two or more states in the world, independently of the agent. Just as an agent who represents space egocentrically (e.g. in a GPS navigator’s ‘first-person view’ v. traditional ‘aerial’ mode) will be unable to represent spatial relations among locations not involving her – say, from the nest to the pond when she is at the tree – so will a “causally egocentric” creature be limited to representing contingencies that are directly grounded in its own past interventions and immediate possible actions (Papineau 2001; Gopnik et al. 2004). An agent who grasps relations as objective thus treats them as being generally accessible, independently of its own current position and needs, in a way that the egocentric representer misses.
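The spatial analogy can be made concrete with a toy contrast; the code is purely illustrative, and the classes and landmark names are our own. An egocentric store records only relations between the agent and each landmark, so the nest-to-pond query cannot even be posed; a representation in a common, agent-independent frame supports it directly.

```python
from typing import Dict, Tuple

Point = Tuple[float, float]

class EgocentricView:
    """Records landmarks only as offsets from the agent's current position."""
    def __init__(self, offsets: Dict[str, Point]):
        self.offsets = offsets  # landmark -> (dx, dy) relative to the agent

    def vector_to(self, landmark: str) -> Point:
        return self.offsets[landmark]
    # There is no way to ask about the relation between 'nest' and 'pond'.

class AllocentricMap:
    """Records landmarks in a common frame, independently of the agent."""
    def __init__(self, coords: Dict[str, Point]):
        self.coords = coords    # landmark -> (x, y) in world coordinates

    def vector_between(self, a: str, b: str) -> Point:
        (ax, ay), (bx, by) = self.coords[a], self.coords[b]
        return (bx - ax, by - ay)

world = AllocentricMap({"tree": (0.0, 0.0), "nest": (3.0, 4.0), "pond": (7.0, 1.0)})
print(world.vector_between("nest", "pond"))  # a relation that does not involve the agent
```

The causally egocentric creature is in the position of EgocentricView: every contingency it stores has itself at one end.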
However, language isn’t the only format for encoding causal relations and transitions in a non-egocentric, inferentially valid way. For familiar reasons, not all transitions between representations can be validated ‘explicitly,’ in the sense of recurrence of symbols, on pain of regress (Carroll 1895). Different systems apportion the representational burden between syntactic vehicles and transformation rules in different ways (Anderson 1978); and a transition between representational states is valid just in case the application of those rules to those vehicles is reliably truth-preserving (Sloman 1978: 116).2
For causation in particular, flow-chart-like directed graphs or ‘Bayes nets’ provide a plausible, rigorously defined format for implementing non-egocentric causal knowledge and inference in humans (Pearl 2000; Gopnik et al. 2004; Holyoak and Cheng 2011; Elwert 2013). Further, there is evidence for non-egocentric causal representation in animals (Seed et al. 2009): among other findings, Blaisdell et al. (2006) found that rats differentiate common-cause and causal-chain structures and infer appropriate instrumental action from passive observation; while Call (2013: 12) argues that apes distinguish “mere co-occurrence” of cues and rewards from causal connections, and encode information about “object-object interactions.”
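A minimal worked example may help to show why such a format supports the observation-to-intervention transfer that Blaisdell et al. report. The following sketch is our illustration, with made-up parameters, not the authors’ model: in a common-cause net (Light → Tone, Light → Food), observing the tone is evidence of the light and hence of food, whereas producing the tone oneself severs the arrow into Tone and licenses no such expectation; in a chain (Tone → Light → Food), the two coincide.

```python
# Illustrative parameters only: a prior on the common cause, a single strength
# shared by every causal arrow, and a background rate for effects absent their cause.
P_LIGHT = 0.5
ARROW = 0.9
P_BASE = 0.05

def common_cause(mode: str) -> float:
    """P(food) after a tone in the net Light -> Tone, Light -> Food."""
    if mode == "observe":
        # Infer the light from the tone by Bayes' rule, then predict food.
        p_tone = P_LIGHT * ARROW + (1 - P_LIGHT) * P_BASE
        p_light_given_tone = P_LIGHT * ARROW / p_tone
        return p_light_given_tone * ARROW + (1 - p_light_given_tone) * P_BASE
    if mode == "do":
        # Graph surgery: do(Tone) severs Light -> Tone, so the tone carries no
        # information about the light; the prediction is just the prior on food.
        return P_LIGHT * ARROW + (1 - P_LIGHT) * P_BASE
    raise ValueError(mode)

def chain(mode: str) -> float:
    """P(food) after a tone in the net Tone -> Light -> Food.
    The tone is upstream of food, so observing and producing it coincide."""
    return ARROW * ARROW + (1 - ARROW) * P_BASE

print(common_cause("observe"), common_cause("do"), chain("do"))
# ~0.86 vs 0.475 vs ~0.82: only the common-cause learner should withhold its
# food expectation when it produces the tone itself.
```

A learner that had merely associated tone with food would show no such asymmetry between observing the cue and producing it.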
If we grant the possibility of non-egocentric but nonlinguistic causal inference from M to G, how should we understand an agent’s representing M as being merely possible, as is also required for IR? Rather than positing a modal operator within the represented content as would be natural in language, we might invoke a distinct attitude: of entertaining M, as opposed to believing or desiring it. Thus, Suddendorf and Whiten (2001), following Perner (1991), distinguish secondary representation from both primary and meta-representation: secondary representation “decouples” a represented goal G from the perceived reality A, so that G is “held in mind” simultaneously with and distinct from A. In IR, they suggest, an agent “collates” A and G by “mentally working back” from G to A. One standard model for such “working back” is simulation; on a simple view, simulation is just the off-line activation of cognitive, motor, and imagery mechanisms, producing a form of “trial and error in thought … quicker and safer than … either operant conditioning or natural selection” (Millikan 2006: 118). However, the process of “collating” A and G by entertaining M (and possibly alternatives M′, M″, …) also plausibly involves restructuring perceived and recalled information and relating distinct pieces of information (Call 2013: 15). Such simulation and structuring can plausibly be represented non-linguistically in graphic and imagistic formats.
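Put schematically, such offline simulation amounts to holding G decoupled from the perceived situation A, entertaining candidate means, running each through an internal forward model, and acting only on a candidate predicted to carry A into a state satisfying G. The sketch below is ours, with a toy forward model for the ‘Aesop’s fable’ tube task; it is not a claim about how any particular species implements the process.

```python
from typing import Callable, Iterable, Optional

State = frozenset    # a state is just a set of facts, e.g. {"water_low"}
Action = str

def simulate(model: Callable[[State, Action], State],
             current: State,
             goal: Callable[[State], bool],
             candidates: Iterable[Action]) -> Optional[Action]:
    """Offline rollout of each entertained means; nothing is yet done in the world."""
    for means in candidates:
        predicted = model(current, means)   # secondary representation: 'what if?'
        if goal(predicted):
            return means                    # collate A and G via this means
    return None                             # no represented path from A to G

def tube_model(state: State, action: Action) -> State:
    """Toy forward model: dropping a stone raises the water and the food."""
    if action == "drop_stone" and "water_low" in state:
        return (state - {"water_low"}) | {"water_high", "food_in_reach"}
    return state                            # other actions predicted not to help

A = frozenset({"water_low", "food_floating"})
chosen = simulate(tube_model, A, lambda s: "food_in_reach" in s,
                  ["tip_tube", "drop_stone"])
print(chosen)   # 'drop_stone'
```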
Finally, in the absence of a capacity for explicit self-critique, how should we understand the epistemic norm that the transitions among these representations of A, M, and G be justified? We’ve already seen that they can be implemented in a rigorously valid non-sentential system. A minimal further constraint is that the agent be sufficiently sensitive to the actual relations among A, M, and G, such that if they were to get information indicating alterations among those relations, their behavior would alter accordingly. More robustly, we might add a capacity for metacognition – monitoring the quality of information available either to oneself or to others – which is again possible in the absence of metarepresentation (Proust 2006, and Chapter 13 in this volume). Here too, some animals appear to pass the bar (Smith et al. 2003): for instance, dolphins and rhesus monkeys opt out of a visual-discrimination task in favor of a less-demanding, less-rewarding task as their performance becomes unreliable, suggesting some awareness of their unreliability.
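The opt-out behavior itself requires nothing metarepresentational. A bare-bones decision rule of the following sort suffices to capture the pattern Smith et al. report, provided the agent has some internal signal of how decisive its evidence is; the payoffs and threshold below are illustrative, not drawn from the studies.

```python
def choose_task(confidence: float,
                big_reward: float = 10.0,
                small_sure_reward: float = 3.0) -> str:
    """Opt out when the expected payoff of attempting the discrimination
    (reward only if correct) falls below the smaller but guaranteed reward."""
    expected_if_attempted = confidence * big_reward
    if expected_if_attempted < small_sure_reward:
        return "opt_out"
    return "attempt_discrimination"

for confidence in (0.95, 0.6, 0.25):
    print(confidence, choose_task(confidence))
# 0.95 and 0.6 -> attempt; 0.25 -> opt out, as performance becomes unreliable.
```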
In sum, we should attribute a capacity for IR when we have evidence that an agent acts to actualize M even though M is not inherently desirable, because it represents M in a non-egocentric way as a potential ‘difference-maker’ in a causal network connecting A to G, where its so acting is sensitive to the quality of its information about the relations among A, M, and G. We’ve already seen some evidence that some animals possess each of these constituent capacities. What about the whole package?
Most discussions of instrumental cognition in animals focus on tool use, especially in primates and some bird species (see Shumaker et al. 2011 for review). The least-sophisticated cases of putative tool use involve direct interventions on the environment, as in Köhler’s (1925) classic case of a chimpanzee moving a box to climb up and reach bananas. Following Piaget (1952), many comparative psychologists employ tasks challenging animals to pull rewards that are attached to strings (Jacobs and Osvath 2015), or placed on a tray or cloth. More complex, less perceptually driven tests for tool use, which have been passed by rooks, crows, orangutans, and rhesus monkeys, among others, include the ‘Aesop’s fable’ tube task, in which subjects retrieve food at the bottom of a tube by adding stones (or spit) to the tube in order to make the food float within reach; and the tube-trap task, in which they must insert a tool to retrieve a reward in one set of circumstances, when the trap is functional, but ignore the trap in other circumstances, as indicated by slight variations in the setup (Emery and Clayton 2009; Seed et al. 2009). At the most challenging end lies metatool use: employing one tool to construct or obtain a second, such as fracturing one stone with another to cut a piece of rope with a resulting shard, or using a short tool to dislodge a longer one. Such behavior has been observed in great apes (Toth et al. 2006; Mulcahy et al. 2005; Martin-Ordas et al. 2012), other primates (Mannu and Ottoni 2009), New Caledonian crows (Taylor et al. 2007; Wimpenny et al. 2009), and rooks (Bird and Emery 2009). Finally, animals may also use conspecifics in a way suggestive of IR: for instance, orangutan mothers will coerce their offspring to retrieve food they cannot reach themselves (e.g., by pushing them through a narrow opening), which they then steal (Völter et al. 2015).
In Section 2, we cleared the way for instrumental reasoning in animals by means of non-egocentric representations of causal contingency, and cited some evidence that some animals might engage in it. But as noted in Section 1, the same observable behavior can be produced by importantly different processes, ranging from trial and error to spontaneous insight. Which mechanisms for producing a representational connection from A to G via M count as underwriting IR?
The most obvious mechanisms to rule out are those that are largely innate and inflexible, like the greylag goose’s habit of rolling stray eggs back to her nest. Although the behavior appears purposeful, it is infamously automatic: even if her egg is removed just after she begins to roll it, the goose will complete the entire maneuver (Lorenz and Tinbergen 1938/1970). It is also plausible to exclude actions produced by direct response to perceptual features. For instance, an instrumental reasoner in a string-pulling task must tug the string because it represents a connection between string-pulling and food, not because it perceives the string as a visual extension of the food itself. Likewise, perceptual ‘affordances’ can be learned and quite rich, but don’t interpose an intermediate state represented in a secondary mode as a potential difference-maker for achieving a distinct goal. At the other extreme, IR needn’t be wholly underwritten by individual innovation. Just as few, if any, behaviors are completely innate, being almost always a mixture of genetic potential and learning (Staddon 2016), so too are few, if any, behaviors completely innovative. Genetic predispositions and individual learning are key preconditions for instrumental reasoning and action, for animals and humans alike.
In cognitive psychology, genuine reasoning is typically contrasted with association. While one might think that sophisticated, multistage processes like metatool use require reasoning, virtually all putative cases of IR have been re-described by skeptics in associative terms. Thus, in many metatool-use studies, the accessible tool is positioned near the inaccessible one, opening up the possibility that a subject wielding the former simply dislodged and acquired the latter through chance manipulation. Further, because subjects are often trained to perform a complex metatool task in stages, they might successfully retrieve a reward as the result of automatic chaining, whereby behaviors are linked sequentially as secondary reinforcers of the positive outcome (Epstein et al. 1984; Wimpenny et al. 2009; Martin-Ordas et al. 2012).
The possibility that even these paradigmatically rational behaviors can be explained associatively raises the specter that it may never be possible to establish a given behavior as resulting from genuinely rational as opposed to merely associative processes. One general methodological response is that the dichotomy itself is misguided. Insofar as the complex associative explanations invoked above appeal to transitions between internal states that are characterized in representational terms, they are themselves susceptible to, or tantamount to, rationalist reinterpretation (Papineau and Heyes 2006); and insofar as a system’s processing architecture imposes normatively appropriate functional constraints, associative mechanisms can themselves mimic, or implement, rational cognition (Dickinson 2012).
A more specific, ambitious response is that the conditions for IR identified in Section 2 do impose robust constraints that distinguish merely associative from rational processes, in ways that are susceptible to experimental test. IR involves an agent representing a goal state G independently of current circumstances. By contrast, because chaining utilizes associations between stimulus and response (A-M), but never between response and outcome (M-G), a blind ‘chainer’ will implement M in A regardless of whether G is currently a goal. The fact that rats trained to press a lever for food press less frequently when they find the food less desirable (due to pre-feeding) suggests both that their incentive values are modulated by their motivational states and that their actions are influenced by a sensitivity to action-outcome contingencies (Balleine and Dickinson 1998). Similarly, IR involves connecting M to G independently of association with a particular stimulus state A. The studies of non-egocentric causal learning through transfer from passive observation cited in Section 2 suggest that this condition is also met, insofar as the observed condition is of another animal performing an action which is structurally analogous to M in a situation structurally analogous to A, but where the test subject has not itself previously encountered A or performed M directly (Papineau and Heyes 2006; Dickinson 2012). Thus, putting aside the specific worries about appropriate controls on the purported demonstrations of metatool use as instances of IR above, it is plausible that at least some animals do act in goal-directed ways that are not amenable to standard associationist explanation.
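The logic of the devaluation test can be made vivid with a schematic contrast; the following is our illustration, not Balleine and Dickinson’s own model, and the numbers are arbitrary. A pure stimulus-response ‘chainer’ stores only A–M links, so it keeps pressing after pre-feeding; an agent that represents the M–G contingency together with G’s current value stops.

```python
class ChainingAgent:
    """Habitual: the lever context triggers pressing regardless of outcome value."""
    def __init__(self):
        self.sr_links = {"lever_present": "press"}   # A -> M only

    def respond(self, stimulus: str, outcome_value: float) -> str:
        return self.sr_links.get(stimulus, "no_response")

class GoalDirectedAgent:
    """Represents the action-outcome contingency and the outcome's current value."""
    def __init__(self, p_outcome_given_action: float = 0.9):
        self.p_o_given_a = p_outcome_given_action    # M -> G contingency

    def respond(self, stimulus: str, outcome_value: float) -> str:
        if stimulus == "lever_present" and self.p_o_given_a * outcome_value > 0.5:
            return "press"
        return "no_response"

for value in (1.0, 0.1):   # 0.1 ~ food devalued by pre-feeding
    print("outcome value", value,
          "| chainer:", ChainingAgent().respond("lever_present", value),
          "| goal-directed:", GoalDirectedAgent().respond("lever_present", value))
```

The devaluation-insensitive pattern is what blind chaining predicts; the reduced responding that Balleine and Dickinson report matches the goal-directed agent.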
We now have a better grip on what instrumental reasoning is and how it might be implemented. Instrumental reasoning interposes a representation of a non-valued state M between an agent’s representation of their current circumstances A and a goal state G, because they represent M as an achievable difference-maker for producing G. IR can be implemented in the absence of an expressively rich language, for instance through simulation using Bayes nets. And there is substantial, if not incontrovertible, evidence for IR in a range of nonhuman animals, especially rodents, corvids, and primates.
Still, there is a considerable grain of truth in Descartes’ assertion of a qualitative gap between humans and other animals. In its scope, flexibility, and frequency, humans’ capacity for problem-solving does approximate a “universal instrument” much more closely than even the most sophisticated cases of spontaneous metatool use cited above. In application to IR in particular, humans appear to have a markedly more nuanced grasp of causal networks as interacting clusters of multiple distinct forces. They have a markedly richer ability to explore alternative paths from A to G. And they have a markedly more robust ability to critique and revise those representations. Even if it is not an absolute condition on the possibility of instrumental reasoning, language clearly facilitates each of these abilities individually and in combination, in virtue of its combinatorial generality, its syntactic and semantic abstractness, and its indefinite recursive capacity (Camp 2015).
Our focus on IR should also not lead us to neglect an arguably more profound difference between human and animal cognition: not just a greater flexibility of means, but of ends. Millikan (2006: 122) claims that other animals, while capable of highly complex cognition, are still fundamentally ‘pushmi-pullyu’ creatures in the sense that they “solve only problems posed by immediate perception … by deciding among possibilities currently presented in perception, or as known extensions from current perception.” We have seen that some animals do exploit possibilities that are not, and have never been, directly presented in or extending from perception. But there is less evidence that they solve problems that are not immediately present to them.
One distinctively human tendency may be to create our own problems – in particular, to set multiple innovative long-term goals, and to adjust and adjudicate among them in flexible, ongoing ways. Korsgaard (2009: 38) argues that by “shattering” the “teleological conception of the world” as embodying a given, fixed set of distinctions and desires, Kantian self-conscious reflection “creates both the opportunity and the necessity for reconstruction.” We have seen that other animals are agents who do more than blindly respond to circumstances as given. But we humans may come closer to being unique in actively constructing ourselves as distinctive agents or selves (Camp 2011).
Imagination clearly plays a key role in instrumental reasoning, by enabling an agent to step back from current circumstances and identify alternative conditions that would make a difference to achieving a goal. Call points to a correlation between exploratory play and flexible problem solving, and suggests that play “performs a crucial role in the acquisition and storage of information” (2013: 12), especially of information about causal contingencies. However, play also involves trying on goals in a merely ‘as-if,’ exploratory mode. In this way, for humans at least, it also potentially contributes to the reconstructive project of making a self. In these respects, imagination should not be opposed to reason, but rather treated as an integral component of it.
1 Thanks to Ben Bronner, Federico Castellano, and Simon Goldstein for discussion, and Jake Beck for comments.
2 Papineau himself appears to understand explicitness in terms of an overall “system which processes … items of general information to yield new such general information” (2001: 155–6). He finds an evolutionary explanation in terms of language “attractive” but allows the possibility that “our ancestors played out various scenarios in their ‘mind’s eye’” (2001: 177).
S. Hurley and M. Nudds, Rational Animals? (Oxford: Oxford University Press, 2006) brings together a wide range of scientists and philosophers discussing what it means to be rational and what behaviors indicate rationality. R. Lurz, The Philosophy of Animal Minds (New York: Cambridge University Press, 2009) contains valuable essays by philosophers on animals’ capacity to reason. R. Shumaker, K. Walkup and B. Beck, Animal Tool Behavior: The Use and Manufacture of Tools by Animals (Baltimore, MD: The Johns Hopkins University Press, 2011) offers a rich compendium of examples of tool use in animals, ranging from spiders to primates. C. Sanz, J. Call and C. Boesch, Tool Use in Animals: Cognition and Ecology (Cambridge: Cambridge University Press, 2013) contains analytical essays by leading cognitive ethologists and animal psychologists about animal tool use.
Anderson, JR 1978, ‘Arguments concerning representations for mental imagery’, Psychological Review, vol. 85, no. 4, p. 249.
Balleine, BW, and Dickinson, A 1998, ‘Goal-directed instrumental action: Contingency and incentive learning and their cortical substrates’, Neuropharmacology, vol. 37, no. 4, pp. 407–419.
Bermúdez, J 2003, Thinking without words, Oxford University Press, Oxford.
Bird, CD, and Emery, NJ 2009, ‘Insightful problem solving and creative tool modification by captive nontool-using rooks’, Proceedings of the National Academy of Sciences, vol. 106, no. 25, pp. 10370–10375.
Blaisdell, AP, Sawa, K, Leising, KJ, and Waldmann, MR 2006, ‘Causal reasoning in rats’, Science, vol. 311, no. 5763, pp. 1020–1022.
Call, J 2013, ‘Three ingredients for becoming a creative tool user’, in CM Sanz, J Call and C Boesch (eds.), Tool use in animals: Cognition and ecology, Cambridge University Press, Cambridge.
Camp, E 2011, ‘Wordsworth’s Prelude, poetic autobiography, and narrative constructions of the self’, Nonsite.org vol. 3, http://nonsite.org/article/wordsworth%E2%80%99s-prelude-poetic-autobiography-and-narrative-constructions-of-the-self
Camp, E 2015, ‘Logical concepts and associative characterizations’, in E Margolis and S Laurence (eds.), The conceptual mind: New directions in the study of concepts, MIT Press, Cambridge, MA.
Carroll, L 1895, ‘What the tortoise said to Achilles’, Mind, vol. 104, no. 416, pp. 691–693.
Comins, JA, Russ, BE, Humbert, KA, and Hauser, MD 2011, ‘Innovative coconut-opening in a semi free-ranging rhesus monkey (Macaca mulatta): A case report on behavioral propensities’, Journal of Ethology, vol. 29, no. 1, pp. 187–189.
Descartes, R 1637/1985, ‘Discourse on method’, in J Cottingham, R Stoothoff and D Murdoch (eds.), The philosophical writings of Descartes, vol. 1, Cambridge University Press, Cambridge.
Devitt, M 2005, Ignorance of language, Oxford University Press, Oxford.
Dickinson, A 2012, ‘Associative learning and animal cognition’, Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 367, no. 1603, pp. 2733–2742.
Elwert, F 2013, ‘Graphical causal models’, in SL Morgan (ed.), Handbook of causal analysis for social research, Springer, New York.
Emery, NJ, and Clayton, NS 2009, ‘Tool use and physical cognition in birds and mammals’, Current Opinion in Neurobiology, vol. 19, no. 1, pp. 27–33.
Epstein, R, Kirshnit, CE, Lanza, RP, and Rubin, LC 1984, ‘“Insight” in the pigeon: Antecedents and determinants of an intelligent performance’, Nature, vol. 308, pp. 61–62.
Gopnik, A, Glymour, C, Sobel, DM, Schulz, LE, Kushnir, T, and Danks, D 2004, ‘A theory of causal learning in children: Causal maps and Bayes nets’, Psychological Review, vol. 111, no. 1, p. 3.
Holyoak, KJ, and Cheng, PW 2011, ‘Causal learning and inference as a rational process: The new synthesis’, Annual Review of Psychology, vol. 62, pp. 135–163.
Hume, D 1748/1999, An enquiry concerning human understanding, T Beauchamp (ed.), Oxford University Press, Oxford.
Jacobs, IF, and Osvath, M 2015, ‘The string-pulling paradigm in comparative psychology’, Journal of Comparative Psychology, vol. 129, no. 2, p. 89.
Kant, I 1781/1999, Critique of pure reason, P Guyer and A Wood (eds.), Cambridge University Press, Cambridge.
Köhler, W 1925, ‘Intelligence of apes’, The Journal of Genetic Psychology, vol. 32, pp. 674–690.
Korsgaard, CM 2009, ‘The activity of reason’, Proceedings and Addresses of the American Philosophical Association, vol. 83, no. 2, pp. 23–43.
Lorenz, K, and Tinbergen, N 1938/1970, ‘Taxis and instinctive behaviour pattern in egg-rolling by the Greylag goose’, Studies in animal and human behavior, vol. 1, Harvard University Press, Cambridge.
Mannu, M, and Ottoni, EB 2009, ‘The enhanced tool-kit of two groups of wild bearded capuchin monkeys in the Caatinga: Tool making, associative use, and secondary tools’, American Journal of Primatology, vol. 71, no. 3, pp. 242–251.
Martin-Ordas, G, Schumacher, L, and Call, J 2012, ‘Sequential tool use in great apes’, PLoS ONE, vol. 7, no. 12, p. E52074.
Millikan, R 1996, ‘Pushmi-pullyu representations’, Philosophical Perspectives, vol. 9, pp. 185–200.
Millikan, R 2006, ‘Styles of rationality’, in SL Hurley and M Nudds (eds.), Rational animals?, Oxford University Press, Oxford.
Mulcahy, NJ, Call, J, and Dunbar, RI 2005, ‘Gorillas (Gorilla gorilla) and orangutans (Pongo pygmaeus) encode relevant problem features in a tool-using task’, Journal of Comparative Psychology, vol. 119, no. 1, p. 23.
Papineau, D 2001, ‘The evolution of means-end reasoning’, Royal Institute of Philosophy Supplement, vol. 49, pp. 145–178.
Papineau, D, and Heyes, C 2006, ‘Rational or associative? Imitation in Japanese quail’, in SL Hurley and M Nudds (eds.), Rational animals?, Oxford University Press, Oxford.
Pearl, J 2000, Causality, Cambridge University Press, Cambridge.
Perner, J 1991, Understanding the representational mind, MIT Press, Cambridge.
Piaget, J 1952, The origins of intelligence in children, Norton, New York.
Proust, J 2006, ‘Rationality and metacognition in non-human animals’, in SL Hurley and M Nudds (eds.), Rational animals?, Oxford University Press, Oxford.
Seed, A, and Call, J 2009, ‘Causal knowledge for events and objects in animals’, in S Watanabe, AP Blaisdell, L Huber and A Young (eds.), Rational animals, irrational humans, Keio University Press, Tokyo.
Shumaker, RW, Walkup, KR, and Beck, BB 2011, Animal tool behavior: The use and manufacture of tools by animals, JHU Press, Baltimore, MD.
Sloman, A 1978, The computer revolution in philosophy: Philosophy, science and models of mind, Harvester Press, Brighton.
Smith, JD, Shields, WE, and Washburn, DA 2003, ‘The comparative psychology of uncertainty monitoring and metacognition’, Behavioral and Brain Sciences, vol. 26, no. 3, pp. 317–339.
Staddon, JE 2016, Adaptive behavior and learning, Cambridge University Press, Cambridge.
Suddendorf, T, and Whiten, A 2001, ‘Mental evolution and development: Evidence for secondary representation in children, great apes, and other animals’, Psychological Bulletin, vol. 127, no. 5, p. 629.
Taylor, AH, Hunt, GR, Holzhaider, JC, and Gray, RD 2007, ‘Spontaneous metatool use by New Caledonian crows’, Current Biology, vol. 17, no. 17, pp. 1504–1507.
Toth, N, Schick, K, and Semaw, S 2006, ‘A comparative study of the stone tool-making skills of Pan, Australopithecus, and Homo sapiens,’ in N Toth, K Schick and S Semaw, The Oldowan: Case studies into the earliest Stone Age, pp. 155–222, Stone Age Institute Press, Gosport.
Völter, CJ, Rossano, F, and Call, J 2015, ‘From exploitation to cooperation: Social tool-use in orangutan mother-offspring dyads’, Animal Behaviour, vol. 100, pp. 126–134.
Wimpenny, JH, Weir, AA, Clayton, L, Rutz, C, and Kacelnik, A 2009, ‘Cognitive processes associated with sequential tool use in New Caledonian crows’, PLoS ONE, vol. 4, no. 8, p. E6471.
Woodward, J 2003, Making things happen: A causal theory of explanation, Oxford University Press, Oxford.