Arthropod intentionality?1

Andrew Knoll and Georges Rey

Introduction

A ubiquitous idiom in cognitive science is talk of representation.

Thus, one reads of the visual system representing edges, surfaces, color and objects; birds representing the stars; bees the azimuth of the sun, and ants the direction of food from their nest. We will presume here that such talk treats the system as in some way mental, involving the puzzling phenomenon of intentionality: representations are about a certain subject matter, and they may be non-factive, non-existential and aspective: i.e., they can be false, represent non-existing things, and usually represent things “as” one way or another, e.g., a four-sided figure as a square or as a diamond. That is, representations have “correctness” (or “veridicality”) conditions which specify when they’re correct and in what way. We will regard those conditions as providing the “content” of the representation.2

An obviously important question is when talk of such intentional representation is literally true, and when it is merely metaphorical or a façon de parler. Sunflowers “follow” the sun through the course of a day, presumably not because they literally represent the sun, but simply because cells on the sunless side of the stem grow faster, causing the plant to droop towards it.3 Drooping sunflowers at night don’t misrepresent the position of the sun.

In this short entry, we want to address this question by focusing on some of the simplest animal “minds” that have so far been investigated: those of arthropods, specifically ants and bees. This is partly because their relative simplicity permits a clearer understanding of what’s relevant to literal intentional ascription than is easy to acquire of more complex animals, particularly human beings; and partly because they seem very near – or past – the limits of such ascription. Getting clearer about them should help clarify what’s essential in the more complex cases. Moreover, ants and bees have been the subject of quite exciting, detailed research, with which we think any philosopher of mind ought to be acquainted.

Whether a system has literal intentionality has sometimes been thought to turn on its cognitive architecture. For example, Carruthers (2004) argues that some insects (ticks, caterpillars, Sphex and digger wasps) have an inflexible architecture, which is unamenable to explanation in terms of intentional attitude states, while the behavior of insects that exhibit flexible navigational capacities, such as honeybees, is best explained in terms of practical syllogisms operating over states with intentional content. We agree with Carruthers’ general point that the flexibility of a system may be a good guide to whether it involves states with intentional content. But we think that much turns on the details of this flexibility, in particular on how much it involves a certain kind of information integration, which we think in turn requires the features of intentionality we have mentioned. The empirical jury is still out on the cognitive architecture of ants and bees; however, there is a prima facie case that some ant navigation involves a flexible architecture that doesn’t require intentional explanation, while honeybees have one that does.

The desert ant (Cataglyphis fortis)

First, consider the desert ant, Cataglyphis fortis, which lives in the relatively featureless Tunisian desert. It goes on meandering foraging expeditions from its nest that can cover 100m. After finding food, it can return on a direct path to its nest, despite its tortuous outbound route.

Cataglyphis relies on several systems to perform its navigational feats, including a sun compass, wind compass, odor beaconing system, and retinotopic landmark guidance system. Its principal navigation system, however, is a “dead reckoning” or path integration (“PI”4) system. This system keeps track of the steps the ant has taken, and of the polarization of incoming light, which usually changes as a result of changes in the ant’s direction of travel. By realizing a simple vector algebra, the system computationally transforms these correlates of distance and direction, and generates the vector sum of the distance-direction vectors that describe its outward walk. It then follows the inverse of that vector back to its nest.
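The vector algebra here is simple enough to sketch. The following is a minimal illustration of path integration; the function name, step encoding, and units are our own, and the ant’s actual substrate is of course neural, operating on step counts and polarization cues rather than Cartesian coordinates:

```python
import math

def path_integrate(steps):
    """Accumulate the outbound displacement vectors and return the
    homing vector: the inverse of their sum.
    Each step is a (distance, heading_in_radians) pair, standing in
    for the proximal correlates of distance and direction."""
    x = y = 0.0
    for distance, heading in steps:
        x += distance * math.cos(heading)
        y += distance * math.sin(heading)
    home_distance = math.hypot(x, y)
    home_heading = math.atan2(-y, -x)  # points back toward the start
    return home_distance, home_heading

# A meandering outbound walk: 3 m east, 4 m north, 3 m west.
walk = [(3.0, 0.0), (4.0, math.pi / 2), (3.0, math.pi)]
dist, heading = path_integrate(walk)
# Net displacement is 4 m due north, so the homing vector is 4 m due south.
```

Nothing in this computation refers to the nest or the terrain as such; it operates entirely on the proximal correlates fed into it, which is just the point at issue below.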

Our question is whether ascribing states with intentional content to this computational process is part of the best explanation of the PI system. For Charles Gallistel (1990: 58–83), a representation just is a state in a computational system that stands in an isomorphic relation to the structure of the environment and functions to regulate the behavior of an animal in that environment. By these lights, because the ant has states that are functionally isomorphic to the distance and direction it traverses, it therefore has “representations” of distance and direction.

Tyler Burge (2010: 502) complains that such states are “representational” in name only. That is, the states are not about actual spatial distance and direction, in the interesting sense that they have correctness conditions that do any explanatory work. The ant would behave the same whether or not it had representations with those or any other correctness conditions.

One needs to be careful here. It just begs the question against Gallistel to claim that his “representations” play no explanatory role in his theory. Insofar as he thinks representations just are states isomorphically related to and triggered by environmental phenomena, they consequently play an explanatory role if the correlations do. Theoretical reduction is not elimination: if talk of “salt” can be replaced by talk of “NaCl,” salt will play precisely the same explanatory role as NaCl! The substantive issue is whether properties that are arguably constitutive of being an intentional representation – e.g., the properties we mentioned at the start – are essential to the explanation. But we can give Gallistel the word “representation” for functional isomorphisms, and use “i-representation” for the representations that exhibit intentional properties.5

Burge’s complaint is nevertheless on the right track. Isomorphic correlations don’t exhibit the intentional properties that make the representational idiom distinctively interesting. Correlations are factive and relate existing phenomena: no state of an animal can be correlated with features of non-existent landscapes, but animals might represent them nonetheless. Moreover, if a state is sometimes mistakenly correlated with environmental features that fail to take an ant back to its nest, then that’s as real a part of the correlation as features that do take it back.

This latter fact raises what has been called the “disjunction” problem (Fodor 1987), a consequence of an i-representation’s non-factivity. If an i-representation can be erroneous, what determines when that might be? To take a much-discussed example, suppose a frog flicks its tongue at flies, but equally at beebees and mosquitos. Is the content of the relevant representation [fly], and the beebees are errors, or is it [fly or beebee] – “[flybee]”? – and flies and beebees are right and the mosquitos errors? Or perhaps it is merely [moving black dot], and all of those are correct but [moving square] would be wrong. Generally, any non-factive representation can be entokened under conditions C1, C2, C3, …, Cn, for indefinitely large n: what determines which of these conditions are the correct ones and which mistakes?6

Many philosophers have proposed solving the disjunction problem by imposing further constraints on correlations or other natural relations – say, that they must be law-like, obtaining under “normal” circumstances (Dretske 1987); that they must be specified by an “interpretation function” (Cummins 1989); that the correctness conditions involve evolutionary selection (Millikan 1984; Dretske 1986; Neander 1995, 2017); or that erroneous conditions asymmetrically depend upon the correct ones (Fodor 1987, 1990). Any of these constraints might succeed abstractly in distinguishing correct from incorrect uses of an i-representation. But, although defining correctness conditions is certainly an important issue, Burge’s point is an additional one. The question he raises is whether any assignment of correctness conditions, appropriately idealized or not, would be explanatory. We want to agree with Burge that, insofar as the ant seems insensitive to whether any of its states are in error, correctness conditions seem irrelevant to that explanation, however they might be defined.

Cataglyphis: navigation without i-representations

As we’ve noted, the ant’s navigational capacities are sensitive to a wide variety of proximal inputs beyond those that factor directly into the PI system.7 The ant can follow odor concentrations emanating from food and nests (Buehlmann, Hansson, and Knaden 2012, 2013); its antennae are sensitive to displacement, which ordinarily correlates well with wind direction, and which the ant can use to set its direction of travel (Müller & Wehner 2007; Wolf and Wehner 2000); and it has systems that track changes in polarized light and also photoscopic patterns that track the position of the sun in the sky (Wehner and Müller 2006). Additionally, it is able to perform its PI in three dimensions, when its foraging path takes it up and down hills, and even when it is forced to trip and stumble over corrugations on its foraging paths.8 More surprisingly still, Steck, Wittlinger, and Wolf (2009) showed that amputation of two of the ant’s six legs doesn’t impede the ant’s successful navigation, even though such amputations cause the ant to stumble and use irregular gaits.

The ant is also sensitive to visual stimuli that correlate with landmarks. The prevailing view is that it responds to stored retinotopic “snapshots” of landmarks in its terrain,9 which it can match with current retinal stimuli in order to influence navigation in a variety of ways (Collett, Chittka, and Collett 2013). Cartwright and Collett (1983, 1987) have described an algorithm that operates only upon proximal retinal stimuli to implement these capacities.
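The core of such an algorithm can be sketched as a mismatch score that the ant’s movements act to reduce. This is a toy stand-in for Cartwright and Collett’s actual model; the flat-list-of-intensities encoding is our simplification:

```python
def snapshot_mismatch(stored, current):
    """Sum-of-squares difference between a stored retinotopic snapshot
    and the current retinal image, both given as flat lists of
    intensities. Guidance consists in moving so that this score
    decreases; nothing in the computation refers to the distal
    landmarks themselves, only to patterns of retinal stimulation."""
    return sum((s - c) ** 2 for s, c in zip(stored, current))

snapshot = [0.9, 0.1, 0.9, 0.1]  # hypothetical stored retinal pattern
assert snapshot_mismatch(snapshot, snapshot) == 0.0  # zero at the stored vantage point
```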

It might be supposed that this sensitivity to a wide variety of stimuli is enough to establish that Cataglyphis makes use of intentional states, i-representing the wind direction, sun position, and landmarks that are germane to computing its location. This inference is too hasty. Insofar as intentionality is genuinely explanatory, its properties, e.g., non-factivity, should support counterfactuals, indicating how, ceteris paribus, the ant would respond if the representation were false.10 On the face of it, if a system’s computations are counterfactually defined only over purely proximal inputs, then it would behave the same whenever different distal stimuli had the same proximal effects – e.g., in the case of an ant navigating by PI, a vector trajectory toward the nest vs. the same trajectory away from the nest. The fact that it’s a distal error would make no difference to the ant: it wouldn’t lead the ant to correct it.11 Classifying states of the ant as “true” or “false” relative to the distal stimuli they are responding to would capture no generalizations not already accounted for by their response to proximal stimuli.

Indeed, not being able to recover from error seems precisely to be Cataglyphis’ plight. Ants that are displaced after finding food will walk in the direction that would have taken them back to their nest had they not been moved (Wehner and Srinivasan 1981). Ants that have pig bristle stilts attached to their legs after they find food end up overshooting their nest on their homebound walk, whereas ants whose legs are amputated end up undershooting it (Wittlinger, Wehner, and Wolf 2006, 2007). One might think, given enough time, the ants will eventually be able to recover from such displacements. But Andel and Wehner (2004) gathered data indicating that, even given ample time, ants can’t so correct. They manipulated the ant’s PI system so that it ran in the direction away from its nest upon getting food,12 and then recorded the behavior of the ants for three minutes after they had executed this PI vector. For those three minutes, the ants did run back and forth parallel to their PI vector. But ants execute this back and forth after completing all of their PI walks, whether or not they succeed in taking them toward the nest. The behavior seems to be not a correction from error, but mere execution of the motor routine triggered by activation of the PI vector. The ants have been observed persisting in this motor routine for up to two hours without finding their nest upon having been displaced (Müller and Wehner 1994: 529). They seem to lack the cognitive resources to recover from error.

Of course, it’s still possible in these instances that there just isn’t enough information available to the ant to allow it to revise course. But there are instances in which the ants are unable to use available proximal stimuli to orient themselves even if those same stimuli can orient them in other circumstances. For example, ants deprived of polarized light can use the sun compass to navigate just as accurately to and from the nest. However, if an ant uses polarized light to chart a course from nest to food, and then is deprived of polarized light cues, it cannot use its sun compass to navigate back home, even though sun compass cues are still readily available (Wehner and Müller 2006). The ant can’t use the sun compass to correct its course, though it could have had it been deprived of polarized light from the start. Perhaps it just doesn’t like using the sun compass, or it falsely believes the sun compass is inaccurate under these conditions – but absent such alternate accounts, the best explanation is that the ant is not i-representing the direction to its nest, but executing a routine that’s sensitive only to stimulation of the polarization detectors.

The similar Australian desert ant, Melophorus bagoti, also demonstrates insensitivity to stimuli that in other circumstances would allow it to recover from displacements (Wehner et al. 2006). These ants use their landmark guidance system to establish one habitual route from the nest to food, and another from the food to the nest. If displaced to any arbitrary point on their nest-bound route, the ants use their landmark guidance to navigate back to the nest. But if displaced from their nest-bound route to a spot on their food-bound route, they behave as though they have been displaced to an unknown location. They just walk on the trajectory output by the nest-bound PI vector – even though the surrounding landmarks should be sufficient to guide the ant back to its food source and thence back to the nest. Again, the ants seem to be relying on triggered motor routines that cannot be revised in light of new information. Whether states of the system are “correct” or “incorrect” will make no difference to the operation of the system in any counterfactual conditions. So, attributing i-representations to the ant’s navigation system provides no explanatory gain.

The issues can get subtle: it turns out the ant exhibits some flexibility. For example, the PI and landmark guidance system do seem to interact. When the output of the PI system and that of the landmark guidance system conflict, ants steer a course intermediate between them (Collett 2012). This behavior might be thought to be evidence of the ant i-representing the locations of various landmarks, mapping them onto locations i-represented by its PI system, and then correcting its course when there is a mismatch between the i-representational outputs of these two systems. However, to the contrary, Collett proposes that the ant is simply computationally superimposing the outputs of the two systems to arrive at this motor routine, a computation of what Burge (2010: 501) calls a “weighted average” that would appear not to require intentionality: the two systems don’t permit the ant to recover from error, but just – again, fortuitously – to avoid making errors in the first place.
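Such superimposition needs no intentional vocabulary to state. It can be sketched as a weighted circular average of the two systems’ output headings; the function and the equal weighting are illustrative assumptions, not Collett’s published parameters:

```python
import math

def combine_headings(pi_heading, lm_heading, pi_weight=0.5):
    """Superimpose the outputs of the PI and landmark-guidance systems
    as a weighted average of two unit vectors; headings in radians."""
    w = pi_weight
    x = w * math.cos(pi_heading) + (1 - w) * math.cos(lm_heading)
    y = w * math.sin(pi_heading) + (1 - w) * math.sin(lm_heading)
    return math.atan2(y, x)

# Conflict: PI outputs due east (0), landmark guidance due north (pi/2).
# With equal weights the ant steers the intermediate course, northeast.
course = combine_headings(0.0, math.pi / 2)
```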

Similar points apply to the ant’s supposed ability to use its wind compass to compensate for uncertainty in the PI system. When walking to a familiar food source, ants use their wind compass to walk to a position downwind of the food (Wolf and Wehner 2000). They then rely on the odor plumes emanating from the food to guide them the rest of the way, and walk in the direction of increasing odor concentration, changing how far downwind they walk as a linear function of the distance between the food and the nest. Wolf and Wehner (2005: 4228) conclude that this behavior may be driven by what “might be interpreted as the ants’ own assessment of their navigation uncertainty”: the ant correctly i-represents that its error in navigating to the food increases as the distance to the food increases, and compensates by aiming for a target downwind of the food just beyond the maximum possible error. That way, if the ant errs maximally in the upwind direction, it will still arrive downwind of the food. If it errs maximally in the downwind direction, it will still be in contact with odors that can guide it to the food.

Nonetheless, there’s an alternative, non-intentional, explanation. Upon receiving wind stimuli from the ant’s antennae, the PI system multiplies the direction component of the motor routine by a factor of the distance component. It’s a happy accident that this factor corresponds to the ants’ actual tendency to err, and that in so doing, it takes the ant to an area appropriately downwind of the food source. We need not suppose that the ant has i-representations of its own error factor, or of the distance of the food from the nest.
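On this deflationary reading, the computation amounts to no more than scaling a downwind offset by the distance component; the error rate and safety margin below are hypothetical numbers chosen purely for illustration:

```python
def downwind_aim_distance(nest_to_food, error_rate=0.1, margin=0.5):
    """Return how far downwind of the food to aim, growing linearly
    with the nest-to-food distance, so that even a maximal PI error
    still leaves the ant downwind of the food, inside the odor plume.
    error_rate and margin are hypothetical parameters, not measured
    constants from the Wolf and Wehner studies."""
    max_error = error_rate * nest_to_food
    return max_error + margin  # metres downwind of the food

# At 20 m from the nest the ant aims 2.5 m downwind of the food; even
# erring maximally upwind (2 m), it still ends up downwind of the food.
offset = downwind_aim_distance(20.0)
```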

If this explanation is correct, the ant does manifest at least a degree of Carruthers’ “flexibility,” but, as the examples illustrate, this flexibility can be accomplished simply by rote, non-intentional operations over proximal stimuli. Wehner and colleagues13 claim that the ant’s integration of all its navigation systems is best understood in just this way, as a “toolkit” (Wehner 2009). Each “tool” – the PI system, polarization compass, sun compass, wind compass, and landmark guidance system – can be fully characterized in terms of computational processes operating over proximal inputs. Interactions among the systems are explained by taking the outputs from one system as inputs to another, which in turn trigger its operation. For example, whether input from the sun compass affects the output of PI depends on what input it’s receiving from the polarized light compass. But, again, while such interactions decrease the likelihood of error, they don’t require recovery from it, and so the intentional content of the input would still seem to be explanatorily inert.

Whether this “toolkit” architecture continues to hold under continued empirical scrutiny as the correct model of Cataglyphis cognition remains to be seen. But, at the least, Wehner’s model shows us how it’s possible for a creature to exhibit extensive cognitive integration and behavioral flexibility without having intentional states.

Honeybees: navigation with i-representations

In contrast to the toolkit architecture for the ant, Menzel and Giurfa (2001)14 propose a more integrated architecture for the honeybee (Apis mellifera) that does seem best characterized in intentional terms. Honeybees have a suite of modular navigation systems similar to the ant’s: a polarization compass, an optic flow detector that correlates with speed and distance flown, a PI system that combines these inputs, a visual landmark guidance system, and the like. Whereas Wehner’s ant architecture specifies how the deliverances of each subsystem supply input to others, Menzel and Giurfa propose that, in the honeybee, outputs of individual modular navigation systems enter into a common “central integration” space (CIS). The systems are free to influence one another in indefinite ways before outputting motor routines: deliverances from any one subsystem can, in principle, have an effect at any point on any other.

Evidence that bees employ such central integration comes essentially from two studies. The key finding in each is that bees evince systematic sensitivity to indefinite disjunctions of proximal stimuli. Menzel et al. (2005) displaced bees from locations in their foraging grounds to a variety of novel locations, taking care to shield them from visual stimuli during displacement. Upon release, most of the bees initially fly on a course that would have taken them back to the hive were they still at the point from which they had been displaced. So far, this is the same behavior displayed by Cataglyphis under analogous conditions. But, unlike the ants, the bees then change course and make their way back, either directly to the hive or to a previously encountered feeder, and thence on to the hive. They do this on the basis of specific input from release points that do not correspond to positions the bees have been at before, so it is impossible for them to have stored retinal snapshots that they can match to their current positions.15 Unlike the ants, the bees seem sensitive to errors at arbitrarily different points in their flights and are able to recover from them, all of which invites explanation in terms of i-representation.

Moreover, bees also navigate in response to observing the famous “waggle dances,” performed by conspecifics at the hive, which indicate the distance and direction from the hive to feeding locations.16 In a second study, Menzel et al. (2011) discovered that bees who have been trained on a route from the hive to one feeder (FT), but then observe the dance of another bee at the hive indicating the distance and direction to a second feeder (FD), are able to pursue novel shortcuts between the two feeders (see Figure 1.1). In particular, bees can fly a route from the hive in the direction of FD, but then switch over mid-course and chart a path to FT. Furthermore, once the bees have arrived at either FD or FT, they are able to fly to the other feeder first before returning to the hive. Once again, their navigational capacity seems to elude generalization in terms of proximal stimuli alone. Exposure to the waggle dance eventuates not just in a particular motor routine, but rather a capacity that seems capable of taking the bee to the same location via indefinitely many different routes.


Figure 1.1 Bees take novel shortcuts between a trained route to a feeder (FT) and dance-indicated route to another feeder (FD).

Distal i-representations provide a common coin that allows for the bees’ recovery from error at arbitrarily different points, and therefore for generalizations about the operations of the CIS across an indefinitely large, disjunctive motley of proximal input.17 Given the motley, there aren’t law-like generalizations of the form: “Bees will modify their motor routine in response to proximal stimuli of type x.” Instead, such generalizations need to be of the form: “Bees will modify their motor routine in response to updated distal representations of its location relative to the hive.” Distal i-representations capture what’s in common across different occasions, different bees and different proximal input.18

The role of i-representations is reinforced by consideration of further features of intentionality that we don’t have space adequately to discuss: non-existentiality and sensitivity to aspect. It’s certainly a plausible interpretation of the above experiments that the bees represent the feeders as such, and would continue to i-represent them even had they been removed. More interestingly, for an animal’s responses to be fully effective, information from different input sources has to bear on the same aspects of the world, e.g., whether a distal object is represented, say, as a feeder and not as a trap. A distal object might in fact be both, and which of these the agent represents it as will make a difference to how this information is integrated with the rest of the animal’s representations – in many instances, making a difference between life and death. These topics deserve much further discussion.19 But, towards that, we conclude with impressive evidence of bees’ susceptibility to perceptual illusions, specifically their sensitivity both to “non-existent” distal objects, and to the aspects under which they perceive them.

Van Hateren, Srinivasan, and Wait (1990) gathered data suggesting that bees are sensitive to Kanizsa rectangle illusions (see Figure 1.2).20 Trained to associate sugar water with lines oriented in a particular direction, they responded to both genuine rectangles and Kanizsa figures oriented in the same direction – even though, in the latter case, there is no actual rectangle (occlude the “pacman” figures, and the appearance of a rectangle disappears). Bees weren’t sensitive to collections of pacman figures at the same positions but rotated so as to disrupt the illusion. Their representations thereby exhibited all three of the features that are taken to be characteristic of intentionality: they were non-factive, indeed, erroneous ones of a non-existent object, which they saw as (or “qua,” or under the aspect of being) rectangular.21


Figure 1.2 A Kanizsa rectangle

Conclusion: (explanatory) i-representation iff cognitive integration?

We tentatively conclude that the evidence so far suggests that bees do seem to i-represent features of their distal environments, while desert ants do not. A further tentative conclusion we’d like to offer is that there is no integration without i-representation, where “integration” means the kind of responsiveness to indefinite ranges of proximal stimuli at indefinitely different points that is displayed by central integration systems. We are also inclined, even more tentatively, to propose the converse: that there is at least no explanatory i-representation without integration.22 “Informational” systems that can be characterized in terms merely of many separate systems sensitive to a limited range of proximal stimuli do not require intentional explanation in and of themselves. Intentional explanation is only needed when the integration of information from different subsystems requires generalization over distal stimuli. The resulting i-representations are non-factive, non-existential and (plausibly) aspective, in the way that the states of bees, but not of the ants, seem to be.

Notes

1 We thank Carsten Hansen, Marc Hauser and Karen Neander for insightful comments.

2 The term “intentional(ity)” was resurrected from scholastic philosophy by the nineteenth-century Austrian philosopher Franz Brentano (1874/1995), who plausibly claimed that intentionality was a mark of the mental, and, more controversially, that it was irreducible to the physical (note that “intentional” here and below doesn’t mean the usual “deliberate”). See Chisholm (1957) for the classic introduction of it to Anglo-American philosophy, and Jacob (2014) for a recent review of the literature.

3 See Whippo (2006).

4 “Path integration” is the usual term in the literature, as is the abbreviation “PI,” which we adopt particularly to avoid confusion with our use of “integration” below for a particular sort of processing. See Srinivasan (2015) for a recent review of PI in ants and bees.

5 Rescorla (2013: 96) makes a similar allowance. As Gallistel notes, “representation” in mathematics is indeed used for mere isomorphism.

6 Fodor (1987) originally raised this problem, based on famous experiments of Lettvin, McCulloch, and Pitts (1959/68). See Millikan (1989) and Neander (1995, 2017) for further discussion. It is virtually identical to the problem Kripke (1980/2004) claims to find in Wittgenstein (1953). Burge (2010: 322–323) dismisses the problem as “largely an artifact of reductive programs,” or efforts to define the intentional in non-intentional terms. Although we don’t share his dismissal of these efforts, we agree with him (2010: 298) that they are inessential to psychology: indeed, that “one could hardly have better epistemic ground to rely on a notion than that it figures centrally in a successful science” (2010: 298), whether or not it is reducible to physics (whatever that might actually amount to). But the disjunction problem is separate from reductionism: as the general form indicates, the problem arises for any representations that are non-factive, or, in Fodor’s (1990: Ch. IV) term, “robust” – they can be entokened erroneously – as he claims any serious representation must be.

7 Burge (2010: 502) acknowledges as much, and was focusing on PI merely as a way of marking distinctions among bases for explanatory ascriptions of i-representations.

8 See Grah, Wehner and Ronacher (2005, 2007, 2008); Wohlgemuth, Ronacher, and Wehner (2001); Wohlgemuth, Ronacher, and Wehner (2002) and Wintergerst and Ronacher (2012) for the hill data; see Steck et al. (2009) for the corrugations.

9 See Wehner (2009: 88), Ronacher (2008: 59), Collett (2010) and Wystrach and Graham (2012: 16–17). We assume the “snapshots” are proximal patterns produced by landmarks, as are the patterns produced by their nests and prey.

10 Along lines of Pietroski and Rey (1995), we use “ceteris paribus” to rule out indefinitely many independent influences, e.g., memory dysfunction, change in motivation, motor inability: i.e., were no such factors at work, then the non-factivity should make a difference to the animal’s behavior. Similar remarks apply to the non-existential and aspective properties, which there is not space to discuss (but see the brief discussion of illusions at the end).

11 Which is not to say that a creature recovering from error need have an i-representation of (in)correctness itself, à la Davidson (1975); just a sensitivity to when an i-representation does(n’t) in fact apply. Could there be errors in proximal i-representations? Arguably, without distality there is no objective basis for an explanatorily relevant distinction between correct and incorrect (cp. Burge 2010: 396).

12 This particular experiment was on another, similar species of Cataglyphis: C. bicolor.

13 See Cruse and Wehner (2011); Ronacher (2008) and Wehner et al. (2006).

14 See also Giurfa and Menzel (2001); Wiener et al. (2011) and Giurfa and Menzel (2013).

15 For the record, Wehner (2009: 93), Cruse and Wehner (2011), and Collett, Chittka, and Collett (2013: R795–R797) dispute Menzel et al.’s claims, arguing that the bees’ behavior could be explained with the same retinotopic landmark guidance systems they ascribe to the desert ant, which, we’ve seen, can be plausibly characterized in terms of non-intentional computations on proximal stimuli. For the sake of clarifying the distinctions we’re after, we’re going to stick with Menzel’s theory, while acknowledging it hasn’t been fully established.

16 We put aside whether responding to and performing these waggle dances requires i-representations (see Rescorla 2013, for discussion). Tautz et al. (2004) and Wray et al. (2008) cast doubt on the long reported result that bees reject as implausible dances indicating food in the middle of a lake (Gould and Gould 1982).

17 That is, the disjunction would consist of the motley proximal stimuli that have nothing in common other than they are evidence of the distal stimulus. Moreover, the disjunction could be indefinitely extended counterfactually, e.g., by increased sensitivity of the animal’s receptors, or by further deliberation, linking new proximal evidence to the distal. The point is sometimes expressed in terms of such creatures having a “cognitive map” (see, e.g., Menzel et al. 2005, 2011).

18 Someone might think one could type individuate the content of the i-representations in terms of their computational role. But if different proximal states give rise to the same distal content in different bees, then explanatory generalizations across bees would be unlikely in the extreme – unless, of course, one distinguishes essential from accidental roles, which it’s by no means clear it’s possible to do in a principled way (see Fodor 1998).

19 As does another issue related to representations, the “systematicity” of bee navigation: e.g., [they can navigate right and then left] iff [they can navigate left and then right]. See Fodor (1987) for the general issue, and Tetzlaff and Rey (2009) for discussion of it in relation to the bees.

20 van Hateren et al. (1990) caution that their findings are preliminary; Horridge, Zhang, and O’Carroll (1992) show bees are sensitive to other illusory contours. See Rey (2012) for discussion of the significance of Kanizsa figures for theories of intentionality, and the problem of “non-existent objects.”

21 Other examples include cats trying to catch illusory snakes (https://youtu.be/CcXXQ6GCUb8) and dogs chasing illusory flying things (Dodman 1996). See Nieder (2002) for a review.

22 Note that the perceptual constancies stressed by Burge (2010: 408ff) as crucial to intentionality also involve generalizations over indefinite proximal stimuli, as in varying perspectival views of a shape.

References

Andel, D., and Wehner, R. (2004) “Path Integration in Desert Ants, Cataglyphis: How to Make a Homing Ant Run Away From Home,” Proceedings of the Royal Society B: Biological Sciences, 271(1547), 1485–1489. doi:10.1098/rspb.2004.2749

Brentano, F. (1874/1995) Psychology From an Empirical Standpoint (A. Rancurello, D. Terrell, and L. McAlister, trans.), London: Routledge and Kegan Paul.

Buehlmann, C., Hansson, B. S., and Knaden, M. (2012) “Path Integration Controls Nest Plume Following in Desert Ants,” Current Biology, 22, 645–649. doi:10.1016/j.cub.2012.02.029

——— (2013) “Flexible Weighing of Olfactory and Vector Information in the Desert Ant Cataglyphis fortis,” Biology Letters, 9(3), 1–4. doi:10.1098/rsbl.2013.0070

Burge, T. (2010) Origins of Objectivity, Oxford: Oxford University Press.

Carruthers, P. (2004) “On Being Simple Minded,” American Philosophical Quarterly, 41(3), 205–220.

Cartwright, B. A., and Collett, T. S. (1983) “Landmark Learning in Bees,” Journal of Comparative Physiology A, 151(4), 521–543. doi:10.1007/bf00605469

——— (1987) “Landmark Maps for Honeybees,” Biological Cybernetics, 57(1–2), 85–93. doi:10.1007/bf00318718

Chisholm, R. M. (1957) Perceiving: A Philosophical Study, Ithaca, NY: Cornell University Press.

Collett, M. (2010) “How Desert Ants Use a Visual Landmark for Guidance Along a Habitual Route,” Proceedings of the National Academy of Sciences, 107(25), 11638–11643. doi:10.1073/pnas.1001401107

——— (2012) “How Navigational Guidance Systems are Combined in a Desert Ant,” Current Biology, 22(10), 927–932. doi:10.1016/j.cub.2012.03.049

Collett, M., Chittka, L., and Collett, T. (2013) “Spatial Memory in Insect Navigation,” Current Biology, 23(17), R789–R800. doi:10.1016/j.cub.2013.07.020

Cruse, H., and Wehner, R. (2011) “No Need for a Cognitive Map: Decentralized Memory for Insect Navigation,” PLoS Computational Biology, 7(3), e1002009. doi:10.1371/journal.pcbi.1002009

Cummins, R. (1989) Meaning and Mental Representation, Cambridge, MA: MIT Press.

Davidson, D. (1975) “Thought and Talk,” in Samuel D. Guttenplan (ed.) Mind and Language (pp. 7–23), Oxford: Clarendon Press.

Dodman, N. (1996) The Dog Who Loved Too Much, New York: Bantam Books.

Dretske, F. (1986) “Misrepresentation,” in R. Bogdan (ed.) Belief: Form, Content, and Function (pp. 17–36), Oxford: Oxford University Press.

——— (1987) Explaining Behavior: Reasons in a World of Causes, Cambridge, MA: MIT Press.

Fodor, J. A. (1987) Psychosemantics, Cambridge, MA: MIT Press.

——— (1990) A Theory of Content and Other Essays, Cambridge, MA: MIT Press.

——— (1998) Concepts: Where Cognitive Science Went Wrong, Cambridge, MA: MIT Press.

Gallistel, C. R. (1990) The Organization of Learning, Cambridge, MA: MIT Press.

Giurfa, M., and Menzel, R. (2013) “Cognitive Components of Insect Behavior,” in R. Menzel and P. R. Benjamin (eds.) Invertebrate Learning and Memory (pp. 14–25), Amsterdam: Elsevier Academic Press.

Gould, J. L., and Gould, C. G. (1982) “The Insect Mind: Physics or Meta-Physics?” in D. R. Griffin (ed.) Animal Mind – Human Mind (pp. 269–298), Berlin: Springer-Verlag.

Grah, G., and Ronacher, B. (2008) “Three-Dimensional Orientation in Desert Ants: Context-Independent Memorisation and Recall of Sloped Path Segments,” Journal of Comparative Physiology A, 194(6), 517–522. doi:10.1007/s00359-008-0324-4

Grah, G., Wehner, R., and Ronacher, B. (2005) “Path Integration in a Three-Dimensional Maze: Ground Distance Estimation Keeps Desert Ants Cataglyphis fortis on Course,” Journal of Experimental Biology, 208(21), 4005–4011. doi:10.1242/jeb.01873

——— (2007) “Desert Ants Do Not Acquire and Use a Three-dimensional Global Vector,” Frontiers in Zoology, 4(1), 12. doi:10.1186/1742-9994-4-12

Horridge, G. A., Zhang, S. W., and O’Carroll, D. (1992) “Insect Perception of Illusory Contours,” Philosophical Transactions: Biological Sciences 337(1279), 59–64.

Jacob, P. (2014) “Intentionality,” in E. N. Zalta (ed.) Stanford Encyclopedia of Philosophy (Winter 2014 Edition), http://plato.stanford.edu/archives/win2014/entries/intentionality/

Kripke, S. (1982) Wittgenstein on Rules and Private Language: An Elementary Exposition, Cambridge, MA: Harvard University Press.

Lettvin, J. Y., Maturana, H. R., McCulloch, W. S., and Pitts, W. H. (1959/68) “What the Frog’s Eye Tells the Frog’s Brain,” Proceedings of the IRE, 47, 1940–1951.

Menzel, R., and Giurfa, M. (2001) “Cognitive Architecture of a Mini-Brain: The Honeybee,” Trends in Cognitive Sciences, 5(2), 62–71. doi:10.1016/s1364-6613(00)01601-6

Menzel, R., Greggers, U., Smith, A., Berger, S., Brandt, R., Brunke, S., … Watzl, S. (2005) “Honey Bees Navigate According to a Map-Like Spatial Memory,” Proceedings of the National Academy of Sciences, 102(8), 3040–3045. doi:10.1073/pnas.0408550102

Menzel, R., Kirbach, A., Haass, W., Fischer, B., Fuchs, J., Koblofsky, M., … Greggers, U. (2011) “A Common Frame of Reference for Learned and Communicated Vectors in Honeybee Navigation,” Current Biology, 21(8), 645–650. doi:10.1016/j.cub.2011.02.039

Millikan, R. G. (1984) Language, Thought, and Other Biological Categories: New Foundations for Realism, Cambridge, MA: MIT Press.

——— (1989) “Biosemantics,” Journal of Philosophy, 86, 281–297.

Müller, M., and Wehner, R. (1994) “The Hidden Spiral: Systematic Search and Path Integration in Desert Ants, Cataglyphis fortis,” Journal of Comparative Physiology A, 175, 525–530.

——— (2007) “Wind and Sky as Compass Cues in Desert Ant Navigation,” Naturwissenschaften, 94(7), 589–594. doi:10.1007/s00114-007-0232-4

Neander, K. (1995) “Misrepresenting and Malfunctioning,” Philosophical Studies, 79, 109–141.

——— (2017), A Mark of the Mental: In Defense of Informational Teleosemantics, Cambridge, MA: MIT Press.

Nieder, A. (2002) “Seeing More Than Meets the Eye: Processing of Illusory Contours in Animals,” Journal of Comparative Physiology A: Neuroethology, Sensory, Neural, and Behavioral Physiology, 188(4), 249–260.

Pietroski, P., and Rey, G. (1995) “When Other Things Aren’t Equal: Saving Ceteris Paribus Laws from Vacuity,” The British Journal for the Philosophy of Science, 46(1), 81–110. doi:10.1093/bjps/46.1.81

Rescorla, M. (2013) “Millikan on Honeybee Navigation and Communication,” in D. Ryder, J. Kingsbury, and K. Williford (eds.) Millikan and Her Critics, Malden, MA: John Wiley & Sons.

Rey, G. (2012) “Externalism and Inexistence in Early Content,” in R. Schantz (ed.) Prospects for Meaning, New York: de Gruyter.

Ronacher, B. (2008) “Path Integration as the Basic Navigation Mechanism of the Desert Ant Cataglyphis fortis,” Myrmecological News, 11, 53–62.

Srinivasan, M. V. (2015) “Where Paths Meet and Cross: Navigation by Path Integration in the Desert Ant and the Honeybee,” Journal of Comparative Physiology A, 201(6), 533–546. doi:10.1007/s00359-015-1000-0

Steck, K., Wittlinger, M., and Wolf, H. (2009) “Estimation of Homing Distance in Desert Ants, Cataglyphis fortis, Remains Unaffected by Disturbance of Walking Behaviour,” Journal of Experimental Biology, 212(18), 2893–2901. doi:10.1242/jeb.030403

Tautz, J., Zhang, S., Spaethe, J., Brockmann, A., Si, A., and Srinivasan, M. (2004) “Honeybee Odometry: Performance in Varying Natural Terrain,” PLoS Biology, 2(7), e211. doi:10.1371/journal.pbio.0020211

Tetzlaff, M., and Rey, G. (2009) “Systematicity and Intentional Realism in Honeybee Navigation,” in R. L. Lurz (ed.) The Philosophy of Animal Minds (pp. 72–88), Cambridge: Cambridge University Press.

van Hateren, J. H., Srinivasan, M. V., and Wait, P. B. (1990) “Pattern Recognition in Bees: Orientation Discrimination,” Journal of Comparative Physiology A, 167, 649–654.

Wehner, R. (2009) “The Architecture of the Desert Ant’s Navigational Toolkit,” Myrmecological News, 12, 85–96.

Wehner, R., Boyer, M., Loertscher, F., Sommer, S., and Menzi, U. (2006) “Ant Navigation: One-Way Routes Rather than Maps,” Current Biology, 16(1), 75–79. doi:10.1016/j.cub.2005.11.035

Wehner, R., and Müller, M. (2006) “The Significance of Direct Sunlight and Polarized Skylight in the Ant’s Celestial System of Navigation,” Proceedings of the National Academy of Sciences, 103(33), 12575–12579. doi:10.1073/pnas.0604430103

Wehner, R., and Srinivasan, M. V. (1981) “Searching Behaviour of Desert Ants,” Journal of Comparative Physiology, 142(3), 315–338. doi:10.1007/bf00605445

Whippo, C. W. (2006) “Phototropism: Bending Towards Enlightenment,” The Plant Cell, 18(5), 1110–1119. doi:10.1105/tpc.105.039669

Wiener, J., Shettleworth, S., Bingman, V. P., Cheng, K., Healy, S., Jacobs, L. F., … Newcombe, N. S. (2011) “Animal Navigation: A Synthesis,” in R. Menzel and J. Fischer (eds.) Animal Thinking: Contemporary Issues in Comparative Cognition, Cambridge, MA: MIT Press.

Wintergerst, S., and Ronacher, B. (2012) “Discrimination of Inclined Path Segments by the Desert Ant Cataglyphis fortis,” Journal of Comparative Physiology A, 198(5), 363–373. doi:10.1007/s00359-012-0714-5

Wittgenstein, L. (1953) Philosophical Investigations, New York: Macmillan.

Wittlinger, M., Wehner, R., and Wolf, H. (2006) “The Ant Odometer: Stepping on Stilts and Stumps,” Science, 312(5782), 1965–1967. doi:10.1126/science.1126912

——— (2007) “The Desert Ant Odometer: A Stride Integrator that Accounts for Stride Length and Walking Speed,” Journal of Experimental Biology, 210(2), 198–207. doi:10.1242/jeb.02657

Wohlgemuth, S., Ronacher, B., and Wehner, R. (2001) “Ant Odometry in the Third Dimension,” Nature, 411(6839), 795–798. doi:10.1038/35081069

——— (2002) “Distance Estimation in the Third Dimension in Desert Ants,” Journal of Comparative Physiology A, 188(4), 273–281. doi:10.1007/s00359-002-0301-2

Wolf, H., and Wehner, R. (2000) “Pinpointing Food Sources: Olfactory and Anemotactic Orientation in Desert Ants, Cataglyphis fortis,” Journal of Experimental Biology, 203, 857–868.

——— (2005) “Desert Ants Compensate for Navigation Uncertainty,” Journal of Experimental Biology, 208(22), 4223–4230. doi:10.1242/jeb.01905

Wray, M. K., Klein, B. A., Mattila, H. R., and Seeley, T. D. (2008) “Honeybees Do Not Reject Dances for ‘Implausible’ Locations: Reconsidering the Evidence for Cognitive Maps in Insects,” Animal Behaviour, 76, 261–269. doi:10.1016/j.anbehav.2008.04.005

Wystrach, A., and Graham, P. (2012) “What Can We Learn From Studies of Insect Navigation?” Animal Behaviour, 84(1), 13–20. doi:10.1016/j.anbehav.2012.04.017