Ever since George Romanes came under heavy criticism for his anecdotal approach to animal mind, the capacity to learn has been presented by psychologists as a deflationary alternative to the attribution of higher cognitive capacities. The capacity to learn – i.e., to alter behavior as a function of experience – is ubiquitous in the animal kingdom, leading theorists from Conwy Lloyd Morgan to Edward Thorndike to B. F. Skinner and beyond to consider learning explanations of complex behavior to be preferable to cognitive alternatives, which posit inner representational states of unknown structure and provenance.
Learning, however, comes in many forms. From simple habituation or sensitization (respectively, a decrease or increase in responsiveness to a repeated stimulus) through associative learning (whether of the classical/Pavlovian or instrumental/Skinnerian variety) to more elaborate forms of discriminative learning (Rescorla and Wagner 1972), observational learning (Galef and Laland 2005), and convention learning (Thompson-Schill et al. 2009), the cognitive demands on the learner are quite different (see also Chapter 34 by Rachael Brown and Chapter 39 by Cameron Buckner in this volume). Gallistel (1990) emphasizes the cognitive dimensions of even the simpler forms of animal learning, leading him to adopt an explicitly anti-associationist, cognitivist approach that emphasizes representations and computation. But, as Buckner (2011) argues, the distinction between cognition and “mere association” may, in one important sense of these terms, present a false dichotomy insofar as higher cognitive functions rely upon associative mechanisms.
Despite the enormous growth of interest in animal minds among philosophers, there remains relatively little awareness of the complexities of animal learning theory. Partly this is a function of the area being rife with jargon (see the previous paragraph!) and correspondingly difficult to penetrate, and partly it is a function of the philosophical myth that animal learning theory hit a dead end somewhere in the late 1950s, with B. F. Skinner’s work as its apotheosis and Chomsky’s supposedly decisive refutation of Skinner launching cognitive science as we know it. Strict behaviorism may have been a dead end, but the learning theory that went with it is far from irrelevant to current cognitive science.
A goal of this chapter is to unpack some of the jargon and thereby familiarize the reader with some of the main concepts and developments in animal learning theory, and their application to questions about animal minds. Along the way, I will also touch upon the relevance of learning theory to human cognition. The goal, however, is not to provide a systematic or comprehensive review of the literature. (For that, see a textbook such as Domjan and Grau 2014.) Rather, my goal is to produce something akin to a New Yorker style map of the landscape as I see it, highlighting some of the features of learning that have guided me in my thinking about certain aspects of animal cognition. I do this in the hope that such a map will prove useful to others, but also that it will encourage them to produce alternative maps. To be clear, however, the goal here is not to converge on the one, true representation of animal learning. Just as no single flat map projection of the planet is fully adequate to its spheroid reality, even textbook treatments of animal learning selectively distort the complexities of animal learning to fit the purposes of those who write them and teach from them. While the psychologists’ view of learning is definitely useful to philosophers interested in animal minds, it may not satisfy all our needs. For a similar reason, I am not starting by rehearsing a textbook definition of learning. Such definitions serve important pedagogical purposes, but do not bear the metaphysical weight that philosophers are eager to place upon them.
To illustrate the view from where I sit, and to begin unpacking some of the jargon, let me start with a distinction between two forms of classical conditioning – delay conditioning and trace conditioning. Classical conditioning is the form of associative learning linked to Pavlov’s discovery that dogs, who naturally salivate in the presence of food (the unconditioned stimulus, or US), would come to salivate in response to a different stimulus, such as the ring of a bell (the conditioned stimulus, or CS), given a certain amount of experience with the pairing of the CS and the US. This much is familiar to almost anyone with a college education, and to everyone with a basic course in psychology, although frequent repetition of “CS” and “US” may already prove challenging. Less familiar is the distinction between Pavlovian delay conditioning and Pavlovian trace conditioning. In Pavlovian training, the CS (e.g., the auditory stimulus emanating from the bell to which the dog’s response is to be conditioned) is introduced just before the US (e.g., the food, to which the dog already has a prior response of salivation). In standard delay conditioning, the CS remains present when the US is introduced (i.e., the bell can still be heard ringing when the food arrives). This method of conditioning with CS and US co-present is effective in the sea slug Aplysia, using a light touch to the animal’s siphon as the CS and electric shock to the tail as the US (Hawkins et al. 1983), leading to a conditioned siphon-withdrawal response. Delay conditioning is just as effective a method in humans and rabbits, using an auditory tone as the CS and a puff of air directed at the eye as the US, leading to an eye-blink response (Clark and Squire 1998).
The acquisition of the delay-conditioned response is effectively 100%, whether the subjects are sea slugs, rabbits, or humans (although, of course, the conditionable responses are limited by the fact that sea slugs don’t have eyes to blink, and neither rabbits nor humans have siphons to withdraw). In contrast to delay conditioning, trace conditioning involves terminating the CS before the US is delivered. For instance, a tone is heard, then silence (spanning the trace interval), then a puff of air is delivered to the eye of a subject. Under these conditions, subjects are much less likely to acquire the response (around 50% in the non-amnesiac, normal human subjects tested by Clark & Squire 1998).
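Since the two procedures differ only in the timing of CS offset relative to US onset, the contrast is easy to make concrete. The sketch below builds the event schedule for one trial of each procedure; the durations are purely illustrative values, not the parameters used in any of the studies cited here:

```python
def delay_trial(cs_duration=1.0, us_onset=0.75):
    """Delay conditioning: the US arrives while the CS is still on
    (CS offset coincides with, or follows, US onset)."""
    return [("CS on", 0.0), ("US on", us_onset), ("CS off", cs_duration)]

def trace_trial(cs_duration=0.25, trace_interval=0.5):
    """Trace conditioning: the CS ends first, leaving a stimulus-free
    'trace interval' before the US is delivered."""
    us_onset = cs_duration + trace_interval
    return [("CS on", 0.0), ("CS off", cs_duration), ("US on", us_onset)]

for name, trial in [("delay", delay_trial()), ("trace", trace_trial())]:
    print(name, trial)
```

The only structural difference is the order of the “CS off” and “US on” events – a minor tweak of the procedure, as noted below, yet one that turns out to recruit quite different neural machinery.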
Why should philosophers of animal mind and cognition care about this seemingly arcane distinction within Pavlovian learning? After all, Pavlovian classical conditioning represents the epitome of a “simple” learning mechanism, known for nearly a century, putatively involving very little cognitive capacity, capable of occurring completely without conscious awareness, and found even in organisms such as Aplysia with fewer than 20,000 neurons. But here is where the distinction makes a true difference. Despite a mistaken report to the contrary (Bekinschtein et al. 2011), trace conditioning has not been shown in Aplysia. Furthermore, Clark and Squire (1998) provided evidence that for their human subjects, who did not know they were in a conditioning experiment, there was no correlation between delay conditioning of the eye blink response and awareness of the relationship between the stimuli (the sound and the puff of air) as revealed by questioning after the procedure – all were conditioned, but only some of them could answer the questions about the relationship between the stimuli – whereas there was a perfect correlation between acquiring the response during trace conditioning and awareness of the relationship between the stimuli as revealed by subsequent questioning. Delay conditioning can be automatic and unconscious. Trace conditioning perhaps not. Furthermore, neuroscientists have shown that the neural mechanisms underlying delay and trace conditioning are independent and quite different. These differences, as I will go on to explain, are significant to our understanding of the cognitive architecture involved, and have implications for issues that philosophers of animal cognition care about.
Before going into more detail about the neural mechanisms, it is worth pausing over what this case of trace conditioning reveals: a distinction between two important dimensions along which learning may be compared. Psychology textbooks are typically organized along a methodological dimension: whether the experimental setup involves single-stimulus learning (habituation, sensitization) or whether the setup involves an association of some kind; whether the association is between two stimuli (classical conditioning), or whether the experiment is set up to allow the animal to learn to associate its own actions with outcomes (instrumental, operant); whether or not the animal is pre-exposed to the stimuli without any associated reward (latent learning). Along this dimension, delay conditioning and trace conditioning are very similar, differing only in the offset of the CS in relation to the onset of the US. Psychologists have some practical and historical reasons for emphasizing the methodological dimension when teaching students about learning: if you can run a delay conditioning experiment, for example, it is a minor tweak of the procedure to run a trace conditioning experiment. But from a mechanistic perspective, there is no guarantee that similarity along the methodological dimension should correspond to similarity of mechanism, as investigation of the mechanisms of delay conditioning and trace conditioning has shown.
It is not my intention to provide a comprehensive review of the mechanisms – the relevant neuroscience is, after all, rapidly developing. Rather, my goal is to whet philosophers’ appetite for attending to the details. In the case of trace conditioning, an intact hippocampus is widely believed to be a key component of the system. Solomon et al. (1986) established that rabbits with hippocampal lesions were not impaired with respect to delay conditioning, but could not be conditioned with a trace conditioning procedure. Clark and Squire (1998) described a similar pattern of results with human amnesiac patients who had suffered hippocampal damage. It is tempting to speculate about the role of the hippocampus, so I will! We know from many studies that the hippocampus plays several roles in processes for which temporal and spatial sequencing is important, as well as in autobiographical memory. Trace conditioning requires a trace of the post-offset CS to be sustained long enough for the onset of the US to be associated with it. Such buffering through the trace interval must be inherently “open” insofar as the organism does not know in advance which (if any) significant events might follow, although experience may support anticipation of some of them. A sound that has just ceased may signal several different things (or signal nothing at all). But the brain cannot keep such a buffer filled with potentially relevant stimuli forever: active storage is limited, and long-range associations are perhaps less likely to be of critical importance to individual biological success (although the particular ecological application matters; see the discussion of trace conditioning in halibut, below).
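The buffering idea just sketched can be caricatured in a toy model. Everything here is a labeled assumption for illustration only: the exponential form of the decay, the time constant `tau`, and the threshold are hypothetical choices, not empirical claims about any species. The point is simply that a limited, decaying buffer predicts that associability across the trace interval falls off with the length of the gap, and that species could differ in how long their buffers persist (a sit-and-wait predator might, in effect, have a longer `tau`):

```python
import math

def trace_strength(t_since_cs_offset, tau=0.6):
    """Hypothetical exponentially decaying memory trace of a CS after
    its offset; tau is an illustrative time constant, not an estimate."""
    return math.exp(-t_since_cs_offset / tau)

def can_associate(trace_interval, threshold=0.2, tau=0.6):
    """The US can be linked to the CS only if the buffered trace is
    still above threshold when the US arrives."""
    return trace_strength(trace_interval, tau) >= threshold

print(can_associate(0.5))  # True: short gap, the trace survives
print(can_associate(3.0))  # False: long gap, the trace has faded
```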
We have, then, in trace conditioning a manifestation of something akin to William James’ (1890) conception of the stream (or waves) of consciousness constituting the “specious present.” And while we cannot prove that the rabbits that were successfully conditioned using the trace procedure in experiments mentioned by Clark and Squire (1998) were consciously aware of the tone preceding the puff of air in time, we can be fairly certain that they have the mechanisms which support such experiences in human beings.
That much is admittedly and deliberately speculative, but its application to questions about animal minds and consciousness is clear. Woe betide those who wade into these areas relying only on the testimony of others, however. Mistakes have happened, such as in the aforementioned paper by Bekinschtein and colleagues. Citing Clark and Squire and a number of other relevant studies, they deliberately link trace conditioning to consciousness, making connections between trace conditioning and conscious experience of events in time that are similar in spirit to those in my previous paragraph, as well as bringing in other relevant considerations about the role of attention in learning and consciousness. Their goal of using trace conditioning to answer questions about consciousness in human patients who are in a persistent vegetative state (PVS), a state of partial arousal that is above a coma but below fully functional awareness, is interesting and suggestive, especially in combination with Bekinschtein’s earlier report of trace conditioning in PVS patients (Bekinschtein et al. 2009). They take a misstep, however, by regarding Aplysia as presenting a challenge to their view because (wrongly, it turns out) they believe Glanzman (1995) to have reported trace conditioning in these sea slugs, which they presume to be unconscious. The reasons for this misstep are unclear. Glanzman’s article is a review of the cellular basis of classical conditioning in Aplysia (“it’s less simple than you think,” he declares in the article’s subtitle). However, nowhere in his article does Glanzman use the term “trace conditioning” (nor “delay conditioning” for that matter), and all of the studies he reviews in the literature on Aplysia use a standard delay conditioning approach.
In places, Bekinschtein and colleagues seem to conflate classical and trace conditioning (e.g., the first sentence of the abstract begins, “Classical (trace) conditioning …”), although they are also very clear about the distinction between trace and delay conditioning procedures (e.g., their Figure 1 on page 1344). The point here is just that the problem of relating learning to higher cognition and consciousness is hard enough without the invention of spurious difficulties.
None of this is to say that a trace conditioning procedure might not someday be found to be effective in Aplysia, although here we must be careful to separate some other dimensions of comparison. By saying that rabbits and humans are capable of being conditioned by a trace conditioning procedure, we mean that these organisms can associate a wide (but not unlimited) range of CSs with a similarly wide range of USs through a trace interval (with the exact profile of that interval varying inter- and perhaps intra-specifically). Without a hippocampus, or something functionally equivalent, any capacity for trace conditioning will be much more limited. The mechanism matters to the inferences we want to make, although attention to the mechanisms must go hand-in-hand with careful analysis of the exact learning capacities. At the cutting edge of such comparisons lie questions about the capacities of various fish species. Some capacity for trace conditioning has been established in certain teleost fish (e.g., cod – Nilsson et al. 2008; trout – Nordgreen et al. 2010; halibut – Nilsson et al. 2010; see Allen 2013 for a review) and a species of shark (Guttridge and Brown 2014). Interestingly, trace conditioning in halibut was not recognized experimentally until the trace interval was extended beyond that used for cod and trout, in line with halibut feeding ecology as sit-and-wait predators, rather than being pursuit predators like trout and cod (Nilsson et al. 2010). However, the full range of trace conditioning and the corresponding neural mechanisms in fish remain unknown (Vargas et al. 2009), and are likely to remain so until scientists understand more about the functional neuroanatomy of fish brains. Trinh et al. (2016) suggest that part of the fish pallium may be functionally analogous to the mammalian hippocampus, but more investigation is needed.
Trace conditioning provides but one example of how appreciation for distinctions among different forms of learning that are usually not distinguished carefully by those outside the field of animal learning can inform theories in comparative animal cognition. Another example is provided by a distinction I have also written about before (Allen et al. 2009), borrowed from Jim Grau (Grau and Joynes 2005). It is the distinction between instrumental learning and operant conditioning, a distinction which is often not made at all, even in psychology textbooks. Where it is made, it is often treated as a methodological distinction only. In either case, instrumental or operant conditioning concerns the ability of animals to learn to associate a behavioral response with an outcome. Thus, for example, Edward Thorndike pioneered the use of puzzle boxes – contraptions from which animals could escape to a food reward if they discovered the correct sequence of actions – and B. F. Skinner invented the Skinner box – a space where animals could be trained via reward or punishment to emit behaviors spontaneously or in response to specific stimuli. When instrumental and operant procedures are distinguished by psychologists, it is sometimes on the basis that Thorndike’s (instrumental) procedure specifies exactly one response to obtain the reward, whereas Skinner’s (operant) procedure allows the animal to emit any available response (such as pressing a lever or pecking at a key) with any frequency. Most psychologists, however, regard “instrumental” and “operant” learning as synonymous.
From their more mechanistic perspective, Grau and Joynes distinguish between basic instrumental response-outcome learning and more sophisticated operant learning. Both forms of learning satisfy methodological criteria for response-outcome conditioning, but only operant learning meets additional functional criteria concerning the relatively unconstrained nature of behavioral response and effective reinforcers, akin to what I referred to above as the “open” nature of trace conditioning. Basic instrumental learning is highly constrained with respect to which responses can be associated with which outcomes. In contrast, operant learning allows a variety of reinforcers – e.g., food, water, access to a mate, access to recreation, and even money (or tokens that can be used for exchange) – to shape a variety of behaviors. Research from Grau’s lab (reviewed by Allen et al. 2009) establishes that the more constrained form of instrumental learning can be found even in the spinal cord. The more advanced form of operant learning seems to require brain circuitry whose functionality is not replicated in the mammalian spinal cord; but this is compatible with some brain circuitry being just as constrained in its ability to associate responses and outcomes.
Importantly for thinking about the relevance of learning to higher cognition, in cases of full operant conditioning, the behaviors and rewards are relatively fungible and goal-oriented. For instance, Rumbaugh and Washburn (2003) describe work by Rumbaugh and colleagues showing that monkeys trained on a computer-mediated task, using a joystick they could only reach with their feet, switched to using their hands and were more effective at the task when the equipment was re-arranged. This kind of goal-directed flexibility in operant behavior (corresponding to what Rumbaugh et al. 1996 call an “emergent”) provides a useful dimension along which the capacities of different species and different individuals may be compared. It requires associative cortex or its functional equivalent (although much more would need to be said for a complete account), involving mechanisms quite different from those that are sufficient for more basic forms of instrumental learning.
The associative learning phenomena described originally within the paradigm of behavioristic psychology remain highly relevant to comparative cognition, but the early models of such phenomena were too limited, in all sorts of ways. The shift to a more integrated view of associative learning and cognition has roots in early work by Tolman on latent learning and cognitive maps in rats (1948). But the challenge to early ideas about animal learning was accelerated by the discovery of various learning phenomena that were hard for strict Pavlovians and Skinnerians to explain. These phenomena include latent inhibition (the longer time taken to learn an association after pre-exposure to a stimulus; Lubow and Moore 1959), the Garcia effect (one-trial learning to avoid food after administration of an emetic (a vomiting inducer) with a long delay; Garcia et al. 1966), and blocking (the inability to learn about a predictive stimulus when it is presented in the context of a previously learned association; Kamin 1969).
In a sense, Chomsky (1959) was correct that the theories of learning espoused by Skinner were not up to the task of accounting for all behavior. But Chomsky (1967) was wrong to think that Skinner’s views represented the pinnacle of associationist learning theory. The phenomena described just above have led and continue to lead modelers to develop ever more sophisticated theories, applying ideas about information processing to the elaboration of representations of the world. For example, to explain the blocking effect, Rescorla and Wagner (1972) described a model for classical conditioning based upon error correction, in which learning is proportional to the amount of “surprise” generated by an outcome. The notion of “surprise” here can be cashed out in information-theoretic terms concerning the likelihood of the outcome given previous experience. But unlike strict behaviorism, which eschews the idea of cognitive or mental representations, the Rescorla-Wagner model provides a method for discriminating and representing the most predictive cues by a process of cue competition, rather than merely associating co-occurring stimuli. The original Rescorla-Wagner model has its own limitations, and has been subsequently elaborated in various ways (e.g., Van Hamme and Wasserman 1994). Nevertheless, the naive discriminative learning capability of the basic model has promising applications, even to human language learning (Baayen et al. 2016).
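To make the error-correction idea concrete, here is a minimal sketch of the Rescorla-Wagner update applied to a Kamin-style blocking design. The learning-rate parameters (`alpha`, `beta`), the asymptote `lam`, and the trial counts are illustrative values of my choosing, not ones drawn from the original paper:

```python
def rescorla_wagner(trials, alpha=0.3, beta=1.0, lam=1.0):
    """Update associative strengths V by the Rescorla-Wagner rule:
    for each cue present on a trial, dV = alpha * beta * (lam - V_total),
    where lam is the asymptote when the US occurs and 0 otherwise.
    Learning is driven by 'surprise': the gap between what the cues
    jointly predict (V_total) and what actually happens."""
    V = {}
    for cues, us in trials:
        v_total = sum(V.get(c, 0.0) for c in cues)
        error = (lam if us else 0.0) - v_total  # the "surprise" term
        for c in cues:
            V[c] = V.get(c, 0.0) + alpha * beta * error
    return V

# Blocking: pretrain cue A alone with the US, then train the compound AB.
trials = [(("A",), True)] * 20 + [(("A", "B"), True)] * 20
V = rescorla_wagner(trials)
print(V)  # A has absorbed nearly all the predictive value; B learns little
```

Because A already predicts the US almost perfectly by the time B is introduced, the compound trials generate almost no surprise, so B acquires almost no associative strength – cue competition, not mere co-occurrence, determines what gets represented.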
The shift from categorizing learning by the methodological dimensions of training procedures used in laboratory preparations, to models of learning and description of the mechanisms required to support various kinds of behavioral flexibility, holds potential for a better understanding of the evolution of learning and cognition. Eric Kandel and colleagues (Castellucci et al. 1970; Hawkins et al. 1983) already regarded classical conditioning in Aplysia as an elaboration of single-stimulus sensitization. Grau and Joynes (2005, p. 4) advocate for what they call a “neuro-functional” approach, pointing out that, “a single mechanism may be called upon to solve a variety of environmental challenges” and suggesting that a single mechanism may be implicated in different cases of learning that are categorized quite differently using methodological criteria. Methodological considerations are important, but now, more than a century since Thorndike (1911) invented the experimental approach to animal learning, we are in a better position than ever to recognize that detailed attention to all aspects of learning – methodology, information processing, neural mechanisms, and ecological context – provides us with the best chance of understanding how these diverse capacities came to be distributed across the animal kingdom.
Both learning theory and the comparative neuroscience of learning are currently undergoing rapid development, which makes tracking them from the outside challenging. I do not pretend to know it all, but I do know that philosophers are in a privileged position: if we are willing to rise to the challenge, we are able to sample a wider range of literature than most practicing scientists, and we can connect the dots among diverse findings. These developments can be used to enrich our understanding of the diversity of learning capacities and mechanisms that make up the species of minds that have evolved on this planet.
In addition to the textbook by Domjan and Grau cited in the main text, S. Shettleworth’s textbook Cognition, Evolution, and Behavior (2nd edition, New York: Oxford University Press, 2009) also covers animal learning in detail. For more on the alleged contrast between cognitive and associative approaches, see contributions to S. Hurley and M. Nudds (eds.), Rational Animals? (Oxford: Oxford University Press, 2006), especially chapters by N. Clayton et al., Papineau & Heyes, and myself. My contribution to the volume was partly in reaction to T. R. Zentall’s (2001) “The case for a cognitive approach to animal learning and behavior” (Behavioural Processes 54, 65–78) in which he argues that the primary benefit of cognitive approaches to animal learning is to inspire better associative accounts.
Allen, C. (2013) “Fish cognition and consciousness,” Journal of Agricultural and Environmental Ethics 26 25–39.
Allen, C., Grau, J., and Meagher, M. (2009) “The lower bounds of cognition: What do spinal cords reveal?” in J. Bickle (ed.), The Oxford Handbook of Philosophy of Neuroscience, New York: Oxford University Press, pp. 129–142.
Baayen, R. H., Shaoul, C., Willits, J., and Ramscar, M. (2016) “Comprehension without segmentation: A proof of concept with naive discriminative learning,” Language, Cognition and Neuroscience 31 (1) 106–128.
Bekinschtein, T. A., Peeters, M., Shalom, D., and Sigman, M. (2011) “Sea slugs, subliminal pictures, and vegetative state patients: Boundaries of consciousness in classical conditioning,” Frontiers in Psychology 2 337.
Bekinschtein, T. A., Shalom, D. E., Forcato, C., Herrera, M., Coleman, M. R., Manes, F. F., and Sigman, M. (2009) “Classical conditioning in the vegetative and minimally conscious state,” Nature Neuroscience 12 (10) 1343–1349.
Buckner, C. (2011) “Two approaches to the distinction between cognition and ‘Mere Association’,” International Journal for Comparative Psychology 24(1) 1–35.
Castellucci, V., Pinsker, H., Kupfermann, I., & Kandel, E. R. (1970) “Neuronal mechanisms of habituation and dishabituation of the gill-withdrawal reflex in Aplysia,” Science 167 (3926) 1745–1748.
Chomsky, N. (1959) “A review of B. F. Skinner’s verbal behavior,” Language 35 (1) 26–58.
Chomsky, N. (1967) “Preface to the reprint of Chomsky 1959,” in L. A. Jakobovits and M. S. Miron (eds.), Readings in the Psychology of Language, Princeton, NJ: Prentice-Hall, Inc., pp. 142–143.
Clark, R. E., and Squire, L. R. (1998) “Classical conditioning and brain systems: The role of awareness,” Science 280 77–81.
Domjan, M. P., and Grau, J. (2014) The Principles of Learning and Behavior (7th Edition), Belmont, CA: Wadsworth/Cengage Learning.
Galef, B. G., and Laland, K. N. (2005) “Social learning in animals: Empirical studies and theoretical models,” BioScience 55 489–499.
Gallistel, C. R. (1990) The Organization of Learning, Cambridge, MA: MIT Press.
Garcia, J., Ervin, F. R., and Koelling, R. A. (1966) “Learning with prolonged delay of reinforcement,” Psychonomic Science 5 121–122.
Glanzman, D. L. (1995) “The cellular basis of classical conditioning in Aplysia californica – it’s less simple than you think,” Trends in Neurosciences 18 (1) 30–6.
Grau, J. W., and Joynes, R. L. (2005) “A neural-functionalist approach to learning,” International Journal of Comparative Psychology 18 1–22.
Guttridge, T. L., and Brown, C. (2014) “Learning and memory in the Port Jackson shark, Heterodontus portusjacksoni,” Animal Cognition 17 (2) 415–425.
Hawkins, R. D., Abrams, T. W., Carew, T. J., and Kandel, E. R. (1983) “A cellular mechanism of classical conditioning in Aplysia: Activity-dependent amplification of presynaptic facilitation,” Science 219 (4583) 400–405.
James, W. (1890) The Principles of Psychology, New York: Dover.
Kamin, L. J. (1969) “Predictability, surprise, attention and conditioning,” in B. A. Campbell and R. M. Church (eds.), Punishment and Aversive Behavior, New York: Appleton-Century-Crofts, pp. 279–296.
Lubow, R. E., and Moore, A. U. (1959) “Latent inhibition: The effect of non-reinforced preexposure to the conditioned stimulus,” Journal of Comparative and Physiological Psychology 52 415–419.
Nilsson, J., Kristiansen, T. S., Fosseidengen, J. E., Fernö, A., and van den Bos, R. (2008) “Learning in cod (Gadus morhua): Long trace interval retention,” Animal Cognition 11 215–222.
Nilsson, J., Kristiansen, T. S., Fosseidengen, J. E., Stien, L. H., Ferno, A., and van den Bos, R. (2010) “Learning and anticipatory behaviour in a ‘sit-and-wait’ predator: The Atlantic halibut,” Behavioural Processes 83 257–266.
Nordgreen, J., Janczak, A. M., Hovland, A. L., Ranheim, B., and Horsberg, T. E. (2010) “Trace classical conditioning in rainbow trout (Oncorhynchus mykiss): What do they learn?” Animal Cognition 13 303–309.
Rescorla, R. A., and Wagner, A. R. (1972) “A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement,” in A. H. Black and W. F. Prokasy (eds.), Classical Conditioning II, New York: Appleton-Century-Crofts, pp. 64–99.
Rumbaugh, D. M., and Washburn, D. A. (2003) Intelligence of Apes and Other Rational Beings, New Haven: Yale University Press.
Rumbaugh, D. M., Washburn, D. A., and Hillix, W. A. (1996) “Respondents, operants, and emergents: Toward an integrated perspective on behavior,” in K. Pribram and J. King (eds), Learning as Self-Organizing Process, Hillsdale, NJ: Erlbaum, pp. 57–73.
Solomon, P. R., Vander Schaaf, E. R., Thompson, R. F., and Weisz, D. J. (1986) “Hippocampus and trace conditioning of the rabbit’s classically conditioned nictitating membrane response,” Behavioral Neuroscience 100 729–744.
Thompson-Schill, S. L., Ramscar, M., and Chrysikou, E. G. (2009) “Cognition without control: When a little frontal lobe goes a long way,” Current Directions in Psychological Science 18(5) 259–263.
Thorndike, E. L. (1911) Animal Intelligence: An Experimental Study of Associative Processes in Animals, New York: Macmillan.
Tolman, E. C. (1948) “Cognitive maps in rats and men,” Psychological Review 55(4) 189–208.
Trinh, A-T., Harvey-Girard, E., Teixeira, F., and Maler, L. (2016) “Cryptic laminar and columnar organization in the dorsolateral pallium of a weakly electric fish,” Journal of Comparative Neurology 524 408–428.
Van Hamme, L. J., and Wasserman, E. A. (1994) “Cue competition in causality judgements: The role of nonpresentation of compound stimulus elements,” Learning and Motivation 25 127–151.
Vargas, J. P., López, J. C., and Portavella, M. (2009) “What are the functions of fish brain pallium?” Brain Research Bulletin 79 436–440.