2 A TALE OF TWO SCHOOLS

Do Dogs Desire?

Given the prominent role that jackdaws and little silvery fish known as three-spined sticklebacks—my favorite childhood animals—played in the early years of ethology, the discipline was an easy sell to me. I learned about it when, as a biology student, I heard a professor explain the zigzag dance of the stickleback. I was floored: not by what these little fish did but by how seriously science took what they did. It was the first time I realized that what I liked doing best—watching animals behave—could be a profession. As a boy, I had spent hours observing self-caught aquatic life that I kept in buckets and tanks in our backyard. The high point had been breeding sticklebacks and releasing the young back into the ditch from which their parents had come.

Ethology is the biological study of animal behavior that arose in continental Europe right before and after World War II. It reached the English-speaking world when one of its founders, Niko Tinbergen, moved across the Channel. A Dutch zoologist, Tinbergen started out in Leiden and accepted a position in Oxford in 1949. He described the male stickleback’s zigzag dance in great detail, explaining how it draws the female to the nest where the male fertilizes her eggs. The male then chases her off and protects the eggs, fanning and aerating them, until they hatch. I had seen it all with my own eyes in an abandoned aquarium—its luxurious algae growth was exactly what the fish needed—including the stunning transformation of silvery males into bright red and blue show-offs. Tinbergen had noticed that males in tanks on the windowsill of his lab in Leiden would get agitated every time a red mail truck drove by in the street below. Using fish models to trigger courtship and aggression, he confirmed the critical role of a red signal.

Clearly, ethology was the direction I wanted to go in, but before pursuing this goal, I was briefly diverted by its rival discipline. I worked in the lab of a psychology professor trained in the behaviorist tradition that dominated comparative psychology for most of the last century. This school was chiefly American but evidently had reached my university in the Netherlands. I still remember this professor’s classes, in which he made fun of anyone who claimed to know what animals “want,” “like,” or “feel,” carefully neutralizing such terminology with quotation marks. If your dog drops a tennis ball in front of you and looks up at you with a wagging tail, do you think she wants to play? How naïve! Who says dogs have desires and intentions? Her behavior is the product of the law of effect: she must have been rewarded for it in the past. The dog’s mind, if such a thing even exists, remains a black box.

Its focus on nothing but behavior is what gave behaviorism its name, but I had trouble with the idea that animal behavior could be reduced to a history of incentives. It presented animals as passive, whereas I view them as seeking, wanting, and striving. True, their behavior changes based on its consequences, but they never act randomly or accidentally to begin with. Let’s take the dog and her ball. Throw a ball at a puppy, and she will go after it like an eager predator. The more she learns about prey and their escape tactics—or about you and your fake throws—the better a hunter or fetcher she will become. But still, at the root of everything is her immense enthusiasm for the pursuit, which takes her through shrubs, into water, and sometimes through glass doors. This enthusiasm manifests itself before any skill development.

Now, compare this behavior with that of your pet rabbit. It doesn’t matter how many balls you throw at him; none of the same learning will take place. Absent a hunting instinct, what is there to acquire? Even if you were to offer your rabbit a juicy carrot for every retrieved ball, you’d be in for a long, tedious training program that would never generate the excitement for small moving objects seen in cats and dogs. Behaviorists totally overlooked these natural proclivities, forgetting that by flapping their wings, digging holes, manipulating sticks, gnawing wood, climbing trees, and so on, every species sets up its own learning opportunities. Many animals are driven to learn the things they need to know or do, the way kid goats practice head butts or human toddlers have an insuppressible urge to stand up and walk. This holds even for animals in a sterile box. It is no accident that rats are trained to press bars with their paws, pigeons to peck keys with their beaks, and cats to rub their flanks against a latch. Operant conditioning tends to reinforce what is already there. Instead of being the omnipotent creator of behavior, it is its humble servant.

One of the first illustrations came from the work on kittiwakes by Esther Cullen, a postdoctoral student of Tinbergen. Kittiwakes are seabirds of the gull family; they differ from other gulls in that they nest on narrow cliff ledges, out of reach of predators. These birds rarely give alarm calls and do not vigorously defend their nests—they don’t need to. But what is most intriguing is that kittiwakes fail to recognize their young. Ground-nesting gulls, in which the young move around after hatching, recognize their offspring within days and do not hesitate to kick out strange ones that scientists place in their nests. Kittiwakes, on the other hand, can’t tell the difference between their own and strange young, treating the latter like their own. Not that they need to worry about this situation: fledglings normally stay put at the parental nest. This is, of course, precisely why biologists think kittiwakes never evolved individual recognition.1

For the behaviorist, though, such findings are thoroughly puzzling. Two similar birds differing so starkly in what they learn makes no sense, because learning is supposedly universal. Behaviorism ignores ecology and has little room for learning that is adapted to the specific needs of each organism. It has even less room for an absence of learning, as in the kittiwake, or other biological variation, such as differences between the sexes. In some species, for example, males roam a large area in search of mates, whereas females occupy smaller home ranges. Under such conditions, males are expected to have superior spatial abilities. They need to remember when and where they ran into a member of the opposite sex. Giant panda males travel far and wide through the wet bamboo forest, which is uniformly green in all directions. It is crucial for them to be at the right place at the right time given that females ovulate only once per year and are receptive for just a couple of days—which is why zoos have such trouble breeding this magnificent bear. That males have better spatial abilities than females was confirmed when Bonnie Perdue, an American psychologist, tested pandas at the Chengdu Research Base of Giant Panda Breeding in China. She did so by spreading out food boxes over an outdoor area. Panda males were much better than females at remembering which boxes had recently been baited. In contrast, when the Asian small-clawed otter, a member of the same bear-like group of carnivores (the Arctoidea), was tested on a similar task, both sexes performed the same. Since this otter is monogamous, males and females occupy the same territory. Similarly, males of sexually promiscuous rodent species navigate mazes more easily than females, whereas monogamous rodents show no sex difference.2

If learning talents are a product of natural history and mating strategies, the whole notion of universality begins to fall apart. We can expect huge variation. Evidence for inborn learning specializations has been steadily mounting.3 There are many different types, from the way ducklings imprint on the first moving object they see—whether it is their mother or a bearded zoologist—to the song learning of birds and whales and the way primates copy one another’s tool use. The more variation we discover, the shakier the claim becomes that all learning is essentially the same.4

Yet during my student days, behaviorism still ruled supreme, at least in psychology. Luckily for me, the professor’s pipe-smoking associate, Paul Timmermans, regularly took me aside to induce some much-needed reflection on the indoctrination I was being subjected to. We worked with two young chimpanzees who offered me my first contact with primates apart from my own species. It was love at first sight. I had never met animals that so clearly possessed a mind of their own. Between puffs of smoke, Paul would ask rhetorically, with a twinkle in his eyes, “Do you really think chimps lack emotions?” He would do so just after the apes had thrown a shrieking temper tantrum for not getting their way, or laughed their hoarse chuckles during roughhousing. Paul would also mischievously ask my opinion about other taboo topics, without necessarily saying that the professor was wrong. One night the chimps escaped and ran through the building, only to return to their cage, carefully closing its door behind them before going to sleep. In the morning, we found them curled up in their straw nests and would not have suspected a thing had it not been for the smelly droppings discovered in the hallway by a secretary. “Is it possible that apes think ahead?” Paul asked when I wondered why the apes had closed their own door. How to deal with such crafty, volatile characters without assuming intentions and emotions?

To drive this point home more bluntly, imagine that you wish to enter a testing room with chimpanzees, as I did every day. I would suggest that rather than rely on some behavioral coding scheme that denies intentionality, you pay close attention to their moods and emotions, reading them the way you would any person’s, and beware of their tricks. Otherwise, you might end up like one of my fellow students. Despite the advice we gave him on how to dress for the occasion, he came to his first encounter in a suit and tie. He was sure he could handle such relatively small animals and mentioned how good he was with dogs. The two chimps were mere juveniles, only four and five years old at the time. But of course, they were already stronger than any grown man, and ten times more cunning than a dog. I still remember the student staggering out of the testing room, having trouble shedding both apes clinging to his legs. His jacket was in tatters, with both sleeves torn off. He was fortunate that the apes never discovered the choking function of his tie.

One thing I learned in this lab was that superior intelligence doesn’t imply better test outcomes. We presented both rhesus monkeys and chimpanzees with a simple task, known as haptic (touch) discrimination. They were to stick their hand through a hole to feel the difference between two shapes and pick the correct one. Our goal was to do hundreds of trials per session, but whereas this worked well with the monkeys, the chimps had other ideas. They would do fine on the first dozen trials, showing that the discrimination posed no problem, but then their attention would wander. They’d thrust their hands farther so as to reach me, pulling at my clothes, making laughing faces, banging on the window that separated us, and trying to engage me in play. Jumping up and down, they’d even gesture to the door, as if I didn’t know how to get to their side. Sometimes, unprofessionally, I would give in and have fun with them. Needless to say, the apes’ performance on the task was well below that of the monkeys, not due to an intellectual deficit but because they were bored out of their minds.

The task was just not up to their intellectual level.

The Hunger Games

Are we open-minded enough to assume that other species have a mental life? Are we creative enough to investigate it? Can we tease apart the roles of attention, motivation, and cognition? Those three are involved in everything animals do; hence poor performance can be explained by any one of them. With the above two playful apes, I opted for tedium to explain their underperformance, but how to be sure? It takes human ingenuity indeed to know how smart an animal is.

It also takes respect. If we test animals under duress, what can we expect? Would anyone test the memory of human children by throwing them into a swimming pool to see if they remember where to get out? Yet the Morris Water Maze is a standard memory test used every day in hundreds of laboratories that make rats frantically swim in a water tank with high walls until they come upon a submerged platform that saves them. In subsequent trials, the rats need to remember the platform’s location. There is also the Columbia Obstruction Method, in which animals have to cross an electrified grid after varying periods of deprivation, so researchers can see if their drive to reach food or a mate (or for mother rats, their pups) exceeds the fear of a painful shock. Stress is, in fact, a major testing tool. Many labs keep their animals at 85 percent of typical body weight to ensure food motivation. We have woefully little data on how hunger affects their cognition, although I do remember a paper entitled “Too Hungry to Learn?” about food-deprived chickens that were not particularly good at noticing the finer distinctions of a maze task.5

The assumption that an empty stomach improves learning is curious. Think about your own life: absorbing the layout of a city, getting to know new friends, learning to play the piano or do your job. Does food play much of a role? No one has ever proposed permanent food deprivation for university students. Why would it be any different for animals? Harry Harlow, a well-known American primatologist, was an early critic of the hunger reduction model. He argued that intelligent animals learn mostly through curiosity and free exploration, both of which are likely killed by a narrow fixation on food. He poked fun at the Skinner box, seeing it as a splendid instrument to demonstrate the effectiveness of food rewards but not to study complex behavior. Harlow added this sarcastic gem: “I am not for one moment disparaging the value of the rat as a subject for psychological investigation; there is very little wrong with the rat that cannot be overcome by the education of the experimenters.”6

I was amazed to learn that the nearly century-old Yerkes Primate Center went through an early period in which it tested food deprivation on chimpanzees. In the early years, the center was still located in Orange Park, Florida, before it moved to Atlanta, where it became a major institute for biomedical and behavioral neuroscience research. While still in Florida, in 1955, the center set up an operant conditioning program modeled on procedures with rats, including a drastic reduction in body weight and the replacement of chimp names with numbers. Treating apes as rats proved no success, however. Due to the gigantic tensions this program engendered, it lasted only two years. The director and most of the staff deplored the fasting imposed on their apes and constantly argued with the hard-nosed behaviorists who claimed that this was the only way to give the apes “purpose in life,” as they blithely called it. Expressing no interest in cognition—the existence of which they didn’t even acknowledge—they investigated reinforcement schedules and the punitive effect of time-outs. Rumor had it that staff sabotaged their project by secretly feeding the apes at night. Feeling unwelcome and unappreciated, the behaviorists left because, as Skinner later put it, “tender-hearted colleagues frustrated [their] efforts to reduce chimpanzees to a satisfactory state of deprivation.”7 Nowadays, we would recognize the friction as about not just methodology but also ethics. That creating morose, grumpy apes through starvation was unnecessary was clear from one of the behaviorists’ own attempts with an alternative incentive. Chimpanzee number 141, as he called him, successfully learned a task after each correct choice was rewarded with an opportunity to groom the experimenter’s arm.8

The difference between behaviorism and ethology has always been one of human-controlled versus natural behavior. Behaviorists sought to dictate behavior by placing animals in barren environments in which they could do little else than what the experimenter wanted. If they didn’t, their behavior was classified as “misbehavior.” Raccoons, for example, are almost impossible to train to drop coins into a box, because they prefer to hold on to them and frantically rub them together—a perfectly normal foraging behavior for this species.9 Skinner had no eye for such natural proclivities, however, and preferred a language of control and domination. He spoke of behavioral engineering and manipulation, and not just in relation to animals. Later in life he sought to turn humans into happy, productive, and “maximally effective” citizens.10 While there is no doubt that operant conditioning is a solid and valuable idea and a powerful modifier of behavior, behaviorism’s big mistake was to declare it the only game in town.

Ethologists, on the other hand, are more interested in spontaneous behavior. The first ones were eighteenth-century Frenchmen, who already used the label ethology, derived from the Greek ethos, “character,” to refer to the study of species-typical characteristics. In 1902 the great American naturalist William Morton Wheeler made the English term popular as the study of “habits and instincts.”11 Ethologists did conduct experiments and were not averse to working with captive animals, but still a world of difference lay between Lorenz calling his jackdaws down from the sky or being followed by a gaggle of waddling goslings and Skinner standing before rows of cages with singly housed pigeons, firmly closing his hand around the wings of one of his birds.

Ethology developed its own specialized language about instincts, fixed action patterns (a species’ stereotypical behavior, such as the dog’s tail wagging), innate releasers (stimuli that elicit specific behavior, such as the red dot on a gull’s bill that triggers pecking by hungry chicks), displacement activities (seemingly irrelevant actions resulting from conflicting tendencies, such as scratching oneself before a decision), and so on. Without going into the details of its classical framework, ethology’s focus was on behavior that develops naturally in all members of a given species. A central question was what purpose a behavior might serve. Initially, the grand architect of ethology was Lorenz, but after he and Tinbergen met in 1936, the latter became the one to fine-tune the ideas and develop critical tests. Tinbergen was the more analytical and empirical of the two, with an excellent eye for the questions behind observable behavior; he conducted field experiments on digger wasps, sticklebacks, and gulls to pinpoint behavioral functions.12

The two men formed a complementary relationship and friendship, which was tested by World War II, during which they were on opposite sides. Lorenz served as a medical officer in the German army and opportunistically sympathized with Nazi doctrine; Tinbergen was imprisoned for two years by the German occupiers of the Netherlands for joining a protest against the way his Jewish colleagues at the university were treated. Remarkably, both scientists patched things up after the war for the sake of their shared love of animal behavior. Lorenz was the charismatic, flamboyant thinker—he didn’t conduct a single statistical analysis in his life—while Tinbergen did the nitty-gritty of actual data collection. I have seen both men speak and can attest to the difference. Tinbergen came across as academic, dry, and thoughtful, whereas Lorenz enthralled his audiences with his enthusiasm and intimate animal knowledge. Desmond Morris, a Tinbergen student famous for writing The Naked Ape and other popular books, got his socks knocked off by Lorenz, saying that the Austrian understood animals better than anyone he’d ever met. He described Lorenz’s 1951 lecture at Bristol University as follows:

To describe his performance as a tour de force is an understatement. Looking like a cross between God and Stalin, his presence was overpowering. “Contrary to your Shakespeare,” he boomed, “there is madness in my method.” And indeed there was. Almost all his discoveries were made by accident and his life consisted largely of a series of disasters with the menageries of animals with which he surrounded himself. His understanding of animal communication and display patterns was revelatory. When he spoke about fish, his hands became fins, when he talked about wolves his eyes were those of a predator, and when he told tales about his geese his arms became wings tucked into his sides. He wasn’t anthropomorphic, he was the opposite—theriomorphic—he became the animal he was describing.13

A journalist once recounted how she had been sent into Lorenz’s office by a receptionist with the words that he was expecting her. His office turned out to be empty. When she asked around, people assured her that he had never left. After a while, she discovered the Nobelist partially submerged in an enormous aquarium built into the office wall. This is how we like our ethologists: as close to their animals as possible. It reminds me of my own encounter with Gerard Baerends, the silverback of Dutch ethology and the very first student of Tinbergen. After my stint in the behaviorist lab, I sought to enter Baerends’s ethology course at the University of Groningen to work with the jackdaw colony that flew around the institution’s nest boxes. Everyone warned me that Baerends was very strict and did not let just anybody in. When I walked into his office, my eyes were immediately drawn to a large well-kept tank with convict cichlids. Being an avid aquarist myself, I hardly took the time to introduce myself before we launched into a discussion of how these fish raise and guard their fry, which they do extraordinarily well. Baerends must have taken my passion as a good sign, because I was admitted without a problem.

The great novelty of ethology was to bring the perspective of morphology and anatomy to bear on behavior. This was a natural step, because whereas behaviorists were mostly psychologists, ethologists were mostly zoologists. They discovered that behavior is not nearly as fluid and hard to define as it might seem. It has a structure, which can be quite stereotypical, such as the way young birds flutter their wings while begging for food with gaping mouths, or how some fish keep fertilized eggs in their mouth until they hatch. Species-typical behavior is as recognizable and measurable as any physical trait. Given their invariant structure and meaning, human facial expressions are another good example. The reason we now have software that reliably recognizes human expressions is that all members of our species contract the same facial muscles under similar emotional circumstances.

[Image: Konrad Lorenz and other ethologists wanted to know how animals behave of their own accord and how it suits their ecology. In order to understand the parent-offspring bond in waterfowl, Lorenz let goslings imprint on himself. They followed the pipe-smoking zoologist around wherever he went.]

Insofar as behavior patterns are innate, Lorenz argued, they must be subject to the same rules of natural selection as physical traits and be traceable from species to species across the phylogenetic tree. This is as true for the mouth brooding of certain fish as it is for primate facial expressions. Given that the facial musculature of humans and chimpanzees is nearly identical, the laughing, grinning, and pouting of both species likely go back to a common ancestor.14 Recognition of this parallel between anatomy and behavior was a great leap forward, which is nowadays taken for granted. We all now believe in behavioral evolution, which makes us Lorenzians. Tinbergen’s role was, as he put it himself, to act as the “conscience” of the new discipline by pushing for more precise formulations of its theories and developing ways to test them. He was overly modest in saying so, though, because in the end it was he who best spelled out the ethological agenda and turned the field into a respectable science.

Keeping It Simple

Despite the differences between ethology and behaviorism, the two schools had one thing in common. Both were reactions against the overinterpretation of animal intelligence. They were skeptical of “folk” explanations and dismissed anecdotal reports. Behaviorism was the more vehement in its rejection, saying that behavior is all we have to go by and that internal processes can be safely ignored. There is even a joke about its complete reliance on external cues, in which one behaviorist asks another after lovemaking: “That was great for you. How was it for me?”

In the nineteenth century, it was perfectly acceptable to talk about the mental and emotional lives of animals. Charles Darwin himself had written a whole tome about the parallels between human and animal emotional expressions. But while Darwin was a careful scientist who double-checked his sources and conducted observations of his own, others went overboard, almost as in a contest of who could come up with the wildest claim. When Darwin chose the Canadian-born George Romanes as his protégé and successor, the stage was set for an avalanche of misinformation. About half the animal stories collected by Romanes sound plausible enough, but others are embellished or plainly unlikely. They range from a story about rats forming a supply line to their hole in the wall, carefully handing down stolen eggs with their forepaws, to one about a monkey hit by a hunter’s bullet who smeared his hand with his own blood and held it out to the hunter to make him feel guilty.15

Romanes knew the mental operations required for such behavior, he said, by extrapolating from his own. The weakness of his introspective approach was, of course, its reliance on one-time events and on trust in one’s own private experiences. I have nothing against anecdotes, especially if they have been caught on camera or come from reputable observers who know their animals; but I do view them as a starting point of research, never an end point. For those who disparage anecdotes altogether, it is good to keep in mind that almost all interesting work on animal behavior has begun with a description of a striking or puzzling event. Anecdotes hint at what is possible and challenge our thinking.

But we cannot rule out that the event was a fluke, never to be repeated again, or that some decisive aspect went unnoticed. The observer may also unconsciously have filled in missing details based on his or her assumptions. These issues are not easily resolved by collecting more anecdotes. “The plural of anecdote is not data,” as the saying goes. It is ironic, therefore, that when it was his own turn to find a protégé and successor, Romanes chose Lloyd Morgan, who put an end to all this unrestrained speculation. Morgan, a British psychologist, formulated in 1894 what is probably the most quoted recommendation in all of psychology:

In no case may we interpret an action as the outcome of the exercise of a higher psychical faculty, if it can be interpreted as the outcome of the exercise of one which stands lower on the psychological scale.16

Generations of psychologists have dutifully repeated Morgan’s Canon, taking it to mean that it is safe to assume that animals are stimulus-response machines. But Morgan never meant it that way. In fact, he rightly added, “But surely the simplicity of an explanation is no necessary criterion of its truth.”17 Here he was reacting against the mindset according to which animals are blind automata without souls. No self-respecting scientist would talk of “souls,” but to deny animals any intelligence and consciousness came close enough. Taken aback by these views, Morgan added a provision to his canon according to which there is nothing wrong with more complex cognitive interpretations if the species in question has already been proven to have high intelligence.18 With animals such as chimpanzees, elephants, and crows, for which we have ample evidence of complex cognition, we really do not need to start at zero every time we are struck by seemingly smart behavior. We don’t need to explain their behavior the way we would that of, say, a rat. And even for the poor underestimated rat, zero is unlikely to be the best starting point.

Morgan’s Canon was seen as a variation on Occam’s razor, according to which science should seek explanations with the smallest number of assumptions. This is a noble goal indeed, but what if a minimalist cognitive explanation asks us to believe in miracles? Evolutionarily speaking, it would be a true miracle if we had the fancy cognition that we believe we have while our fellow animals had none of it. The pursuit of cognitive parsimony often conflicts with evolutionary parsimony.19 No biologist is willing to accept such a miracle: we believe in gradual modification. We don’t like to propose gaps between related species without at least coming up with an explanation. How did our species become rational and conscious if the rest of the natural world lacks any stepping-stones? Rigorously applied to animals—and to animals alone!—Morgan’s Canon promotes a saltationist view that leaves the human mind dangling in empty evolutionary space. It is to the credit of Morgan himself that he recognized the limitations of his canon and urged us not to confuse simplicity with reality.

It is less well known that ethology, too, arose amid skepticism about subjective methods. Tinbergen and other Dutch ethologists were shaped by the hugely popular illustrated books of two schoolmasters who taught love and respect for nature while insisting that the only way to truly understand animals was to watch them outdoors. This inspired a massive youth movement in Holland, with field excursions every Sunday, that laid the groundwork for a generation of eager naturalists. This approach did not combine well, however, with the Dutch tradition of “animal psychology,” the dominant figure of which was Johan Bierens de Haan. Internationally famous, erudite, and professorial, Bierens de Haan must have looked rather out of place as an occasional guest at Tinbergen’s field site in the Hulshorst, a dune area in the middle of the country. While the younger generation ran around in shorts holding butterfly nets, the older professor came in suit and tie. These visits attest to the cordiality between both scientists before they grew apart, but young Tinbergen soon began to challenge the tenets of animal psychology, such as its reliance on introspection. Increasingly, he put distance between his own thinking and Bierens de Haan’s subjectivism.20 Not being a compatriot, Lorenz showed less patience with the old man, whom he—in a play on his name—mischievously dubbed Der Bierhahn (German for “the beer tap”).

Tinbergen is nowadays best known for his Four Whys: four different yet complementary questions that we ask about behavior. But none of them explicitly mentions intelligence or cognition.21 That ethology avoided any mention of internal states was perhaps essential for a budding empirical science. As a consequence, ethology temporarily closed the book on cognition and focused instead on the survival value of behavior. In doing so, it planted the seeds of sociobiology, evolutionary psychology, and behavioral ecology. This focus also offered a convenient way around cognition. As soon as questions about intelligence or emotions came up, ethologists would quickly rephrase them in functional terms. For example, if one bonobo reacts to the screams of another by rushing over for a tight embrace, classical ethologists will first of all wonder about the function of such behavior. They’d have debates about who benefited the most, the performer or the recipient, without asking what bonobos understood about one another’s situations, or why the emotions of one should affect those of another. Might apes be empathic? Do bonobos evaluate one another’s needs? This kind of cognitive query made (and still makes) many ethologists uncomfortable.

Blaming the Horse

It is curious that ethologists looked down on animal cognition and emotions as too speculative, while feeling on safe ground with behavioral evolution. If there is one area rife with conjecture, it is how behavior evolved. Ideally, you’d first establish the behavior’s heredity and then measure its impact on survival and reproduction over multiple generations. But we rarely get anywhere close to having this information. With fast-breeding organisms, such as slime molds or fruit flies, these questions may be answerable, but evolutionary accounts of elephant behavior, or human behavior for that matter, remain largely hypothetical since these species don’t permit large-scale breeding experiments. While we do have ways of testing hypotheses and mathematically modeling the consequences of behavior, the evidence is largely indirect. Birth control, technology, and medical care make our own species an almost hopeless test case for evolutionary ideas, which is why we have a plethora of speculations about what happened in the Environment of Evolutionary Adaptedness (EEA). This refers to the living conditions of our hunter-gatherer ancestors, about which we obviously have incomplete knowledge.

In contrast, cognition research deals with processes in real time. Even though we cannot actually “see” cognition, we are able to design experiments that help us deduce how it works while eliminating alternative accounts. In this regard, it really isn’t different from any other scientific endeavor. Nevertheless, the study of animal cognition is still often considered a soft science, and until recently young scientists were advised to stay away from such a tricky topic. “Wait until you have tenure,” some older professors would say. The skepticism goes all the way back to the curious case of a German horse named Hans, who lived around the time Morgan crafted his canon. Hans became its proof positive. The black stallion was known in German as Kluger Hans, translated as Clever Hans, since he seemed to excel at arithmetic. His owner would ask him to multiply four by three, and Hans would happily tap his hoof twelve times. He could also tell you the date of a given weekday if he knew the date of an earlier one, and he could tell the square root of sixteen by tapping four times. Hans solved problems he had never heard before. People were flabbergasted, and the stallion became an international sensation.

[Image: Clever Hans was a German horse that drew admiring crowds about a century ago. He seemed to excel at arithmetic, such as addition and multiplication. A more careful examination revealed, however, that his main talent was the reading of human body language. He succeeded only if he could see someone who knew the answer.]

That is, until Oskar Pfungst, a German psychologist, investigated the horse’s abilities. Pfungst had noticed that Hans was successful only if his owner knew the answer and was visible to the horse. If the owner or any other questioner stood behind a curtain while posing their question, the horse failed. It was a frustrating experiment for Hans, who would bite Pfungst if he got too many answers wrong. Apparently, the way he got them right was that the owner would subtly shift his position or straighten his back the moment Hans reached the correct number of taps. The questioner would be tense in face and posture until the horse reached the answer, at which point he would relax. Hans was very good at picking up these cues. The owner also wore a hat with a wide brim, which would be down as long as he looked at Hans’s tapping hoof and go up when Hans reached the right number. Pfungst demonstrated that anyone wearing such a hat could get any number out of the horse by lowering and then raising his head.22

Some spoke of a hoax, but the owner was unaware that he was cuing his horse, so there was no fraud involved. Even once the owner knew, he found it nearly impossible to suppress his signals. In fact, following the report by Pfungst, the owner was so disappointed that he accused the horse of treachery and wanted him to spend the rest of his life pulling hearses as punishment. Instead of being mad at himself, he blamed his horse! Luckily for Hans, he ended up with a new owner who admired his abilities and tested them further. This was the right spirit, because rather than downgrading animal intelligence, the whole affair had demonstrated an incredible sensitivity. Hans’s talent at arithmetic may have been flawed, but his understanding of human body language was outstanding.23

As an Orlov Trotter stallion, Hans appears to have perfectly fit the description of this Russian breed: “Possessed of amazing intelligence, they learn quickly and remember easily with few repetitions. There is often an uncanny understanding of what is wanted and needed of them at any given time. Bred to love people, they bond very tightly to their owners.”24

Instead of being a disaster for animal cognition studies, the horse’s exposé proved a blessing in disguise. Awareness of the Clever Hans Effect, as it became known, has greatly improved animal testing. By illustrating the power of blind procedures, Pfungst paved the way for cognitive studies that were able to withstand scrutiny. Ironically, this lesson is often ignored in research on humans. Young children are typically presented with cognitive tasks while sitting on their mothers’ laps. The assumption is that mothers are like furniture, but every mother wants her child to succeed, and nothing guarantees that her body movements, sighs, and nudges don’t cue her child. Thanks to Clever Hans, the study of animal cognition is now more rigorous than that. Dog labs test the cognition of their animals while the human owner is blindfolded or stands in a corner facing away. In one well-known study, in which Rico, a border collie, recognized more than two hundred words for different toys, the owner would ask for a specific toy located in a different room. This prevented the owner from looking at the toy and unconsciously guiding the dog’s attention. Rico had to run to the other room to fetch the requested item, which is how the Clever Hans Effect was avoided.25

We owe Pfungst a profound debt for demonstrating that humans and animals develop communication that they are unaware of. The horse reinforced behavior in his owner, and the owner in his horse, while everyone was convinced that they were doing something else entirely. The realization of what was going on made the historical pendulum swing firmly from rich to lean interpretations of animal intelligence—where it unfortunately got stuck for too long—but other appeals to simplicity have fared less well. Below I describe two examples, one concerning self-awareness and the other culture, both concepts that, whenever mentioned in relation to animals, still send some scholars through the roof.

Armchair Primatology

When American psychologist Gordon Gallup, in 1970, first showed that chimpanzees recognize their own reflection, he spoke of self-awareness—a capacity that he said was lacking in species, such as monkeys, that failed his mirror test.26 The test consisted of putting a mark on the body of an anesthetized ape that it could find, once awake, only by inspecting its reflection. Gallup’s choice of words obviously annoyed those leaning toward a robotic view of animals.

The first counterattack came from B. F. Skinner and colleagues, who promptly trained pigeons to peck at dots on themselves while standing in front of a mirror.27 Reproducing a semblance of the behavior, they felt, would solve the mystery. Never mind that it took them hundreds of grain rewards to get the pigeons to do something that chimpanzees and humans do without any coaching. One can train goldfish to play soccer and bears to dance, but does anyone believe that this tells us much about the skills of human soccer stars or dancers? Worse, we aren’t even sure that this pigeon study is replicable. Another research team spent years trying the exact same training, using the same strain of pigeon, without producing any self-pecking birds. They ended up publishing a report critical of the original study with the word Pinocchio in its title.28

The second counterattack was a fresh interpretation of the mirror test, suggesting that the observed self-recognition might be a by-product of the anesthesia used in the marking procedure. Perhaps when a chimpanzee recovers from the anesthesia, he randomly touches his face, resulting in accidental contact with the mark.29 This idea was quickly disproved by another team that carefully recorded which facial areas chimpanzees touch. It turned out that the touching is far from random: it specifically targets the marked area and peaks right after the ape has seen his own reflection.30 This was, of course, what the experts had been saying all along, but now it was official.

Apes really don’t need anesthesia to show how well they understand mirrors. They spontaneously use them to look inside their mouths, and females always turn around to check out their behinds—something males don’t care about. Both are body parts that they normally never get to see. Apes also use mirrors for special needs. For example, Rowena has a little injury on the top of her head caused by a scuffle with a male. When we hold up a mirror, she immediately inspects the injury and grooms around it while following the reflection of her movements. Another female, Borie, has an ear infection that we are trying to treat with antibiotics, but she keeps waving her hand in the direction of a table that is empty except for a small plastic mirror. It takes a while before we understand her intentions, but as soon as we hand her the toy mirror, she picks up a straw and angles the mirror such that she can clean out her ear while watching the process in the mirror.

[Image: B. F. Skinner was more interested in experimental control over animals than in spontaneous behavior. Stimulus-response contingencies were all that mattered. His behaviorism dominated animal studies for much of the last century. Loosening its theoretical grip was a prerequisite for the rise of evolutionary cognition.]

A good experiment doesn’t create new and unusual behavior but taps into natural tendencies, which is exactly what Gallup’s test did. Given the apes’ spontaneous mirror use, no expert would ever have come up with the anesthesia story. So what makes scientists unaccustomed to primates think they know better? Those of us who work with exceptionally gifted animals are used to unsolicited opinions about how we ought to test them and what their behavior actually means. I find the arrogance behind such advice mind-boggling. Once, in his desire to underscore the uniqueness of human altruism, a prominent child psychologist shouted at a large audience, “No ape will ever jump into a lake to save another!” It was left to me to point out during the Q&A afterward that there are actually a handful of reports of apes doing precisely this—often to their own detriment, since they don’t swim.31

The same arrogance explains the doubts raised about one of the best-known discoveries in field primatology. In 1952 the father of Japanese primatology, Kinji Imanishi, first proposed that we may justifiably speak of animal culture if individuals learn habits from one another, resulting in behavioral diversity between groups.32 By now fairly well accepted, this idea was so radical at the time that it took Western science forty years to catch up. In the meantime, Imanishi’s students patiently documented the spreading of sweet potato washing by Japanese macaques on Koshima Island. The first monkey to do so was a juvenile female named Imo, now honored with a statue at the entrance to the island. From Imo the habit spread to her age peers, then to their mothers, and eventually to nearly all monkeys on the island. Sweet potato washing became the best-known example of a learned social tradition, passed on from generation to generation.

Many years later, this view triggered a so-called killjoy account—an attempt to deflate a cognitive claim by proposing a seemingly simpler alternative—according to which the monkey-see-monkey-do explanation of Imanishi’s students was overblown. Why couldn’t it just have been individual learning—that is, each monkey acquired potato washing on its own without the assistance of anybody else? There might even have been human influence. Perhaps potatoes were handed out selectively by Satsue Mito, Imanishi’s assistant, who knew every monkey by name. She may have rewarded monkeys who dipped their spuds in the water, thus prompting them to do so ever more frequently.33

The only way to find out was to go to Koshima and ask. Having been twice to this island in the subtropical south of Japan, I had a chance to interview the then eighty-four-year-old Mrs. Mito via an interpreter. She reacted with incredulity to my question about food provisioning. One cannot hand out food any way one wants, she insisted. Any monkey that holds food while high-ranking males are empty-handed risks getting into trouble. Macaques are very hierarchical and can be violent, so putting Imo and other juveniles before the rest would have endangered their lives. In fact, the last monkeys to learn potato washing, the adult males, were the first ones to be fed. When I put it to Mrs. Mito that she might have rewarded washing behavior, she denied that this was even possible. In the early years, potatoes were handed out in the forest far away from the freshwater stream where the monkeys did their cleaning. They’d collect their spuds and quickly run off with them, often bipedally since their hands were full. There was no way for Mito to reward whatever they did in the distant stream.34 But perhaps the strongest argument for social as opposed to individual learning was the way the habit spread. It can hardly be coincidental that one of the first to follow Imo’s example was her mother, Eba. After this, the habit spread to Imo’s peers. The learning of potato washing nicely tracked the network of social relations and kinship ties.35

[Image: The first evidence for animal culture came from sweet-potato-washing Japanese macaques on Koshima Island. Initially, the washing tradition spread among same-aged peers, but nowadays it is propagated transgenerationally, from mother to offspring.]

Like the scientist who gave us the mirror-anesthesia hypothesis, the one who wrote an entire article debunking the Koshima discovery was a nonprimatologist who, moreover, never bothered to set foot on Koshima or check his ideas with the fieldworkers who had camped for decades on the island. Again, I can’t help but wonder about the mismatch between conviction and expertise. Perhaps this attitude is a leftover of the mistaken belief that if you know enough about rats and pigeons, you know everything there is to know about animal cognition. It prompts me to propose the following know-thy-animal rule: Anyone who wishes to advance an alternative claim about an animal’s cognitive capacities either needs to familiarize him- or herself with the species in question or make a genuine effort to back his or her counterclaim with data. Thus, while I admire Pfungst’s work with Clever Hans and its eye-opening conclusions, I have great trouble with armchair speculations devoid of any attempt to check their validity. Given how seriously the field of evolutionary cognition takes variation between species, it is time to respect the special expertise of those who have devoted a lifetime to getting to know one of them.

The Thaw

One morning at Burgers’ Zoo, we showed the chimpanzees a crate full of grapefruits. The colony was in the building where it spends the night, which adjoins a large island, where it spends the day. The apes seemed interested enough watching us carry the crate through a door onto the island. When we returned to the building with an empty crate, however, pandemonium broke out. As soon as they saw that the fruits were gone, twenty-five apes burst out hooting and hollering in a most festive mood, slapping one another’s backs. I have never seen animals so excited about absent food. They must have inferred that grapefruits cannot vanish, hence must have remained on the island onto which the colony would soon be released. This kind of reasoning does not fall into any simple category of trial-and-error learning, especially since it was the first time we followed this procedure. The grapefruit experiment was a one-time event to study responses to cached food.

One of the first tests of inferential reasoning was conducted by American psychologists David and Ann Premack, who presented Sadie, a chimpanzee, with two boxes. They placed an apple in one and a banana in the other. After a few minutes of distraction, Sadie saw one of the experimenters munching on either an apple or a banana. This experimenter then left, and Sadie was released to inspect the boxes. She faced an interesting dilemma, since she had not seen how the experimenter had gotten his fruit. Invariably, Sadie would go to the box with the fruit that the experimenter had not eaten. The Premacks ruled out gradual learning, because Sadie made this choice on the very first trial as well as all subsequent ones. She seemed to have reached two conclusions. First, that the eating experimenter had removed his fruit from one of the two boxes, even if she had not actually seen him do so. And second, that this meant that the other box must still hold the other fruit. The Premacks note that most animals don’t make any such assumptions: they just see an experimenter consume fruit, that’s all. Chimpanzees, in contrast, try to figure out the order of events, looking for logic, filling in the blanks.36

Years later the Spanish primatologist Josep Call presented apes with two covered cups. They had learned that only one would be baited with grapes. If Call removed the tops and let them look inside the cups, the apes chose the one with grapes. Next, he kept the cups covered and shook first one, then the other. Only the cup with grapes made noise, which was the one they preferred. This was not too surprising. But making things harder, Call would sometimes shake only the empty cup, which made no noise. In this case, the apes would still pick the other one, thus operating on the basis of exclusion. From the absence of sound, they guessed where the grapes must be. Perhaps we are not impressed by this either, as we take such inferences for granted, but it is not all that obvious. Dogs, for example, flunk this task. Apes are special in that they seek logical connections based on how they believe the world works.37

Here it gets interesting, because aren’t we supposed to go for the simplest possible explanation? If large-brained animals, such as apes, try to get at the logic behind events, could this be the simplest level at which they operate?38 It reminds me of Morgan’s provision to his canon, according to which we are allowed more complex premises in the case of more intelligent species. We most certainly apply this rule to ourselves. We always try to figure things out, applying our reasoning powers to everything around us. We go so far as to invent causes if we can’t find any, leading to weird superstitions and supernatural beliefs, such as sports fans wearing the same T-shirt over and over for luck, and disasters being blamed on the hand of God. We are so logic-driven that we can’t stand the absence of it.

Evidently, the word simple is not as simple as it sounds. It means different things in relation to different species, which complicates the eternal battle between skeptics and cognitivists. In addition, we often get tangled up in semantics that aren’t worth the heat they generate. One scientist will argue that monkeys understand the danger posed by leopards, whereas another will say that monkeys have merely learned from experience that leopards sometimes kill members of their species. Both statements are really not that different, even though the first uses the language of understanding, and the second of learning. With the decline of behaviorism, debates on these issues have fortunately grown less fiery. By attributing all behavior under the sun to a single learning mechanism, behaviorism set up its own downfall. Its dogmatic overreach made it more like a religion than a scientific approach. Ethologists loved to slam it, saying that instead of domesticating white rats in order to make them suitable to a particular testing paradigm, behaviorists should have done the opposite. They should have invented paradigms that fit “real” animals.

The counterpunch came in 1953, when Daniel Lehrman, an American comparative psychologist, sharply attacked ethology.39 Lehrman objected to simplistic definitions of innate, saying that even species-typical behavior develops from a history of interaction with the environment. Since nothing is purely inborn, he argued, the term instinct is misleading and should be avoided. Ethologists were stung and dismayed by his unexpected critique, but once they got over their “adrenaline attack” (Tinbergen’s words), they discovered that Lehrman hardly fit the behaviorist bogeyman stereotype. He was an enthusiastic bird-watcher, for example, who knew his animals. This impressed the ethologists, and Baerends recalled that when they met the “enemy” in person, they managed to resolve most misunderstandings, found common ground, and became “very good friends.”40 Once Tinbergen became acquainted with Danny, as they now knew Lehrman, he went so far as to call him more of a zoologist than a psychologist, which the latter took as a compliment.41

Their bonding over birds went way beyond the way John F. Kennedy and Nikita Khrushchev bonded over Pushinka, a little dog that the Soviet leader sent to the White House. Despite this gesture, the Cold War continued unabated. In contrast, Lehrman’s harsh critique and the subsequent meeting of minds between comparative psychologists and ethologists set in motion a process of mutual respect and understanding. Tinbergen, in particular, acknowledged Lehrman’s influence on his later thinking. Apparently, they had needed a big spat to start a rapprochement, which was hastened by ongoing criticism within each camp of its own tenets. Within ethology, the younger generation grumbled about the rigid Lorenzian drive and instinct concepts, whereas comparative psychology had an even longer tradition of challenges to its own dominant paradigm.42 Cognitive approaches had been tried off and on, even as early as the 1930s.43 But ironically, the biggest blow to behaviorism came from within. It all started with a simple learning experiment conducted on rats.

Anyone who has tried to punish a dog or cat for problematic behavior knows that it is best to do so quickly, while the offense is still visible or at least fresh in the animal’s mind. If you wait too long, your pet doesn’t connect your scolding with the stolen meat or the droppings behind the couch. Since short intervals between behavior and consequence have always been regarded as essential, no one was prepared when, in 1955, the American psychologist John Garcia claimed he had found a case that broke all the rules: rats learn to refuse poisoned foods after just a single bad experience even if the resulting nausea takes hours to set in.44 Moreover, the negative outcome had to be nausea—electric shock didn’t have the same effect. Since toxic food works slowly and makes you sick, none of this was particularly surprising from a biological standpoint. Avoiding bad food seems a highly adaptive mechanism. For standard learning theory, however, these findings came like a bolt out of the blue, given the assumption that time intervals should be short and that the kind of punishment is irrelevant. The findings were in fact devastating, and Garcia’s conclusions were so unwelcome that he had trouble getting them published. One imaginative reviewer contended that his data were less likely than finding bird shit in a cuckoo clock! The Garcia Effect is now well established, though. In our own lives, we remember food that has poisoned us so well that we gag at the mere thought of it or never set foot in a certain restaurant again.

[Image: American psychologist Frank Beach lamented the narrow focus of behavioral science on the albino rat. His incisive critique featured a cartoon in which a Pied Piper rat is followed by a happy mass of white-coated experimental psychologists. Carrying their favorite tools—mazes and Skinner boxes—they are being led into a deep river. After S. J. Tatz in Beach (1950).]

For readers who wonder about the fierce resistance to Garcia’s discovery despite the fact that most of us have firsthand experience with the power of nausea, it is good to realize that human behavior was (and still is) often seen as the product of reflection, such as an analysis of cause and effect, whereas animal behavior was supposedly free of such processes. Scientists were not ready to equate the two. Human reflection is chronically overrated, though, and we now suspect that our own reaction to food poisoning is in fact similar to that of rats. Garcia’s findings forced comparative psychology to admit that evolution pushes cognition around, adapting it to the organism’s needs. This became known as biologically prepared learning: each organism is driven to learn those things it needs to know in order to survive. This realization obviously helped the rapprochement with ethology. Moreover, the geographic distance between both schools fell away. Once comparative psychology took hold in Europe—which is how I briefly ended up in a behaviorist lab—and ethology was being taught in North American zoology departments, students on both sides of the Atlantic could absorb the entire range of views and begin to integrate them. The synthesis between the two approaches did not take place just at international meetings or in the literature, therefore, but also in the classrooms.

We entered a period of crossover scholars, which I’ll illustrate with just two examples. The first is the American psychologist Sara Shettleworth, who for most of her career taught at the University of Toronto, and who has been influential through her textbooks on animal cognition. She started out in the behaviorist corner but ended up advocating a biological approach to cognition that is sensitive to the ecological needs of each species. She remains as cautious in her interpretations of cognition as one would expect from someone of her background, yet her work gained a clear ethological flavor, which she attributes to certain professors she had as a student as well as to her involvement with her husband’s fieldwork on sea turtles. In an interview about her career, Shettleworth explicitly mentions Garcia’s work as a turning point that opened the eyes of her field to the evolutionary forces shaping learning and cognition.45

At the other end of the scale is one of my heroes, Hans Kummer, a Swiss primatologist and ethologist. As a student, I avidly devoured every paper he wrote, mostly his field studies on hamadryas baboons in Ethiopia. Kummer did not just observe social behavior and relate it to ecology; he always puzzled about the cognition behind it and conducted field experiments on (temporarily) captured baboons. He later moved to captive work on long-tailed macaques at the University of Zürich. Kummer felt that the only way to test cognitive theories was by means of controlled experiments. Observation alone was not going to cut it, so primatologists should become more like comparative psychologists if they ever wished to unravel the puzzle of cognition.46

I went through a similar transition from observation to experimentation and was greatly inspired by Kummer’s macaque lab when I set up my own lab for capuchin monkeys. The trick is to house the animals socially, which means building large indoor and outdoor areas where the monkeys can spend most of the day playing, grooming, fighting, catching insects, and so on. We trained them to enter a test chamber where they could work on a touchscreen or a social task before we’d return them to the group. This arrangement had two advantages over traditional labs, which keep monkeys, rather like Skinner’s pigeons, in single cages. First of all, there is the quality-of-life issue. It is my personal feeling that if we are going to keep highly social animals in captivity, the very least we can do for them is permit them a group life. This is the best and most ethical way to enrich their lives and make them thrive.

Second, it makes no sense to test monkeys on social skills without giving them a chance to express these skills in daily life. They need to be completely familiar with one another for us to investigate how they share food, cooperate, or judge one another’s situation. Kummer understood all this, having started out, like myself, as a primate watcher. In my opinion, anyone who intends to conduct experiments on animal cognition should first spend a couple thousand hours observing the spontaneous behavior of the species in question. Otherwise we get experiments uninformed by natural behavior, which is precisely the approach we should be leaving behind.

Today’s evolutionary cognition is a blend of both schools, taking the best parts of each. It applies the controlled experimental methodology developed by comparative psychology, combined with the blind testing that worked so well with Clever Hans, while adopting the rich evolutionary framework and observation techniques of ethology. For many young scientists, it is now immaterial whether we call them comparative psychologists or ethologists, since they integrate concepts and techniques from both. On top of this comes a third major influence, at least for work in the field. The impact of Japanese primatology is not always recognized in the West—which is why I have called it a “silent invasion”—but it is thanks to this school that we routinely name individual animals and track their social careers across multiple generations. This allows us to understand the kinship ties and friendships at the core of group life. Begun by Imanishi right after World War II, this method has become standard in work on long-lived mammals, from dolphins to elephants and primates.

Unbelievably, there was a time when Western professors warned their students away from the Japanese school because naming animals was considered too humanizing. There was of course also the language barrier, which made it hard for Japanese scientists to get heard. Junichiro Itani, Imanishi’s foremost student, was met with disbelief when he toured American universities in 1958: no one accepted that he and his colleagues were able to tell a hundred or more monkeys apart. Monkeys look so much alike, the reasoning went, that Itani obviously had to be making things up. He once told me that he was mocked to his face and had no one to defend him except the great American primatological pioneer Ray Carpenter, who did see the value of this approach.47 Nowadays, of course, we know that recognizing a large number of monkeys is possible, and we all do it. Not unlike Lorenz’s emphasis on knowing the whole animal, Imanishi urged us to empathize with the species under study. We need to get under its skin, he said, or as we would nowadays put it, try to enter its Umwelt. This old theme in the study of animal behavior is quite different from the misguided notion of critical distance, which has given us excessive worries about anthropomorphism.

The eventual international embrace of the Japanese approach illustrates something else that we learned from the tale of two schools—ethology and comparative psychology—which is that the initial animosity between divergent approaches can be overcome if we realize that each has something to offer that the other lacks. We may weave them together into a new whole that is stronger than the sum of its parts. The fusing of complementary strands is what makes evolutionary cognition the promising approach it is today. But sadly it took a century of misunderstandings and colliding egos before we got there.

Beewolves

Tinbergen was in tears when I last saw him. It was 1973, the year in which he, Lorenz, and von Frisch shared the Nobel Prize. He had come to Amsterdam to receive a different medal and give a lecture. Speaking in Dutch, his voice quavering with emotion, he asked what we had done to his country. The magnificent little spot in the dunes where he had studied gulls and terns was no more. Decades earlier, emigrating to England by boat, he had pointed at the site—the eternal self-rolled cigarette in his hand—predicting that “it will all go, irrevocably.” Years later the place was swallowed up by the expansion of Rotterdam harbor, then the busiest in the world.48

Tinbergen’s lecture reminded me of all the great things he had done, which included studies of animal cognition, even though he never used the term. He had worked on how digger wasps find their nest after a trip away. Also known as beewolves, these wasps capture and paralyze a honeybee, drag it to their nest in the sand (a long burrow), and leave it as a meal for their larvae. Before they go out to hunt for a bee, they make a brief orientation flight to memorize the location of their inconspicuous burrow. Tinbergen put objects around the nest, such as a circle of pinecones, to see what information they used to relocate it. He was able to trick the wasps, making them search at the wrong location, by moving his pinecones around.49 His study addressed problem solving tied to a species’ natural history, precisely the topic of evolutionary cognition. The wasps proved very good at this particular task.

Brainier animals have less restricted cognition and often find solutions to novel or unusual problems. The ending of my grapefruit story with the chimpanzees offers a nice demonstration. After we released the apes onto the island, a number of them passed over the site where we had hidden the fruits under the sand. Only a few small yellow patches were visible. Dandy, a young adult male, hardly slowed down when he ran over the place. Later in the afternoon, however, when all the apes were dozing off in the sun, he made a beeline for the spot. Without hesitation, he dug up the fruits and devoured them at his leisure, which he would never have been able to do had he stopped right when he saw them. He would have lost them to dominant group mates.50

Here we see the entire spectrum of animal cognition, from the specialized navigation of a predatory wasp to the generalized cognition of apes, which allows them to handle a great variety of problems, including novel ones. What struck me most is that Dandy, on his first pass, didn’t linger for a second. He must have made an instant calculation that deception was going to be his best bet.