5

The Evolution of Understanding

Animals designed to deal with affordances

Animals are designed by natural selection, of course, but such a declaration of confidence in evolution is not informative. How, more particularly, might evolution turn this trick? One of the fruits of our interlude on the designing of an elevator controller and its artifactual kin is a sharper sense of how different that R&D process is from evolution by natural selection. The computer on which the designers—the programmers—test and run their solutions is itself a product of intelligent design, as we have noted, and its initial set of building-block competences—arithmetic and conditional branching—invite all would-be programmers to conceive of their tasks in the top-down way as well, as problem-solving in which they try to embody their understanding of the problem in the solutions they build.

“How else?” one might well ask. Intelligent design of this sort starts with a goal (which may well be refined or even abandoned along the way) and works top-down, with the designers using everything they know to guide their search for solutions to the design problems (and sub-problems, and sub-sub-problems …) they set for themselves. Evolution, in contrast, has no goals, no predefined problems, and no comprehension to bring to the task; it myopically and undirectedly muddles along with what it has already created, mindlessly trying out tweaks and variations, and keeping those that prove useful, or at least not significantly harmful.

Could something as intellectually sophisticated as a digital computer, for instance, ever evolve by bottom-up natural selection? This is very hard to imagine or even to take seriously, and this has inspired some thinkers to conclude that since evolution couldn’t create a computer (or a computer program to run on it), human minds must not be products of natural selection alone, and the aspirations of Artificial Intelligence must be forlorn. The mathematician and physicist Roger Penrose (1989) is the most illustrious example. For the sake of argument let’s concede that evolution by natural selection could not directly evolve a living digital computer (a Turing machine tree or a Turing machine turtle, for example). But there is an indirect way: let natural selection first evolve human minds, and then they can intelligently design Hamlet, La Sagrada Familia, and the computer, among many other wonders. This bootstrapping process seems almost magical at first, even self-contradictory. Isn’t Shakespeare, or Gaudí, or Turing a more magnificent, brilliant “creation” than any of their brainchildren? In some regards, yes, of course, but it is also true that their brainchildren have features that couldn’t come into existence without them.

If you landed on a distant planet and were hunting along its seashore for signs of life, which would excite you more, a clam or a clam rake? The clam has billions of intricate moving parts, while the clam rake has just two crude, fixed parts, but it must be an artifact of some living thing, something much, much more impressive than a clam. How could a slow, mindless process build a thing that could build a thing that a slow mindless process couldn’t build on its own? If this question seems to you to be unanswerable, a rhetorical question only, you are still in thrall to the spell Darwin broke, still unable to adopt Darwin’s “strange inversion of reasoning.” Now we can see how strange and radical it is: a process with no Intelligent Designer can create intelligent designers who can then design things that permit us to understand how a process with no Intelligent Designer can create intelligent designers who can then design things.

The intermediate steps are instructive. What about the clam rake gives away its artifactual status? Its very simplicity, which indicates its dependence on something else for its ability to defy the Second Law of Thermodynamics, persisting as uniform and symmetrical collections of atoms of elements in improbable juxtapositions. Something gathered and refined these collections. Something complicated.

Let’s return once more to simple organisms. The idea that every organism has its ontology (in the elevator sense) was prefigured in Jakob von Uexküll’s (1934) concept of the organism’s Umwelt, the behavioral environment that consists of all the things that matter to its well-being. A close kin to this idea is the psychologist J. J. Gibson’s (1979) concept of affordances: “What the environment offers the animal for good or ill.” Affordances are the relevant opportunities in the environment of any organism: things to eat or mate with, openings to walk through or look out of, holes to hide in, things to stand on, and so forth. Both von Uexküll and Gibson were silent about the issue of whether consciousness (in some still-to-be-defined sense) was involved in having an Umwelt populated by affordances, but since von Uexküll’s case studies included amoebas, jellyfish, and ticks, it is clear that he, like Gibson, was more interested in characterizing the problems faced and solved by organisms than in how, internally, these solutions were carried out. The sun is in the ontology of a honey bee; its nervous system is designed to exploit the position of the sun in its activities. Amoebas and sunflowers also include the sun in their Umwelten; lacking nervous systems, they use alternative machinery to respond appropriately to its position. So the engineer’s concept of elevator ontology is just what we need at the outset. We can leave until later the questions of whether, when, and why the ontology of an organism, or a lineage of organisms, becomes manifest in consciousness of some sort and not just implicit in the designed responses of its inner machinery. In other words, organisms can be the beneficiaries of design features that imply ontologies without themselves representing those ontologies (consciously, semiconsciously, or unconsciously) in any stronger sense. The shape of a bird’s beak, together with a few other ancillary features of anatomy, implies a diet of hard seeds, or insects, or fish, so we can stock the Umwelten of different species of birds with hard seeds, insects, and fish, as species-specific affordances on the basis of these anatomical features alone, though of course it is wise to corroborate the implication by studying behavior if it is available. The shape of the beak does not in any interesting sense represent its favored foodstuff or way of obtaining it.

FIGURE 5.1: Clam rake. © Daniel C. Dennett.

Paleontologists draw conclusions about the predatory preferences and other behaviors of extinct species using this form of inference, and it is seldom noted that it depends, ineliminably, on making adaptationist assumptions about the designs of the fossilized creatures. Consider Niles Eldredge’s (1983) example of Fisher’s (1975) research on horseshoe crab swimming speed. He cites it to demonstrate that asking the historical question “what has happened?” (“how come”) is a better tactic than asking the adaptationist question (“what for”), with its optimality assumptions. But Fisher’s conclusion about how fast the ancient horseshoe crabs swam

depends on a very safe adaptationist assumption about what is good: Faster is better—within limits. The conclusion that Jurassic horseshoe crabs swam faster depends on the premise that they would achieve maximal speed, given their shape, by swimming at a certain angle, and that they would swim so as to achieve maximal speed. So … [Fisher needs an] entirely uncontroversial, indeed tacit, use of optimality considerations to get any purchase at all on “what happened” 150 million years ago. (Dennett 1983)

Remember, biology is reverse engineering, and reverse engineering is methodologically committed to optimality considerations. “What is—or was—this feature good for?” is always on the tip of the tongue; without it, reverse engineering dissolves into bafflement.

As I said in the opening paragraph of the book, bacteria don’t know they are bacteria, but of course they respond to other bacteria in bacteria-appropriate ways and are capable of avoiding or tracking or trailing things they distinguish in their Umwelt, without needing to have any idea about what they are doing. Bacteria are in the ontology of bacteria the same way floors and doors are in the ontology of elevators, only bacteria are much more complicated. Just as there are reasons why the elevator’s control circuits are designed the way they are, there are reasons why the bacteria’s internal protein control networks are designed the way they are: in both cases the designs have been optimized to handle the problems encountered efficiently and effectively.18 The chief difference is that the design of the elevator circuits was done by intelligent designers who had worked out descriptions of the problems, and representations of reasoned solutions, complete with justifications. In the R&D history of the bacteria, there was no source code, and no comments were ever composed, to provide hints of what Mother Nature intended. This does not stop evolutionary biologists from assigning functions to some evolved features (webbed feet are for propulsion in water), and interpreting other features as mistakes of Nature (a two-headed calf). Similarly, literary editors of texts of long-dead authors don’t have to rely on autobiographical divulgences left behind in the author’s papers to interpret some unlikely passages as deliberately misleading and others as typographical errors or memory lapses.

Software development is a relatively new domain of human endeavor. While it is still in its infancy, many foibles and glitches have been identified and corrected, and a Tower of Babel of new programming languages has been created, along with a host of software-writing tools to make the job easier. Still, programming is an “art,” and even commercially released software from the best purveyors always turns out to have “bugs” in it that require correction in post-purchase updates. Why hasn’t debugging been automated, eliminating these costly errors from the outset? The most intelligent human designers, deeply informed about the purposes of the software, still find debugging code a daunting task, even when they can examine carefully commented source code produced under strictly regimented best practices (Smith 1985, 2014). There is a reason why debugging cannot be completely automated: what counts as a bug depends on all the purposes (and sub-purposes, and sub-sub-purposes) of the software, and specifying in sufficient detail what those purposes are (in order to feed them to one’s imagined automated debugger program) is, at least for practical purposes, the very same task as writing debugged code in the first place!19 Writing and debugging computer code for ambitious systems is one of the most severe tests of human imagination yet devised, and no sooner does a brilliant programmer devise a new tool that relieves the coder of some of the drudgery than the bar is raised for what we expect the coder to create (and test). This is not an unprecedented phenomenon in human activity; music, poetry, and the other arts have always confronted the would-be creator with open-ended spaces of possible “moves” that did not diminish once musical notation, writing, and ready-made paints were made available, nor did artistic creation become routinized by the addition of synthesizers and MIDI files, word-processing and spell-checking, and million-color, high-resolution computer graphics.
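
The point can be made concrete with a toy example (mine, not the author’s; the function and its two “purposes” are invented): the very same line of code is correct under one specification and a bug under another.

```python
def split_bill(total_cents: int, n_people: int) -> int:
    """Each person's share of a restaurant bill, in whole cents."""
    return total_cents // n_people  # integer division drops any remainder

share = split_bill(1000, 3)   # 333 cents
print(share * 3 <= 1000)      # True:  satisfies purpose A, "no one overpays"
print(share * 3 == 1000)      # False: violates purpose B, "shares sum to the total"
```

Nothing in the function itself marks it as buggy; the bug exists only relative to purpose B, which is why an imagined automated debugger would need the full specification of purposes, and writing that specification is as hard as writing the code.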

How does Nature debug its designs? Since there is no source code or comments to read, there can be no debugging by brilliant intellectual explanation; design revision in Nature must follow the profligate method of releasing and test-driving many variants and letting the losers die, unexamined. This won’t necessarily find the globally optimal design, but the best locally accessible versions will thrive, and further test-driving will winnow the winners further, raising the bar slightly for the next generation.20 Evolution is, as Richard Dawkins’s (1986) memorable title emphasizes, the Blind Watchmaker, and given the R&D method used, it is no wonder that evolution’s products are full of opportunistic, short-sighted, but deviously effective twists and turns—effective except when they aren’t! One of the hallmarks of design by natural selection is that it is full of bugs, in the computer programmer’s sense: design flaws that show up only under highly improbable conditions, conditions never encountered in the finite course of R&D that led to the design to date, and hence not yet patched or worked around by generations of tinkering. Biologists are very good at subjecting the systems they are studying to highly improbable conditions, imposing extreme challenges to see where and when the systems fail, and why.
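
The method is easy to caricature in code. Here is a minimal sketch (my illustration; the two-peak fitness landscape is invented) of blind generate-and-test, which consults no description of the problem and happily settles on the best locally accessible design:

```python
import random

def fitness(x: float) -> float:
    # An invented landscape: a local peak near x = 2 (height 4) and the
    # global peak near x = 8 (height 9), with a dead-flat valley between.
    return max(0.0, 4 - (x - 2) ** 2) + max(0.0, 9 - (x - 8) ** 2)

random.seed(1)
design = 0.0
for _ in range(10_000):
    variant = design + random.gauss(0, 0.3)    # mindless tweak
    if fitness(variant) >= fitness(design):    # test-drive; losers die unexamined
        design = variant                       # keep whatever proved no worse

print(round(design, 2), round(fitness(design), 2))
# Starting near 0, the lineage climbs the nearer peak (x near 2) and stays
# there: small tweaks cannot cross the valley to the better design near 8.
```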

What they typically discover, when reverse engineering an organism, is like the all-but-undecipherable “spaghetti code” of undisciplined programmers. If we make the effort to decipher spaghetti code, we can usually note which unlikely possibilities never occurred to the designers in their myopic search for the best solution to the problems posed for them. What were they thinking? When we ask the same question about Mother Nature, the answer is always the same: nothing. No thinking was involved, but nevertheless she muddled through, cobbling together a design so effective that it has survived to this day, beating out the competition in a demanding world until some clever biologist comes along and exposes the foibles.

Consider supernormal stimuli, a design glitch found in many organisms. Niko Tinbergen’s (1948, 1951, 1953, 1959) experiments with seagulls revealed a curious bias in their perceptual/behavioral machinery. The adult female has an orange spot on her beak, at which her chicks instinctually peck, to stimulate their mother to regurgitate and feed them. What if the orange spot were bigger or smaller, brighter or less distinct? Tinbergen showed that chicks would peck even more readily at exaggerated cardboard models of the orange spot, that supernormal stimuli evoked supernormal behaviors. Tinbergen also showed that birds that laid light blue, gray-dappled eggs preferred to sit on a bright blue, black-polka-dotted fake egg so large that they slid off it repeatedly.

“This isn’t a bug, it’s a feature!” is the famous programmers’ retort, and the case can be made for supernormal stimuli. As long as their Umwelt doesn’t have sneaky biologists with vivid imaginations challenging the birds with artificial devices, the system works very well, focusing the organism’s behavior on what (almost always) matters. The free-floating rationale of the whole system is clearly good enough for practical purposes, so Mother Nature was wise not to splurge on something more foolproof that would detect the ruse. This “design philosophy” is everywhere in Nature, providing the opportunities for arms races in which one species exploits a shortcut in another species’ design, provoking a counter-ploy in Design Space that ratchets both species into ever better defenses and offenses. Female fireflies sit on the ground watching male fireflies emit patterns of flashes, showing off and hoping for an answer from a female. When a female makes her choice and flashes back, the male rushes down for a mating. But this ingenious speed-dating system has been invaded by another species of firefly, Photuris, that pretends to be a female, luring the males to their death. Photuris females prefer males with longer, stronger signals, so the males are evolving shorter love letters (Lewis and Cratsley 2008).

Higher animals as intentional systems: the emergence of comprehension

Competence without comprehension is Nature’s way, both in its methods of R&D and in its smallest, simplest products, the brilliantly designed motor proteins, proofreading enzymes, antibodies, and the cells they animate. What about multicellular organisms? When does comprehension emerge? Plants, from tiny weeds to giant redwood trees, exhibit many apparently clever competences, tricking insects, birds, and other animals into helping them reproduce, forming useful alliances with symbionts, detecting precious water sources, tracking the sun, and protecting themselves from various predators (herbivores and parasites). It has even been argued (see, e.g., Kobayashi and Yamamura 2003; Halitschke et al. 2008) that some species of plants can warn nearby kin of impending predation by wafting distress signals downwind when attacked, permitting those that receive the signals to heighten their defense mechanisms in anticipation, raising their toxicity or generating odors that either repel the predators or lure symbionts that repel the predators. These responses unfold so slowly that they are hard to see as proper behaviors without the benefit of time-lapse photography, but, like the microscopic behaviors of single cells, they have clear rationales that need not be understood by the actors.

Here we see emerging something like a double standard of attribution. It is well-nigh impossible to describe and explain these organized-processes-in-time without calling them behaviors and explaining them the way we explain our own behaviors, by citing reasons and assuming that they are guided by something like perceptual monitoring, the intake of information that triggers, modulates, and terminates the responses. And when we do this, we seem to be attributing not just competence but also the comprehension that—in us—“normally goes with” such behavioral competence. We are anthropomorphizing the plants and the bacteria in order to understand them. This is not an intellectual sin. We are right to call their actions behaviors, to attribute these competences to the organisms, to explain their existence by citing the rationales that account for the benefits derived from these competences by the organisms in their “struggle” for survival. We are right, I am saying, to adopt what I call the intentional stance. The only mistake lies in attributing comprehension to the organism or to its parts. In the case of plants and microbes, fortunately, common sense intervenes to block that attribution. It is easy enough to understand how their competence can be provided by the machinery without any mentality intruding at all.

Let’s say that organisms that have spectacular competences without any need for comprehension of their rationales are gifted. They are the beneficiaries of talents bestowed on them, and these talents are not products of their own individual investigation and practice. You might even say they are blessed with these gifts, not from God, of course, but from evolution by natural selection. If our imaginations need a crutch, we can rely on the obsolescing stereotype of the robot as a mindless mechanism: plants don’t have understanding; they’re living robots. (Here’s a prediction: in a hundred years, this will be seen as an amusing fossil of biocentrism, a bit of prejudice against comprehending robots that survived well into the twenty-first century.)

While we’re on this topic, it’s interesting to recall that in the twentieth century one of the most popular objections to GOFAI was this:

The so-called intelligence in these programs is really just the intelligence—the understanding—of the programmers. The programs don’t understand anything!

I am adopting and adapting that theme but not granting understanding (yet) to anyone or anything:

The so-called intelligence in trees and sponges and insects is not theirs; they are just brilliantly designed to make smart moves at the right time, and while the design is brilliant, the designer is as uncomprehending as they are.

The opponents of GOFAI thought they were stating the obvious when they issued their critique of so-called intelligent machines, but see how the emotional tug reverses when the same observation is ventured about animals. Whereas—I surmise—most readers will be quite comfortable with my observation that plants and microbes are merely gifted, blessed with well-designed competences, but otherwise clueless, when I then venture the same opinion about “higher” animals, I’m being an awful meanie, a killjoy.

When we turn to animals—especially “higher” animals such as mammals and birds—the temptation to attribute comprehension in the course of describing and explaining the competences is much greater, and—many will insist—entirely appropriate. Animals really do understand what they’re doing. See how amazingly clever they are! Well, now that we have the concept of competence without comprehension firmly in hand, we need to reconsider this gracious opinion. The total weight of all life on the planet—the biomass—is currently estimated to be more than half bacteria and other unicellular “robots,” with “robotic” plants making up more than half the rest. Then there are the insects, including all the clueless termites and ants that outweigh the huge human population celebrated by MacCready. We and our domesticated animals may compose 98% of the terrestrial vertebrate biomass, but that is a small portion of life on the planet. Competence without comprehension is the way of life of the vast majority of living things on the planet and should be the default presumption until we can demonstrate that some individual organisms really do, in one sense or another, understand what they are doing. Then the question becomes: when, and why, does the design of organisms start representing (or otherwise intelligently incorporating) the free-floating rationales of their survival machinery? We need to reform our imaginations on this issue, since the common practice is to assume that there is some kind of understanding in “higher animals” wherever there is a rationale.

Consider a particularly striking example. Elizabeth Marshall Thomas is a knowledgeable and insightful observer of animals (including human animals), and in one of her books, The Hidden Life of Dogs (1993), she permits herself to imagine that dogs enjoy a wise understanding of their ways: “For reasons known to dogs but not to us, many dog mothers won’t mate with their sons” (p. 76). There is no doubt about their instinctive resistance to such inbreeding; they probably rely mainly on scent as their cue, but who knows what else contributes—a topic for future research. But the suggestion that dogs have any more insight into the reasons for their instinctual behaviors and dispositions than we have into ours is romanticism run wild. I’m sure she knows better; my point is that this lapse came naturally to her, an extension of the prevailing assumption, not a bold proposal about the particular self-knowledge of dogs. This is like a Martian anthropologist writing, “For reasons known to human beings but not to us, many human beings yawn when sleepy and raise their eyebrows when they see an acquaintance.” There are reasons for these behaviors, what for reasons, but they are not our reasons. You may fake a yawn or raise your eyebrows for a reason—to give a deliberate signal, or to feign acquaintance with an attractive but unfamiliar person you encounter—but in the normal case you don’t even realize you do it, and hence have no occasion to know why you do it. (We still don’t know why we yawn—and certainly dogs aren’t ahead of us on this point of inquiry, though they yawn just as we do.)

What about more obviously deliberate behaviors in animals? Cuckoos are brood parasites that don’t make their own nests. Instead, the female cuckoo surreptitiously lays her egg in the nest of a host pair of some other species of birds, where it awaits the attentions of its unwittingly adoptive parents. Often, the female cuckoo will roll one of the host eggs out of the nest—in case the host parents can count. And as soon as the cuckoo chick is hatched (and it tends to hatch sooner than the host eggs), the little bird goes to great efforts to roll any remaining eggs out of the nest. Why? To maximize the nurture it will get from its adoptive parents. The video clips of this behavior by the hatchling cuckoo are chilling demonstrations of efficient, competent killing, but there is no reason to suppose that mens rea (guilty intention, in the law) is in place. The baby bird knows not what it is doing but is nevertheless the beneficiary of its behavior. What about nest building in less larcenous species? Watching a bird build a nest is a fascinating experience, and there is no doubt that highly skilled weaving and even sewing actions are involved (Hansell 2000). There is quality control, and a modicum of learning. Birds hatched in captivity, never having seen a nest being built, will build a serviceable species-typical nest out of the available materials when it is time to build a nest, so the behavior is instinctual, but a bird will build a better nest the next season.

How much understanding does the nest-building bird have? This can be, and is being, probed by researchers (Hansell 2000, 2005, 2007; Walsh et al. 2011; Bailey et al. 2015), who vary the available materials and otherwise interfere with the conditions to see how versatile and even foresighted the birds can be. Bearing in mind that evolution can only provide for challenges encountered during R&D, we can predict that the more novel the artificial intrusions in the bird’s Umwelt are, the less likely it is that the bird will interpret them appropriately, unless the bird’s lineage evolved in a highly varied selective environment that obliged natural selection to settle on designs that are not entirely hard-wired but have a high degree of plasticity and the learning mechanisms to go with it. Interestingly, when there isn’t enough stability over time in the selective environment to permit natural selection to “predict” the future accurately (when “selecting” the best designs for the next generation), natural selection does better by leaving the next generation’s design partially unfixed, like a laptop that can be configured in many different ways, depending on the purchaser’s preferences and habits.21 Learning can take over where natural selection left off, optimizing the individuals in their own lifetimes by extracting information from the world encountered and using it to make local improvements. We will soon turn to a closer examination of this path to understanding, but, first, I want to explore a few more examples of behaviors with free-floating rationales and their implications.
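
A toy simulation can make the trade-off vivid (my sketch; the coin-flip environment, the 90%-reliable cue, and the payoffs are all invented numbers): when the selective environment won’t hold still, a design left partially unfixed, and configured by learning from local cues, outperforms a hard-wired one.

```python
import random

random.seed(2)
trials = 10_000
fixed_wins = plastic_wins = 0
for _ in range(trials):
    world = random.choice(["wet", "dry"])   # environment flips unpredictably
    # Hard-wired creature: phenotype fixed for "wet"; right half the time.
    fixed_wins += (world == "wet")
    # Plastic creature: configures itself from a cue that is 90% reliable.
    cue = world if random.random() < 0.9 else ("dry" if world == "wet" else "wet")
    plastic_wins += (cue == world)

print(fixed_wins / trials, plastic_wins / trials)   # roughly 0.5 vs. 0.9
```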

You may have seen video of antelopes being chased across the plains by a predator and noticed that some of the antelopes leap high in the air during their attempts to escape their pursuer. This is called stotting. Why do antelopes stot? It is clearly beneficial, because antelopes that stot seldom get caught and eaten. This is a causal regularity that has been carefully observed, and it demands a what for explanation. No account of the actions of all the proteins and the like in the cells of all the antelopes and predators chasing them could reveal why this regularity exists. For an answer we need the branch of evolutionary theory known as costly signaling theory (Zahavi 1975; Fitzgibbon and Fanshawe 1988). The strongest and fastest of the antelopes stot in order to advertise their fitness to the pursuer, signaling, in effect, “Don’t bother chasing me; I’m too hard to catch; concentrate on one of my cousins who isn’t able to stot—a much easier meal!” and the pursuer takes this to be an honest, hard-to-fake signal and ignores the stotter. This is both an act of communication and an act with only a free-floating rationale, which need not be appreciated by either antelope or lion. That is, the antelope may be entirely oblivious of why it is a good idea to stot if you can, and the lion may not understand why it finds stotting antelopes relatively unattractive prey, but if the signaling weren’t honest, costly signaling, it couldn’t persist in the evolutionary arms race between predator and prey. (If evolution tried a “cheap” signal, like tail flicking, which every antelope, no matter how frail or lame, could send, it wouldn’t pay lions to attend to it, so they wouldn’t.) This may seem an overly skeptical killjoy demotion of the intelligence of both antelope and lion, but it is the strict application of the same principles of reverse engineering that can account for the cuckoo and the termite and the bacterium. The rule of attribution must then be: if the competence observed can be explained without appeal to comprehension, don’t indulge in extravagant anthropomorphism. Attributing comprehension must be supported by demonstrations of much more intelligent behavior. Since stotting is not (apparently) an element in a more elaborate system of interspecies or intraspecies communication on many topics, the chances of finding a need for anything that looks like comprehension here are minimal. If you find this verdict too skeptical, try to imagine some experiments that could prove you right.
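
The logic can be put in a toy payoff model (all numbers invented; this is a sketch of the handicap idea, not a model from the cited literature). Stotting is assumed to cost a slow antelope far more of its escape margin than it costs a fast one:

```python
P_GIVEUP = 0.3   # assumed chance a lion abandons the chase on seeing stotting

def expected_escape(speed: float, stot_cost: float, stots: bool) -> float:
    if stots:
        # Either the lion gives up, or the chase runs with the handicap.
        return P_GIVEUP + (1 - P_GIVEUP) * max(0.0, speed - stot_cost)
    return speed   # just run for it

for label, speed, cost in [("fast", 0.90, 0.02), ("slow", 0.40, 0.50)]:
    print(label,
          "stot:", round(expected_escape(speed, cost, True), 3),
          "flee:", round(expected_escape(speed, cost, False), 3))
# fast: stotting (about 0.92) beats just fleeing (0.90) -- the display pays.
# slow: stotting (about 0.30) loses to fleeing (0.40) -- bluffing doesn't pay.
```

Because only the fast profit from sending the signal, it stays honest, and because it stays honest it keeps paying lions to respect it; no comprehension is required at any point in the loop.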

How could experiments support the verdict of comprehension? By showing that the animals can do what we comprehenders can do with variations on the behavior. Stotting is a variety of showing off, or bragging, and we can do that, but we also can bluff or refrain from bragging or showing off if conditions arise that render such behavior counterproductive or worse. We can modulate our bragging, tuning it to different audiences, or do some transparently exaggerated bragging to telegraph that we don’t really mean it and are making a joke. And so forth, indefinitely. Can the antelope do any of this? Can it refrain from stotting in circumstances that, in novel ways, make stotting inappropriate? If so, this is some evidence that it has—and uses—some minimal understanding of the rationale of its actions.

A rather different free-floating rationale governs the injury-feigning, ground-nesting bird, such as a piping plover, that lures a predator away from her nest by seeming to have a broken wing, keeping just out of the predator’s reach until she has drawn it far from her nest. Such a “distraction display” is found in many very widely separated species of ground-nesting birds (Simmons 1952; Skutch 1976). This seems to be deception on the bird’s part, and it is commonly called that. Its purpose is to fool the predator. Adopting Dawkins’s (1976) useful expository tactic of inventing “soliloquies,” we can devise a soliloquy for the piping plover:

I’m a low-nesting bird, whose chicks are not protectable against a predator that discovers them. This approaching predator can be expected soon to discover them unless I distract it; it could be distracted by its desire to catch and eat me, but only if it thought there was a reasonable chance of its actually catching me (it’s no dummy); it would contract just that belief if I gave it evidence that I couldn’t fly anymore; I could do that by feigning a broken wing, and so on.

Talk about sophistication! Not just a goal, but also a belief about an expectation, and a hypothesis about the rationality of the predator and a plan based on that hypothesis. It is unlikely in the extreme that any feathered “deceiver” is capable of such mental representation. A more realistic soliloquy to represent what is “in the mind” of the bird would be something like: “Here comes a predator; all of a sudden I feel this tremendous urge to do that silly broken-wing dance. I wonder why?” But even this imputes more reflective capacity to the bird than we have any warrant for. Like the elevator, the bird has been designed to make some important discriminations and do the right thing at the right time. Early investigators, rightly deeming the sophisticated soliloquy too good to be true as an account of the bird’s thinking, were tempted to hypothesize that the behavior was not deliberate at all, but a sort of panic attack, composed of unguided spasms that had the beneficial side effect of attracting the attention of the predator. But that drastically underestimates the bird’s grasp of the situation. Clever experiments on piping plovers by Ristau (1983, 1991), using a remote-controlled toy dune buggy with a stuffed raccoon mounted on it, demonstrated that the plover closely monitors the predator’s attention (gaze direction) and modulates its injury feigning, raising the intensity and then letting the predator get closer if the predator shows signs of abandoning the hunt. And, of course, she flies away at the opportune moment, once the predator is some distance from her nest. The bird doesn’t need to know the whole rationale, but it does recognize and respond appropriately to some of the conditions alluded to in the rationale. The behavior is neither a simple “knee-jerk” reflex inherited from her ancestors nor a wily scheme figured out in her rational mind; it is an evolution-designed routine with variables that respond to details in the circumstances, details that the sophisticated soliloquy captures—without excess—in the rationale of that design.

The free-floating rationale answers the reverse-engineering question: Why is this routine organized like this? If we are squeamish about anthropomorphism, we can pretend to put the answer in somewhat less “mentalistic” terms by liberal use of scare quotes: The routine is an “attention-grabbing” behavior that depends for its success on the likely “goals” and “perceptions” of a predator, designed to provoke the predator into “approaching” the plover and thus distancing itself from the nest; by “monitoring” the predator’s “attention” and modulating the behavior to maintain the predator’s “interest,” the plover typically succeeds in preventing the predation of its young. (This long-winded answer is only superficially more “scientific” than the intentional-stance version expressed in the soliloquy; the two explanations depend on the same distinctions, the same optimality assumptions, and the same informational demands.) Further empirical research may reveal further appropriate sensitivities, or it may reveal the foibles in this cobbled-together device. There is some evidence that piping plovers “know enough” not to engage in injury feigning when a cow approaches, but instead fly at the cow, pushing, not luring, it away from the nest. Would a plover resist the urge to put on an injury-feigning display if it could see that an actually injured bird, or other vulnerable prey item, had already captured the attention of the predator? Or even more wonderful, as suggested by David Haig (2014, personal correspondence):

One could imagine a bird with an actual broken wing unconvincingly attempting to escape with the intention that the predator interpret its actions as “this is a broken wing display therefore the bird is not easy prey but a nest is near.” If the predator started to search for a nest, then the predator would have recognized that the bird’s actions were a text but misunderstood the bird’s motives. The interpretation of the text is “wrong” for the predator but “right” for the bird. The text has achieved the bird’s intention but foiled that of the predator who has been deliberately misled.

Haig speaks unguardedly of the bird’s motives and intentions and of the predator’s “interpretation” of the “text,” recognizing that the task of thinking up these varied opportunities for further experiments and observations depends on our adoption of the intentional stance, but also appreciating that there is a graceful and gradual trade-off between interpreting animals (or for that matter, plants or robots or computers) as themselves harboring the reasons and the reasoning, and relegating the rationale to Mother Nature, as a free-floating rationale exposed by the mindless design-mining of natural selection.

The injury-feigning signal will have its intended effect only if the predator does not recognize it to be a signal but interprets it as an unintentional behavior—and this is true whether or not the bird or the predator understands the situation the way we do. It is the risk that the predator will catch on that creates the selection pressure for better acting by the deceptive bird. Similarly, the strikingly realistic “eye-spots” on the wings of butterflies owe their verisimilitude to the visual acuity of their predators, but of course the butterflies are the clueless beneficiaries of their deceptive gear. The deceptive rationale of the eye-spots is there all the same, and to say it is there is to say that there is a domain within which it is predictive and, hence, explanatory. (For a related discussion, see Bennett 1976, §§ 52, 53, 62.) We may fail to notice this just because of the obviousness of what we can predict: for example, in an environmental niche with bats but not birds for predators, we don’t expect moths with eye-spots (for as any rational deceiver knows, visual sleight of hand is wasted on the blind and myopic).

Comprehension comes in degrees

The time has come to reconsider the slogan competence without comprehension. Since cognitive competence is often assumed to be an effect of comprehension, I went out of my way to establish that this familiar assumption is pretty much backward: competence comes first. Comprehension is not the source of competence or the active ingredient in competence; comprehension is composed of competences. We have already considered the possibility of granting a smidgen or two of comprehension to systems that are particularly clever in the ways that they marshal their competences but that may play into the misleading image of comprehension as a separable element or phenomenon kindled somehow by mounting competence.

The idea of comprehension or understanding as a separate, stand-alone, mental marvel is ancient but obsolete. (Think of Descartes’s res cogitans, or Kant’s Critique of Pure Reason, or Dilthey’s Verstehen—which is just the German word for understanding, but since, like all German nouns, it is capitalized, when it is said with furrowed brow, it conjures up in many minds a Bulwark against Reductionism and Positivism, a Humanistic alternative to Science.) The illusion that understanding is some additional, separable mental phenomenon (over and above the set of relevant competences, including the meta-competence to exercise the other competences at appropriate times) is fostered by the aha! phenomenon, or eureka effect—that delightful moment when you suddenly recognize that you do understand something that has heretofore baffled you. This psychological phenomenon is perfectly real and has been studied by psychologists for decades. Such an experience of an abrupt onset of understanding can easily be misinterpreted as a demonstration that understanding is a kind of experience (as if suddenly learning you were allergic to peanuts would show that allergies are a kind of feeling), and it has led some thinkers to insist that there can be no genuine comprehension without consciousness (Searle [1992] is the most influential). Then, if you were to think that it is obvious that consciousness, whatever it is, sunders the universe in two—everything is either conscious or not conscious; consciousness does not admit of degrees—it would stand to reason that comprehension, real comprehension, is enjoyed only by conscious beings. Robots understand nothing, carrots understand nothing, bacteria understand nothing, oysters, well, we don’t know yet—it all depends on whether oysters are conscious; if not, then their competences, admirable though they are, are competences utterly without comprehension.

I recommend we discard this way of thinking. This well-nigh magical concept of comprehension has no utility, no application in the real world. But the distinction between comprehension and incomprehension is still important, and we can salvage it by adopting the well-tested Darwinian perspective of gradualism: comprehension comes in degrees. At one extreme we have the bacterium’s sorta comprehension of the quorum-sensing signals it responds to (Miller and Bassler 2001) and the computer’s sorta comprehension of the “ADD” instruction. At the other extreme we have Jane Austen’s comprehension of the interplay of personal and social forces in the emotional states of people and Einstein’s comprehension of relativity. But even at the highest levels of competence, comprehension is never absolute. There are always ungrasped implications and unrecognized presuppositions in any mind’s mastery of a concept or topic. All comprehension is sorta comprehension from some perspective. I once gave a talk at Fermi Lab in Illinois to a few hundred of the world’s best physicists and confessed that I only sorta understood Einstein’s famous formula:

E = mc²

I can do the simple algebraic reformulations, and say what each term refers to, and explain (roughly) what is important about this discovery, but I’m sure any wily physicist could easily expose my incomprehension of some aspects of it. (We professors are good at uncovering the mere sorta understanding of our students via examinations.) I then asked how many in the audience understood it. All hands went up, of course, but one person jumped up and shouted “No, no! We theoretical physicists are the ones who understand it; the experimentalists only think they do!” He had a point. Where understanding is concerned, we all depend on something like a division of labor: we count on experts to have deep, “complete” understanding of difficult concepts we rely on every day, only half-comprehendingly. This is, in fact, as we shall see, one of the key contributions of language to our species’ intelligence: the capacity to transmit, faithfully, information we only sorta understand!

We human beings are the champion comprehenders on the planet, and when we try to understand other species, we tend to model their comprehension on our experience, imaginatively filling animals’ heads with wise reflections as if the animals were strangely shaped people in fur coats. The Beatrix Potter syndrome, as I have called it, is not restricted to children’s literature, though I think every culture on earth has folk tales and nursery stories about talking, thinking animals. We do it because, to a first approximation, it works. The intentional stance works whether the rationales it adduces are free floating or explicitly represented in the minds of the agents we are predicting. When a son learns from his father how to figure out what their quarry is attending to and how to foil its vigilance, both are treating the animal as a wise fellow thinker in a battle of wits. But the success of the intentional stance does not depend on this being a faithful representation of what is going on in the animal’s mind except to the extent that whatever is going on in the animal’s brain has the competence to detect and respond appropriately to the information in the environment.

The intentional stance gives “the specs” for a mind and leaves the implementation for later. This is particularly clear in the case of a chess-playing computer. “Make me a chess program that not only knows the rules and keeps track of all the pieces but also notices opportunities, recognizes gambits, expects its opponent to make intelligent moves, values the pieces soundly, and looks out for traps. How you accomplish that is your problem.” We adopt the same noncommittal strategy when dealing with a human chess player. In the midst of a chess match we rarely have hunches about—or bother trying to guess—the detailed thinking of our opponent; we expect her to see what’s there to be seen, to notice the important implications of whatever changes, and to have good ways of formulating responses to the moves we choose. We idealize everybody’s thinking, and even our own access to reasons, blithely attributing phantom bouts of clever reasoning to ourselves after the fact. We tend to see what we chose to do (a chess move, a purchase, parrying a blow) as having been just the right move at the right time, and we have no difficulty explaining to ourselves and others how we figured it out in advance, but when we do this we may often be snatching a free-floating rationale out of thin air and pasting it, retrospectively, into our subjective experience. Asked, “Why did you do that?,” the most honest thing to say is often “I don’t know; it just came to me,” but we often succumb to the temptation to engage in whig history, not settling for how come but going for a what for.22
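
In code, the stance looks like an interface with the implementation deliberately left open (a sketch of my own; the class and method names are invented):

```python
from abc import ABC, abstractmethod

class ChessAgent(ABC):
    """The intentional stance as "the specs": what any competent player
    must notice and do, with the 'how' deliberately unspecified."""

    @abstractmethod
    def notice_opportunities(self, position): ...

    @abstractmethod
    def recognize_gambit(self, recent_moves): ...

    @abstractmethod
    def value_piece(self, piece) -> float: ...

    @abstractmethod
    def choose_move(self, position): ...

# Brute-force search, a neural network, or a human brain could each
# realize this interface; the specs constrain them all equally.
```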

When we turn to the task of modeling the competences out of which comprehension is composed, we can distinguish four grades, schematically characterized by successive applications of the tactic known in computer science as “generate and test.” In the first, lowest grade we find Darwinian creatures, with their competences predesigned and fixed, created by the R&D of evolution by natural selection. They are born “knowing” all they will ever “know”; they are gifted but not learners. Each generation generates variations, which are then tested against Nature, with the winners copied more often in the next round. Next come the Skinnerian creatures, who have, in addition to their hard-wired dispositions, the key disposition to adjust their behavior in reaction to “reinforcement”; they more or less randomly generate new behaviors to test in the world; those that get reinforced (with positive reward or by the removal of an aversive stimulus—pain or hunger, for instance) are more likely to recur in similar circumstances in the future. Those variants born with the unfortunate disposition to mislabel positive and negative stimuli, fleeing the good stuff and going for the bad stuff, soon eliminate themselves, leaving no progeny. This is “operant conditioning,” and B. F. Skinner, the arch-behaviorist, noted its echo of Darwinian evolution, with the generation and testing occurring in the individual during its lifetime but requiring no more comprehension (mentalism, fie!) than natural selection itself. The capacity to improve one’s design by operant conditioning is clearly a fitness-enhancing trait under many circumstances, but also a risky one, since the organism must blindly try out its options in the cruel world (as blindly as evolution does) and may succumb before it learns anything.
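
A minimal Skinnerian creature in code (my sketch; the three behaviors and the one-line “world” are invented): behaviors are generated more or less at random, and reinforcement merely reweights them.

```python
import random

random.seed(0)
dispositions = {"peck": 1.0, "scratch": 1.0, "freeze": 1.0}   # initially unbiased

def world(action: str) -> float:
    return 1.0 if action == "peck" else 0.0   # only pecking yields food here

for _ in range(500):
    acts = list(dispositions)
    action = random.choices(acts, [dispositions[a] for a in acts])[0]  # generate
    reward = world(action)                                             # test
    dispositions[action] += 0.1 * reward        # reinforce whatever paid off

print({a: round(w, 1) for a, w in dispositions.items()})
# "peck" comes to dominate, yet nothing in the creature represents *why*
# pecking is a good idea; the testing happened in the cruel world itself.
```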

Better still is the next grade, the Popperian creatures, who extract information about the cruel world and keep it handy, so they can use it to pretest hypothetical behaviors offline, letting “their hypotheses die in their stead,” as the philosopher of science Karl Popper once put it. Eventually they must act in the real world, but their first choice is not random, having won the generate-and-test competition in trial runs against their internal model of the environment. Finally, there are the Gregorian creatures, named in honor of Richard Gregory, the psychologist who emphasized the role of thinking tools in providing thinkers with what he called “potential intelligence.” The Gregorian creature’s Umwelt is well stocked with thinking tools, both abstract and concrete: arithmetic and democracy and double-blind studies, and microscopes, maps, and computers. A bird in a cage may see as many words every day (on newspaper lining the cage floor) as a human being does, but the words are not thinking tools in the bird’s Umwelt.
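
And a minimal Popperian creature (same caveats; the options and their modeled payoffs are invented): candidate actions are tested against a stored model of the world before any of them is tried in earnest.

```python
world_model = {                    # stored beliefs about likely outcomes
    "cross_open_ground": -10.0,    # predicted: exposed to predators
    "skirt_the_hedge":     3.0,    # predicted: slower, but safe passage
    "wait":                0.0,
}

def choose(options) -> str:
    # Offline generate-and-test: simulate each option, act on the winner.
    return max(options, key=lambda act: world_model[act])

print(choose(world_model))   # -> "skirt_the_hedge"
# The hypothesis "cross_open_ground" dies in the model's trial run; the
# creature never has to test it with its own body.
```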

The merely Darwinian creature is “hard-wired,” the beneficiary of clever designs it has no need to understand. We can expose its cluelessness by confronting it with novel variations on the conditions it has been designed by evolution to handle: it learns nothing and flounders helplessly. The Skinnerian creature starts out with some “plasticity,” some optionality in a repertoire of behaviors that is incompletely designed at birth; it learns by trial-and-error forays in the world and is hard-wired to favor the forays that have “reinforcing” outcomes. It doesn’t have to understand why it now prefers these tried-and-true behaviors when it does; it is the beneficiary of this simple design-improvement ratchet, its own portable Darwinian selection process. The Popperian creature looks before it leaps, testing candidates for action against information about the world it has stored in its brain somehow. This looks more like comprehension because the selective process is both information-sensitive and forward-looking, but the Popperian creature need not understand how or why it engages in this pretesting. The “habit” of “creating forward models” of the world and using them to make decisions and modulate behavior is a fine habit to have, whether or not you understand it. Unless you were a remarkably self-reflective child, you “automatically” engaged in Popperian lookahead and reaped some of its benefits long before you noticed you were doing it. Only with the Gregorian creature do we find the deliberate introduction and use of thinking tools, systematic exploration of possible solutions to problems, and attempts at higher-order control of mental searches. Only we human beings are Gregorian creatures, apparently.

Here is where the hot button of human exceptionalism gets pushed, with fierce disagreements between romantics and killjoys (see chapter 1) about how much comprehension is exhibited by which species or which individual animals. The prevailing but still tentative conclusion these days among researchers in animal intelligence is that the smartest animals are not “just” Skinnerian creatures but Popperian creatures, capable of figuring out some of the clever things they have been observed to do. Corvids (crows, ravens, and their close kin), dolphins and other cetaceans, and primates (apes and monkeys) are the most impressive wild animals so far investigated, with dogs, cats, and parrots leading the pet parade. They engage in exploratory behavior, for instance, getting the lay of the land, and often making landmarks to ease the burden on their memories, stocking their heads with handy local information. They need not know that this is the rationale for their behavior, but they benefit from it by reducing uncertainty, extending their powers of anticipation (“look before you leap” is the free-floating maxim of their design), and thereby improving their competences. The fact that they don’t understand the grounds of their own understanding is no barrier to calling it understanding, since we humans are often in the same ignorant state about how we manage to figure out novel things, and that is the very hallmark of understanding: the capacity to apply our lessons to new materials, new topics.

Some animals, like us, have something like an inner workshop in which they can engage in do-it-yourself understanding of the prefabricated designs with which they were born. This idea, that the individual organism has a portable design-improvement facility that is more powerful than brute trial-and-error-and-take-your-lumps, is, I submit, the core of our folk understanding of understanding. It doesn’t depend on any assumptions about conscious experience, although that is a familiar decoration, an ideological amplification, of the basic concept. We are slowly shedding the habit of thinking that way, thanks in part to Freud’s championing of unconscious motivations and other psychological states, and thanks also to cognitive science’s detailed modeling of unconscious processes of perceptual inference, memory search, language comprehension, and much else. An unconscious mind is no longer seen as a “contradiction in terms”; it’s the conscious minds that apparently raise all the problems. The puzzle today is “what is consciousness for (if anything)?” if unconscious processes are fully competent to perform all the cognitive operations of perception and control.

To summarize, animals, plants, and even microorganisms are equipped with competences that permit them to deal appropriately with the affordances of their environments. There are free-floating rationales for all these competences, but the organisms need not appreciate or comprehend them to benefit from them, nor do they need to be conscious of them. In animals with more complex behaviors, the degree of versatility and variability exhibited can justify attributing a sort of behavioral comprehension to them so long as we don’t make the mistake of thinking of comprehension as some sort of stand-alone talent, a source of competence rather than a manifestation of competence.

In part II, we zero in on the evolution of us Gregorian creatures, the reflective users of thinking tools. This development is a giant leap of cognitive competence, putting the human species in a unique niche, but like all evolutionary processes it must be composed of a series of unforeseen and unintended steps, with “full” comprehension a latecomer, not leading the way until very recently.

18There is much controversy about using the term “optimize” when referring to the “good enough” products of natural selection. The process of natural selection cannot “consider all things” and is always in the midst of redesign, so it is not guaranteed to find the optimal solution to any specific design problem posed, but it does amazingly well, typically better than intelligent human designers who are striving for optimal design.

19Legendary software designer Charles Simonyi, the principal creator of Microsoft Word, has devoted more than twenty years to the task of creating what he calls “Intentional Software,” which would ideally solve this problem or a valuable subset of these problems. The fact that several decades of high-quality work by a team of software engineers has not yet yielded a product says a lot about the difficulty of the problem.

20Evolution explores the “adjacent possible”; see Kauffman (2003).

21How can I speak of evolution, which famously has no foresight, being able or unable to predict anything? We can cash out this handy use of the intentional stance applied to evolution itself by saying, less memorably and instructively, that highly variable environments have no information about future environments for natural selection to (mindlessly) exploit (see chapter 6).

22The useful term Whig history refers to interpreting history as a story of progress, typically justifying the chain of events leading to the interpreter’s privileged vantage point. For applications of the term to adaptationism in evolutionary biology, both favorably and unfavorably, see Cronin (1992) and Griffiths (1995).