“No-Brainers”; or, Reason in Nature
An Ordered Whole
Logic might be, metaphorically, an octopus, as Cicero said. But the octopus, literally, is no logician, even if, significantly, between 2008 and 2010 an octopus named Paul was hailed as possessing the power to divine the outcomes of football matches. Widespread public openness to cephalopod intelligence helped to create the appearance that something more than simple divination was occurring—divination of the sort one might attempt by reading the future from the motions of an ant or a goldfish—something more like a true prophetic intelligence. But of course few would confess to being truly convinced by the appearance, as reason, on the most widespread view today, belongs to human beings alone (and even human beings cannot predict the outcome of unfixed future sporting matches). Everything else in nature, in turn, from bears and sharks to cyanobacteria, rain clouds, and comets, is a great force of unreason, a primordial, violent chaos that allows us to exist within it, for a while, always subject to its arbitrary whims.
This view sets us up, as human beings, starkly against, or at least outside of, nature. And this is the view that has been held by the majority of philosophers throughout history. Most of them have understood this outsider status to be a result of our possession of some sort of nonnatural essence that makes us what we are, such as an immortal soul, endowed from a transcendent source and ultimately unsusceptible to erosion, corrosion, and other natural effects. For philosophers of a more naturalistic bent, who have dominated philosophy only in the most recent era, human reason is not ontologically distinct from vision or echolocation or any of the other powers evolution has come up with, enabling different kinds of organism to move through the world. It is part of something vastly larger—namely, nature, and all the evolved adaptations that it permits favoring the survival of organisms by myriad pathways—but that vastly larger thing itself still has no share in reason.
This feature of the currently prevalent, naturalistic understanding of reason—namely, that it is found within the human being exclusively, even if it is just as natural as echolocation or photosynthesis—is more indebted to the Cartesian tradition than is usually acknowledged. Descartes grounded his human exceptionalism in dualism, taking the soul as something nonnatural and ontologically discontinuous with the human body, which for its part was on the same side of the great ontological divide as animal bodies, oceans, volcanoes, and stars. But naturalism has been effective at finding ways to preserve human exceptionalism while at the same time collapsing the ontological divide posited by dualism. The most prevalent view today is that reason is something uniquely human, which we deploy in a world that is variously conceived as either nonrational or positively irrational. In this, modern thought sharply departs from certain basic presuppositions of the ancient world. On the most common ancient understanding of the human being as the rational animal, it was taken for granted that human beings were sharing in something, reason, that did not simply exist immanently within them, but rather had its own independent existence. Human beings were, among animals, the only ones that possessed reason as a mental faculty that they could bring to bear in their choices and actions, but this did not mean that the rest of nature had no share of reason at all. Rather, the world itself was a rationally ordered whole: it was permeated by, was characterized by, was an expression of, reason.
It is true that in the history of analytic philosophy we find a prominent view that is fairly similar to the ancient one. Thus in Gottlob Frege and the early Wittgenstein, the structure of facts in the world is the same as the structure of propositions in human-generated arguments: the real and the intelligible are one. In more recent years John McDowell has pushed an even bolder account of the identity of mind and world, to the point that some critics have accused him—as if it were prima facie evident that this is a bad thing—of absolute idealism.1 But for the most part the presumption has been that, as Gassendi put it in the seventeenth century, logic is the art of ordering our thoughts, and not the force that makes the world itself an ordered whole rather than a dark chaos.
The widespread ancient sense of rationality is perhaps what also lies behind the curious expression in contemporary American English, in which we describe a decision that is particularly easy to make as a “no-brainer.” The implication here is that one could take the prescribed path even if one did not have a brain—the organ standing in here metonymically for its function—simply in virtue of the fact that its rightness is inscribed in the order of things. Not having a brain, or any consciousness at all, yet doing the correct thing anyway, this peculiar phrase reminds us, might be the ultimate expression of reason.2
This is the vision of the world, and of humanity’s place in it, imparted in the Australian poet Les Murray’s lines:
Everything except language
knows the meaning of existence.
Trees, planets, rivers, time
know nothing else. They express it
moment by moment as the universe.3
The world itself is, on this view, what bears meaning. Our own language, and our efforts to portray the world in it, far from being what is meaningful, are only feeble and inadequate echoes of this world, cutting us off from it. Language does not connect us to the world; still less does it make us the world’s masters. This is also something like the metaphysical and cosmological vision, if we dare to call it that, at the heart of the Gospels. When John writes, “In the beginning was the Word,” he is describing the condition of the world independently of human reflection on it, and the term he finds to best characterize that condition is logos. In St. Jerome’s Latin Bible this term would be rendered not as ratio, but rather as verbum, not as “reason,” but as “word.” But the historical and conceptual link to reason is clear. The world is an ordered whole, with each part where it should be, thanks to logos. This logos has often been assimilated to Christ, or seen as the abstract conceptual principle underlying the concrete natural world, whose human counterpart, or embodiment, is Christ. Conceptualizing Christ in this way was central to the early articulation of a philosophically rich Christian theology, one that made it palatable to the Greeks. Thus, for example, Origen, the third-century CE church father writing in Alexandria, articulated an account of Christ as that being whose soul is most perfectly assimilated to the logos, and in turn took the logos to be nothing other than the rational order of nature. As Carlos Fraenkel remarks, someone who thinks in this way “will hardly concede that the doctrine at the heart of Christianity is not accessible to reason.”4
These associations might seem too specific to Christian theology to be of much use for our understanding of the history of the concept of reason. But we might also understand them, from the opposite direction, as the result of an effort among Christian thinkers to render philosophical concepts so as to anchor them in the holy texts, and thus, perhaps, give them safe passage in a civilization increasingly narrowly devoted to its scripture as the exclusive authority in human life. In philosophical schools from antiquity to the modern period, Christ is conceptualized abstractly as the principle pervading the world and making it an ordered whole. Such a view is particularly prominent in early modern rationalism. In the seventeenth century, the Jewish philosopher Baruch Spinoza explicitly states his sympathy for a version of Christianity in which Christ is rendered abstractly in just this way, as the rational principle of the world. In his Ethics, published posthumously in 1677, as well as in his Theological-Political Treatise of 1670, Spinoza identifies the “Spirit of Christ” as nothing other than “the idea of God.”5 On this alone, he explains in the later work, “it depends that man should be free, and desire for other men the good he desires for himself.” Here Christianity, into which centuries earlier Origen had worked to incorporate philosophy, has now itself been converted into the articulation of a bold rationalist egalitarianism. The Cambridge Platonist philosopher Anne Conway, a near contemporary of Spinoza, gives a similar philosophical interpretation of Christ—she comes to this view via the burgeoning Quaker movement, which had syncretistically incorporated many influences from Jewish Kabbalistic thought. In seventeenth-century Europe, Judaism and Christianity were in some spots converging to produce new articulations of reason grounded in ancient traditions of faith.
For Leibniz—who was born in Leipzig as a Lutheran but would arrive by the end of his life at a maximally liberal variety of nondenominational Christianity—the persons of the Trinity for the most part do not figure into the treatment of the world’s rationality. This rationality for him seems rather to be inspired in no small measure by the ancient Stoic vision of the cosmos as a harmony, in which everything “breathes together” or “conspires.” But the Stoic vision and the Christian vision, in turn, are both variations on a more general idea: that reason is not just “in our heads,” and if it is in there, this is only because the human mind, with its faculty of reason, is a reflection of the rational order of the world.
To hold that the world is itself rational is generally to hold that it is composed in a rational way, that all of its parts make up an ordered and unified whole—that it, to cite the Greek term that translates the Hebrew Bible’s “formless and empty,” is not a “chaos” (“Now the earth was formless and empty, darkness was over the surface of the deep, and the Spirit of God was hovering over the waters” [Genesis 1:2]). On the most common understanding of divine creation in the Abrahamic religious traditions, God does not make the world out of nothing, but rather imposes order on something that is already there, namely, that which had previously been formless and empty, “chaotic.” It is only when the order is there that it deserves to be called a “world” at all, or, alternatively, a “cosmos.” Both of these terms are connected etymologically with the ideas of decoration and adornment. The latter term shares a common ancestor with “cosmetics”: what you apply to your face in the hope of transforming it from a chaotic mess into an ordered whole.
Thinking of reason as the principle of order in a composite whole brings us closer to the other primary meaning of the Latin term ratio. It is not just the equivalent of our “reason,” from which we also get “rationality”; “ratio,” in the mathematical sense, is the relation between the numerator and denominator of a fraction. The two senses are not as far apart as they might at first seem. Think, for example, of the traditional study of musical harmonies. In Pythagorean tuning there are “pure” and “impure” ratios defining the intervals between notes. Knowing what the ratios are is what enables us to play a scale on a musical instrument, picking out notes that agree with one another and that sound pleasant together. This pleasant sound is a sensual sign of the world being constituted out of ratios, or, to put it another way, of the world’s rationality (bracketing, for now, problems such as the diagonal of the square that caused Hippasus such grief).
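The arithmetic behind this can be made concrete. In a minimal sketch (the function name and the choice of intervals are illustrative, not drawn from any tuning treatise), stacking “pure” 3:2 fifths and folding each result back into a single octave yields the Pythagorean diatonic scale, while twelve stacked fifths fail to close the circle exactly, overshooting seven octaves by the small surplus known as the Pythagorean comma:

```python
from fractions import Fraction

def stack_fifths(n):
    """Ratio reached by stacking n pure 3:2 fifths, folded into one octave [1, 2)."""
    r = Fraction(3, 2) ** n
    while r >= 2:
        r /= 2
    while r < 1:
        r *= 2
    return r

# Five fifths up and one down from the tonic give the Pythagorean major scale.
scale = sorted(stack_fifths(n) for n in range(-1, 6))
print([str(r) for r in scale])
# → ['1', '9/8', '81/64', '4/3', '3/2', '27/16', '243/128']

# Twelve pure fifths exceed seven octaves by the Pythagorean comma,
# a first hint that the ratios cannot order everything without remainder.
comma = Fraction(3, 2) ** 12 / Fraction(2) ** 7
print(comma)  # → 531441/524288, slightly greater than 1
```

The exact fractions make the point visible: every step of the scale is a ratio of whole numbers, yet the system never quite closes on itself, an echo, in miniature, of Hippasus’s trouble with the diagonal.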
The Stoic philosophers, such as Epictetus, would hold that the well-orderedness of the cosmos qualifies it as a living being in the literal sense, with its own organic body like an animal or plant. This view would be rejected by the majority of subsequent philosophers, in part because it seemed to threaten to bring with it the corollary view that God is the soul whose body is the world. It would be the worst sort of heresy, in the Abrahamic traditions, to conceive of God as immanent in the world rather than as transcendent. Spinoza, famously, courted precisely that heresy with the view that “God” and “nature” afford two different ways of saying exactly the same thing. The Stoics for their part had not held this view, but instead defended the doctrine of a “world soul,” distinct from any transcendent creator God.
Some, such as Leibniz, who rejected the view that the world is ordered by a unifying soul principle, would nonetheless take the world to be in itself rational in the sense that its order is a reflection of the rationality of its creator. The world is for him rational, not in that it consciously makes inferences or carries out proofs, but rather in the more limited sense that it reflects, in the totality of individuals that exist within it and in the degrees of perfection that these individuals realize, the existence of reason. Where does this reason exist, if not as a faculty of the rationally ordered world? Most often it is attributed to the creator God, outside of the world, of whose reason the world is a sort of mirror or testimony. All of nature would thus be rational somewhat as a pocket watch with intricate interworking parts is rational. It is not itself reflecting on the concept of time, but it is nonetheless reflecting this concept, as a sort of monument to, and congelation of, some external agent’s rational mastery of it.
Brute Beasts
Natural beings are, we might say, on the account articulated in the previous section, embodiments of reason, but not possessors of reason. They are themselves no-brainers, in the sense just described. This view, in turn, offers a new insight into the question whether there are any beings in nature besides humans who may be considered rational. Excluding angels, which are arguably not natural (though this has also been much debated in the history of philosophy), the nonhuman candidates for rationality that have most frequently presented themselves are animals, often denoted by the somewhat more derogatory name of “beasts” (Latin, bestia), to which is frequently added the pejorative “brute” (Latin, bruta).
The majority view has been that human beings are rational, and animals are not, because human beings are capable of entertaining propositions, and making inferences based on them. Animal cognition has typically been held to be based on knowledge of concrete individual things alone, rather than of the universals under which these individual things are subsumed. Such a degree of cognition has generally been held to be something of which a being endowed only with a sensitive soul, as opposed to a rational soul, is capable. It is difficult to know how exactly such cognition might work, and philosophers have long debated whether the theory is even coherent. A dog is supposed to be able to recognize its master, but not to be able to subsume its master under the universal concept of “human.” What is involved in the recognition, then? Is there no awareness that the master is in some respects more like other human beings, including even strangers, than he is like, say, a cat? And how can this awareness occur if it does not involve some sort of mastery of the universal concept in question?
In more recent times, following the demise of belief in animal souls (a belief that had previously been a matter of the straightforward meaning of the term involved: an animal is just that which is endowed with an anima, the Latin word for “soul”), we have tended to account for animal cognition in terms of “instinct” and “stimulus”—though it is worth asking how much of this new account is really new, and how much it simply involves updated vocabulary that preserves a much more deeply rooted theory. Somehow we have managed, from the era of animal souls to the era of animal instincts, to adhere to some sort of hierarchy of higher and lower, with humans and nonhuman animals occupying exactly the same positions as before.
Consider, again, the octopus. In recent years, this animal has been elevated from its previous lowly rank of self-destructive autocannibal to being, as the media reports have tended to put it, the alien among us, our invertebrate equals, the minds in the sea, and so on. In his remarkable 2016 book Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness,6 a work based on both rigorous philosophical inquiry and deep knowledge of the relevant empirical facts, Peter Godfrey-Smith explores the cephalopod’s evolution of mind in a process quite distinct from our own evolution. We would have to go back six hundred million years to find a common ancestor, and what we see in the octopus today is a system for instantiating mind quite unlike our own. The neurons responsible for what we take to be conscious activity in it, for one thing, are distributed throughout its entire body, not least throughout the arms that Plutarch believed he had observed being devoured by their owners (he may in fact have been observing the detached male hectocotylus, an adapted arm with genital power, that remains lodged in the female after mating).7 Octopus intelligence, distributed as it is throughout the body, might well serve as an emblem of what we have called the “no-brainer,” though now in a very literal sense: reason realized in nature otherwise than through the activity of an outsized neocortex.
The octopus, Godfrey-Smith suggests, “lives outside the usual body/brain divide.”8 Because it has evolved along such a different path from us, and its mental capacities seem to be organized so differently from ours, it seems misguided to seek to assess its intelligence according to a scale that sets up humans as the standard. None of the evidence adduced in Other Minds for the octopus’s curiosity and resourcefulness suggests, however, that in its mental life the octopus is, say, racked by doubt, that it is ever blocked in the course of its actions by deliberation, or that it worries about what lies beyond the sphere of its own knowledge; that it wonders about the ontology of negative numbers, or the social construction of gender. It also evidently lacks episodic memory of the sort that would enable it to reminisce about its own early life, as well as lacking the mental projective capacity to reflect on its future or to worry about its finitude. Insofar as it lives in an eternal present, there is no compelling argument for attributing any great moral status to the octopus, any inherent right to life or to not being a prized commodity in Mediterranean cuisine. To this extent, even in one of the most forceful cases for the richness of another animal’s mental life, the basic divide between human beings and all the others is kept in place.
The octopus, we presume, makes fairly clever choices, but does so without deliberating, hesitating, or doubting. In this respect we are generally prepared to attribute to it only a semblance of rationality, for true rationality, many have thought, requires the power to entertain alternatives and to decide between them. There has, however, been a minority view throughout history, which has held that animals, precisely to the extent that they do not make inferences, and therefore do not deliberate, are for this very reason not only rational, but still more rational than human beings. This is the view expressed in Girolamo Rorario’s sixteenth-century treatise Quod animalia bruta ratione utantur melius homine, that is, That Brute Animals Make Better Use of Reason than Men.9 Animals do not deliberate; they simply cut, as the saying goes, right to the chase. They act, rather than thinking about acting, and they are never, ever, wrong. This is not to say that they are never foiled in their actions, that they never turn the wrong way in fleeing from a predator only to find themselves cornered. It is simply to say that their actions are, so to speak, one with the flow of natural events in the world, unhesitating.
It is human hesitation, deliberation, reasoning things through before acting, that has often been thought to cause a sort of ontological rupture between us and nature. Many philosophers have held this rupture up as what makes us distinctive and special, and as what makes us the only beings in nature that are not entirely of nature. But it has always been a great problem that it is precisely this rupture—which is, on the one hand, the thing that makes us relatively impressive among natural beings, the thing that connects us to the angels—that is also, on the other hand, the very thing that cuts us off from nature; it makes our movement through the natural world often feel more like groping and grasping than like real mastery. For some philosophers, such as Descartes, this problem is expressed in terms of the proneness of our faculty of reason to make mistakes. It is a flawed faculty because our will is infinite, while our understanding is finite.10 If we were simply not to will ourselves to draw conclusions about what our understanding does not yet know, then we would never be in error. Brute beasts, to the extent that they have no free will, for the same reason are incapable of error.
Many later philosophers would in turn come to see this power not as something that might give us a sense of security or power in relation to the external world, but indeed as the source of a deep unease. The twentieth-century French existential philosopher Jean-Paul Sartre described the human being, or the “for-itself” character of human existence, as a “hole of being at the heart of being.”11 Hardly a comforting thought. Animals, by contrast, have generally been seen as beings of nature in the fullest sense, not as holes in being but rather as, so to speak, that which fills being up. This does not mean that they are less than us, but rather that they do not share in our peculiar existential plight, of being immersed in nature while also set apart from it: set apart, that is, by our deliberations and our long, labored processes of decision making within the space of application of our reason. According to Heidegger, animals are “poor-in-world”: you look at a squirrel, and you can be confident there is just not that much going on in there. But being poor-in-world means that animals are more solidly of the world than we are, not cut off from it, but moving through it in a way that they do not themselves experience as problematic or complicated.
Rorario’s work, while composed in the first half of the sixteenth century, would become far better known only with its several reeditions beginning in 1642, and would inspire an extensive and influential article, entitled “Rorarius,” in the French freethinker Pierre Bayle’s Historical and Critical Dictionary, first published in 1697.12 Bayle’s article, ostensibly on his Italian predecessor, reads like a strange experiment, the work of an early modern David Foster Wallace, as roughly 90 percent of it consists of long footnotes in which he gives himself over to free reflections on the problem of animal souls. He notes that Rorario’s own work is hardly a philosophical treatise, but more a compendium of “singular facts on the industry of animals and the malice of men.”13 The Italian author, who at the time of writing was serving as the papal nuncio to the king of Hungary, had believed that if these singular facts were simply acknowledged, it would be impossible to deny to animals the use of reason; the only remaining criterion for distinguishing human beings from animals would be not on the basis of reason, but rather on the basis of free will.
These two capacities have generally been run together: to be able to make free choices, to do this rather than that, presupposes the power to deliberate about the options, to make inferences, right or wrong, from known facts, as to the best course of action. Vice versa, to be able to deliberate has been supposed to involve the power to take one course of action rather than another as a result of this deliberation. But there is nothing essential about this connection. Rorario seems to think that it is enough for reason to be manifested in a creature’s actions, whether these actions be freely chosen or no, in order for that creature to be deemed rational. Bayle for his part asserts that the facts cataloged by Rorario should “be an embarrassment both to the sectaries of Descartes and to those of Aristotle.”14 As Dennis Des Chene has noted, this is a strange thing for Bayle to say.15 After all, the Scholastic philosophers, followers of Aristotle in the late Middle Ages and into the early modern period, had a perfectly coherent way, or so they thought (and so, evidently, does Des Chene think) of accounting for what appears to be learning and judgment in animals. For the sixteenth-century Scholastic philosopher Francisco Suárez, when the sheep flees the wolf, it is exercising a certain vis aestimativa or estimative power, which does not involve reason, indeed “does not exceed the grade of the sensitive [powers].”16 The sheep is able to recognize the wolf as a wolf, and as an enemy, but without having to subsume the wolf under any universal concepts. As Des Chene explains, it does not place the wolf “under a concept of badness, it simply recognizes the wolf as bad.”17
Again, whether such conceptless recognition is possible has been the source of long debate in the history of philosophy, and the debate continues today, though in updated terms, in discussions of animal cognition. Descartes would for his part find a way of accounting for the sheep’s ability to perceive the wolf and to flee it without appealing to any cognitive function in the sheep at all. He and the Scholastics were certainly aware of the remarkable industry of animals. But then as now, a priori commitments about what sort of being an animal is generally prove powerful enough to account for anything an animal is shown to be able to do. If you are committed on such a priori grounds to the view that no nonhuman is capable of higher cognitive function, you will always be able to account for any complex behavior in animals without having to revise your views.
What much of this discussion seems to miss, however, is that the attribution of reason to animals might not require any proof of higher cognitive function in them at all, for it may be that their “industry” itself is rational—just as the pocket watch is rational as a congelation of the reason that structures the world, as a “no-brainer.” We may extend this conception of rationality far further than animals. Emanuele Coccia argues, in a recent book calling upon philosophers to take plants seriously, that “it was not necessary to wait for the appearance of human beings, nor of the higher animals, for the technical force of shaping matter to become an individual faculty.”18 He asserts that there is a “cerebrality” innate in the vegetal seed, as “the operations of which the seed is capable cannot be explained except by presupposing that it is equipped with a form of knowing, a program for action, a pattern that does not exist in the manner of consciousness, but that permits it to accomplish everything it does without error.”19
This might seem like loose analogy, or equivocation on the meaning of “knowing,” and it might seem question begging to assume at the outset that whatever possesses a “program” must therefore possess knowledge. However, if we are in fact prepared to go with Rorario’s conception of animal reason, as grounded in what animals do rather than in what they think, then there can in fact be no reason to withhold reason from those natural beings, such as plants, that have seldom been suspected of having any knowing, in the sense of cognition, at all. This is the understanding of knowing or thinking that is also at work in Eduardo Kohn’s recent, bold account of “how forests think.”20 For him, setting out from a theory of signs elaborated by the American pragmatist philosopher C. S. Peirce in the nineteenth century, any system, such as a rain forest, may be interpreted as a system of signs, quite apart from the question whether any individual vectors within this system are beings capable of interpreting these signs. There are, Kohn argues, nonrepresentational signs too; once we acknowledge this, we are able to see thought spread abundantly throughout nature, rather than being limited to only a few “higher” creatures with particularly big brains.
A similar sort of uncertainty as to whether reason is an internal state of a thinking being, or rather the external execution of the right motions by a being that may or may not be thinking at all, seems to be the cause of an equivocation at the heart of much, if not most, discussion of the specter of artificial intelligence—particularly among people in the technology industry and in tech journalism with little patience for philosophical distinctions. Are the machines going to “surpass” us, as many AI commentators often say, at the moment they start doing, better than we do, things we now consider to be fundamentally human? Or are they going to surpass us when they begin to consciously deliberate about what they are doing, and when they develop the power to do one thing rather than another for no other reason than that that is their entirely arbitrary whim? Is it enough that they accomplish what they do without error? Is this too a form of knowing? If it is good enough for plants, then why should it not be good enough for machines?
The fact that these questions have never really been worked out to general satisfaction, for animals, plants, machines, extraterrestrials, and the physical universe, reveals that we really do not know what intelligence is, and so we cannot possibly know what we are looking for when we are seeking to identify instances of artificial intelligence or nonhuman intelligence. The same problems plague the discussion of reason. A sober assessment of the way the term “rationality” is in fact used would lead us to conclude, with Hartry Field, that it functions as little more than “an approval-term.”21 There is no settled fact of the matter as to what rationality is, whether it is something that can characterize unthinking natural or artificial systems, or whether it is the thinking itself, with which only a few beings are endowed, and in virtue of which these few beings stand to some extent outside of these systems. There is, however, an important asymmetry between rationality and irrationality here: what Rorario and Coccia emphasize about animals and plants is, in effect, that they always get things right. The absence of deliberation means that they do what they do without error—even if, again, sometimes there are other powers or beings in nature preventing them from arriving at their natural end. Even if rationality extends beyond human beings, or beings with higher cognition and the capacity for abstract representation, irrationality still seems to be limited to the narrower case of beings that have higher cognition and that fail to deploy their abstract representations in the correct way, beings that, as holes in being at the heart of being, just keep screwing up.
For Rorario, animals are more rational than human beings because, lacking higher cognition, they can only be rational. Higher cognition gives us, on this line of thinking, not rationality, but only irrationality. This might be cause to despair, but it is also an interesting reversal of a familiar old formula. On this new inverted account, rationality is widely distributed, and all too common. What makes us human beings unique is our irrationality. We are the irrational animal.
An Imperfect Superpower
The prevailing view in philosophy remains the opposite of the one defended by Rorario. It holds, rather, that we are the rational animal. This definition is occasionally lengthened to include other apparently universal properties—thus, for example, the late-antique Spanish polymath Isidore of Seville writes that a human being “is an animal, rational, mortal, land-dwelling, bipedal, capable of laughter”22—but rationality and animality are the only members of this list that have consistently made the cut. This remains the case even though the notion of rationality is no longer one in which human beings share in some otherworldly or transcendent reality from which other beings are excluded. Rather, rationality is generally understood today as an adaptive trait common to all and only human beings, and comparable to any other trait we might find throughout nature, even if its origins and function are harder to account for, and even if it remains a great mystery why it is not more widespread in nature.
In their 2017 book The Enigma of Reason,23 which represents, at the time of this writing, a cutting-edge synthesis and novel interpretation of experimental and theoretical research on rationality, Hugo Mercier and Dan Sperber portray human rationality as comparable to echolocation in bats: a sort of superpower, which must have emerged as a result of selective pressures, but which is also perplexingly rare in the animal world. Mercier and Sperber, like nearly all researchers in their field, are thoroughgoing naturalists, but also human exceptionalists. They are trying to find a satisfying naturalistic account of what it is that makes human beings so special; they take it as more or less settled that human beings are special, and that human reason is not simply our own inflection of something that is spread much more widely throughout nature.
Reason is for them a special kind of inferential ability, which is acquired over the course of our early lives, rather than being instinctive. It is, further, something of which we are conscious when we are deploying it; it involves intuition, rather than being, as many other thinkers have supposed, a faculty distinct from intuition. Reason is, for them, a variety of intuition that involves the representation not of things and events, but rather the representation of representations. In other words, it is intuition about abstract ideas, “a mechanism for intuitive inferences about one kind of representations, namely, reasons.”24 Mercier and Sperber take reason to be an “enigma” in a double sense: both because it is such a rare and exceptional superpower, and also, more relevantly for our purposes, because it is evidently so severely flawed, so apt to lead us astray. We continually find ourselves in situations in which we disagree with our fellow human beings as to what qualifies as a rational conclusion with respect to logical and social questions. And we are plagued by rampant confirmation bias: the systematic error of noticing, preferring, and selecting new information that reinforces what we already believe. Mercier and Sperber cite Descartes’s attempted explanation of how such flaws are possible: “The diversity of our opinions,” Descartes writes, “arises not from the fact that some of us are more reasonable than others, but solely that we have different ways of directing our thoughts, and do not take into account the same things … The greatest minds are capable of the greatest vices as well as the greatest virtues.”25
Mercier and Sperber protest that this does not provide a solution to the enigma of our flawed reason but simply restates it. Their answer to the enigma proceeds from their naturalism, where reason is in the end a “modest module,” existing alongside other intuitive inference modules, and selected in the course of human evolution in view of the work it does for us in producing and evaluating “justifications and arguments in dialogue with others.”26 This is what they call the “interactionist” approach, which they contrast with the “intellectualist” approach, according to which the function of reason “is to reach better beliefs and make better decisions on one’s own.”27 On this latter, more traditional view, reason is expected to deliver to us the truth, and so we find it problematic when it fails to do so. On the new, interactionist approach, by contrast, reason is simply an adaptation that helps us, to some extent, in our interactions with others. It made no promise to be a deliverer of truth, and so if we are disappointed in it for leading us astray, our disappointment is misplaced. Mercier and Sperber locate the reason for reason’s imperfection in nature. Reason is imperfect because it is a product of natural evolution, which simply does the best it can with available materials within given environmental conditions. This account conforms well with the experience many of us have of our own faculty of reason, as akin to the experience of pain in the spinal column that holds up our bad backs: an arrangement that is doing the best it can, but that seems always on the verge of giving out.
One may view online ample footage of pitiable tourists, unable to continue along glass walkways over deep canyons. Some of them collapse, cling to the side, and moan with terror. It does no good to tell them that the glass building materials are structurally as sound as steel, and that the simple fact that we cannot see through steel, as we can see through glass, changes nothing as to the actual danger involved.
I myself am terrified of flying. This is not a terror of the unknown: I do it all the time, and each time I am thrust anew into indescribable dread. I feel forsaken, left to the cruel and indifferent whim of the sublime forces of the sky, where no human being was ever meant to penetrate. The last thing I wish to hear when I confess this very personal fact about myself is a recitation of statistics concerning the relative safety of air travel. Trust me, I want to reply, I know the statistics. I have memorized them. For any airline, I can tell you the year, the place, the causes, and the number of fatalities of all of its major disasters. I know that this all adds up to a small fraction of the comparable fatalities in car accidents, but it makes no difference. I am not afraid of cars; I am afraid of airplanes. I assume this has something to do with the continuity of highway travel with the sort of experiences my hominid ancestors may have had. Simply running down a hill or floating in rapids is experientially somewhat like riding in a car, while none of these is anything like flying several miles above the earth, over the ocean, over the clouds.
Nor do I enjoy discovering bats flying around my home, as happened some years ago, even if I know that they occupy an important ecological niche, that no blood-sucking bat species inhabit my continent, and so forth. Here, too, I feel as though there has been a change of subject, and as though I’m being scolded or lectured at—batsplained, as it were—as if my displeasure had something to do with a lack of education or awareness of the relevant scientific facts.
One more example of this sort suggests itself, though one removed from my own personal experience. I have largely overcome ethnocentrism and xenophobia in my own life, having worked toward a vision of the kind of life I decided I wanted to lead, early on, in which otherness was held to be high-status and desirable, rather than low-status and in need of avoidance. And yet it is fairly clear that this approach to human social reality functions as an inversion of the normal approach to diversity throughout human history and across human cultures, which has been, by default, based on at least an implicit presumption of one’s own group’s superiority. Research has shown, in fact, that the folk science of human difference tends to involve an implicit essentialism about the differences between groups, that is, a folk theory of difference that is expressed in the modern world as racism.28
We may presume that this has at least something to do with the fact that interacting with strangers really does involve risks that interaction with familiars does not. A certain wariness of other groups makes good sense—when it is, for example, perfectly likely that they are intending to raid your cattle under cover of night. It should not be at all surprising to find that this wariness has been underlain throughout human history by a propensity to essentialize, or to take as not merely superficial, but rather as deeply and irreversibly real, the differences that divide one human cultural group from another.
Racism is bad, in part because it is false. It is scientifically ungrounded: the differences we perceive as essential and salient are always ultimately trivial. Yet it is also, from the long perspective of human evolutionary history, perfectly rational. As Edouard Machery and Luc Faucher have written, sometimes bad folk science can be good epistemology: the way we divide the world up makes good sense for many purposes, even if science can make no use of it once we begin to articulate the underlying principles.29 This fact, in turn, can easily make the deployment of scientific information in argument against racist ideology seem futile, in much the same way that it is futile to tell me about airline safety statistics or the harmlessness of insectivorous bats. In all of these cases we are dealing with phobias, and it is only the most inadequate understanding of how phobias work that would take them to be curable by a supplement of information.
As we will consider in detail further on in this book, there is an additional problem, in the case of race if not in the case of airline safety. This problem arises from the concerted effort among many who suffer from the relevant phobia—that is, racists—to lapse into severe confirmation bias and to conjure up alternative information of their own. As a result, not only does correct information sent their way about the science of human diversity not have the desired effect, but it is shot down, before it can be processed, by the various half-truths and errors that the racist has weaponized in his defense. This effort on the racist’s part is roughly comparable to the unlikely scenario in which an aviophobe constructs for herself an alternative set of facts, a novel interpretation of the available statistics, say, in which travel by air turns out, for those who know the “real” facts, to be far more dangerous than travel by car.
Why does such an effort seem unlikely? Why is it that aviophobes usually just own up to the fact that they are “being irrational,” whereas racists build up such a thick carapace of protective pseudofacts? I may be underestimating my fellow aviophobes, or have not yet discovered that particular corner of the internet, but it seems likely that the difference lies in the fact that we suffer our way through in-flight turbulence alone, deeply alone, whereas racists turn their suffering, at the thought of the equal existence of others who do not appear to be like them, into joy and solidarity within a community of people who do so appear. This is easier to do with some phobias than with others, for reasons having to do with the nature of the phenomena or things or people triggering them. All of the phobias we have considered here seem to be, like back pain in vertebrates only recently converted to bipedalism, a consequence of the fact that reason is an evolved faculty, which does the best it can under real-world, and perpetually changing, circumstances. We can massage these phobias, and organize society so as to minimize their damage, but they are not going to go away.
Small Pain Points
Or are we just giving up too easily? Might there be some way to improve our thinking so as to truly overcome fear of flying, fear of bats, fear of ethnic others, fear of glass-bottom bridges? Perhaps the most prominent para-academic community of rationalists is the internet-based group known as LessWrong, founded in 2009 as an online forum and blog by Eliezer Yudkowsky. This group is devoted to applying Bayes’s theorem, borrowed from probability theory, in their own daily lives, in order to make decisions conducive to greater happiness and thriving. Its members are focused on studying how cognitive biases influence our unexamined reasoning processes, and thence on how to eliminate them. LessWrong is not a group of logicians in a narrow sense, but if we understand “logic” in the broad Gassendian sense of the art of ordering our thoughts, then it would include most of the core interests of LessWrong’s members. Yudkowsky has spelled out many of his theoretical views on these topics and on artificial intelligence, not in a doctoral dissertation or in academic or even popular nonfiction books, but in Harry Potter fan fiction, which he posts online at the LessWrong website. LessWrong is linked to the Center for Applied Rationality (CFAR), and to the Machine Intelligence Research Institute (MIRI), both based in Berkeley, and both thoroughly immersed in the world of Silicon Valley libertarianism.30
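For readers unfamiliar with it, the theorem the LessWrong community takes as its lodestar can be stated in a line. In its standard textbook form (given here as background, not as a quotation from Yudkowsky or the group), it relates the probability of a hypothesis H after observing evidence E to one’s prior degree of belief and the likelihood of the evidence:

```latex
P(H \mid E) \;=\; \frac{P(E \mid H)\, P(H)}{P(E)},
\qquad\text{where}\qquad
P(E) \;=\; P(E \mid H)\,P(H) \;+\; P(E \mid \neg H)\,P(\neg H).
```

Applied to everyday decision making, the idea is to treat one’s current degree of belief as the prior P(H) and to revise it in proportion to how much more probable the new evidence is if the hypothesis is true than if it is false, rather than, say, simply seizing on evidence that confirms what one already believes.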
Tellingly, Yudkowsky is hailed, like many in this subculture, as a high school dropout, whose intelligence must therefore be something not inculcated by a methodical tradition of pedagogy, but rather a sort of innate spark of something called “genius” (to which we will turn our attention in chapter 4). Such a life course is valued throughout Silicon Valley, and not just at LessWrong. Thus a fellowship program run by Peter Thiel—the billionaire founder of PayPal and (at least initially) a supporter of Trump’s presidency, who as early as 2009 declared that he no longer believed in the compatibility of freedom and democracy31—offers $100,000 to selected young people who are willing to drop out of university to pursue a project of invention or innovation. Another project funded by Thiel, Imitatio, was “conceived as a force to press forward the consequences of [the French theorist] René Girard’s remarkable insights into human behavior and culture.”32 Imitatio’s website tells us of its executive director Jimmy Kaltreider that he is “Principal at Thiel Capital,” and, moreover, that he “studied History at Stanford, where he almost graduated.”33 Kaltreider’s bio perhaps reveals an ambivalence that is less evident among the more radical techno-capitalists: there is no better-credentialed academic than René Girard. He received his prerequisite diplomas—a first one in medieval history from the prestigious École des Chartes in 1947, and then a PhD in history from Indiana University—and went on to a comfortable teaching and writing career at Stanford. 
Kaltreider is caught in between Girard’s mandarin legitimacy and Thiel’s maverick outsiderhood: wishing to promote the significant body of work of a canonical academic thinker, he also wishes to share in the free spirit that funders like Thiel are aggressively promoting, part of which involves the idea that educational institutions, with their slowly accrued curricula and traditions, with their hoops to be jumped through, are nothing but an impediment to the full expression of individual genius. So while Yudkowsky dropped out of high school, Kaltreider “almost graduated” from Stanford. The age at which one jumps ship is also a measure of the depth, the hardcoreness, of commitment to the romantic ideal of the go-it-alone great man.
CFAR offers rationality workshops at which one can, for several thousand dollars, learn such things as “the science behind the body’s stress reactions, and skills to make it easier to ask experts for knowledge, clients for business, or investors for capital.”34 The Thiel Foundation has contributed over 1.5 million dollars to MIRI. We will return to the political dimensions of recent Silicon Valley ideology later on. What is important here is to note a curious development within the LessWrong community that seems to confirm the worries of Cicero, Gassendi, and so many others throughout history. In April 2017 the LessWrong community planned an event called, for reasons unclear to outsiders, a “Hufflepuff Unconference,” which seems to have been provoked by the realization that “many people in rationality communities feel lonely,” and that “there are lots of small pain points in the community.”35 The organizers determined it would therefore be necessary for the members of the community to get together to talk about their feelings. “The emotional vibe of the community,” it was explained, “is preventing people from feeling happy and connected, and a swath of skillsets that are essential for group intelligence and ambition to flourish are undersupplied.”36
One might wonder, particularly as an outsider: wasn’t this project of the emendation of the faculty of reason supposed to be the ticket to happiness? Wasn’t this, mutatis mutandis, the promise made by the Stoics and by Spinoza, that if you will just set about ordering your thoughts in the right way, and making the right inferences from what you know, then your thoughts will be harmonized with reality and you will therefore protect yourself from the disappointments that arise from disharmony with it? Is it not this disharmony that leads to the dominance of reason by the passions? The emphasis in the LessWrong announcement is on the fact that troubles arise when we move from individual rationality to group intelligence. (Hell is other people, Sartre wrote.)37 But long ago the Stoics, at least, posited that any individual is capable, by his own means, of ordering his thoughts in such a way as to be unperturbed by the various ways others disappoint and undermine us.
What has gone wrong? There seems to be an acknowledgment in the announcement for the Unconference that the negative emotional vibe is not simply incidental to the core activity of the group, but is somehow being generated by this activity. And one wonders, here, whether we are not seeing the limits of autodidacticism, of a go-it-alone approach whose pitfalls might have been avoided if Plutarch and Cicero had occupied a somewhat more prominent role in this movement’s canon, and Harry Potter a somewhat less prominent one. To attempt to work through these great problems, of rationality and happiness, to attempt to ameliorate self and society, without attention to history, is irrational if anything is.