CHAPTER NINE

The Impossible Syllogism; or, Death

“In the long run we are all dead”

Already in chapter 1, we called into question the idea that rationality involves determinations made by individual actors with the aim of improving their own individual lot: the idea that it is rational, for example, to seek one’s own long-term economic well-being and good health. This has been the default model of rationality in most research in economics and is the cornerstone of so-called rational-choice theory. While John Maynard Keynes did observe that “in the long run we are all dead,”1 thus acknowledging that there is a limit to how far into the future we might project our hope or expectation for improving our own individual plight, for the most part the economic model of the individual actor has tended to envision him or her (generally him) as being of infinite duration, as not standing before the horizon of his own finitude.

Although in the most recent epoch many Anglophone philosophers have taken their cue from economists in these matters, other philosophers, since antiquity, have generally been more prepared to see mortality as a fundamental condition or horizon of human existence, and thus as of the utmost importance for understanding what a human being is. If we were not mortal, many philosophers have thought, we would not be human at all. A human being is the mortal rational animal—which would perhaps be the most common formulation of the definition of the species, were it not for the fact that it is redundant, since all animals, being composed of fragile organic bodies that must eventually come apart, are by definition mortal. To try to take stock of human rationality without considering the way human life is conditioned by death is to skirt the subject at hand. Philosophy, as Socrates understood, and Montaigne after him, is nothing other than a preparation for death.

Socrates’s original insight concerned not just death, but the aging that leads to it, and was grounded in the awareness that earthly acquisitions, distinctions, and attachments grow increasingly ridiculous as one ages. The measure of the ridiculousness is proportional to the propinquity of death. What is the pleasure of being a high-status old man, having medals pinned on you every other day, being decorated, as Perry Anderson said of the elderly Jürgen Habermas, like some sunken Brezhnevite general?2 Such honors can easily seem more like sad consolations in the face of decline, demonstrations of the paradoxical law of the human life span, according to which rewards come only at a point when it is either unhelpful or unseemly to accept them. When they are, on occasion, bestowed on the young, as sometimes happens in the music and entertainment industries, the kids seem not to know what to do with them. It is as if they have been handed chunks of uranium, and one worries for their futures. By contrast, an old person who has achieved any sort of wisdom cannot relish them; if she or he accepts them, this can only be with a keen sense of embarrassment. Who, then, benefits from accolades, if neither the young nor the old?

Surely in our striving to accomplish things, to make a name for ourselves in a field, we are naturally striving for the sort of achievement that culminates in recognition of some sort? Surely our own awareness of the horizon of death should not cause us to abandon all outwardly focused projects or maneuvering for social distinction as mere vanity? After all, if it were simply for the love of the thing itself, we could write novels and then simply leave them in a desk drawer, or we could philosophize in a cave rather than publishing books and giving lectures. But in that case we would not just have fewer awards pinned on us; others would also have fewer books in their lives, less to think about. How can embarrassment be the appropriate response to recognition for legitimate contributions?

Some have sought a way out of this dilemma through public refusal of their awards, such as the economist Thomas Piketty, who turned down membership in the French Legion of Honor in 2015,3 or the surviving members of the Sex Pistols, who in 2006 turned down membership in the Rock & Roll Hall of Fame in a suitably vulgar memo they dashed off to that distinguished academy.4 Yet it is clear enough that the occasion to make this public refusal is its own reward, and that those who get a chance to do it delight in it as much as others might delight in having someone even more elderly than they are fasten a pin to their chest. If Piketty and Johnny Rotten manage to make their public display of defiance without the embarrassment they would have faced at the official awards ceremony, this is perhaps only another form of self-deception, as their inversion of the awards ceremony through public refusal is itself no less a self-celebration and a self-inflation. Better what then? To be offered no awards at all? But that is the fate only of those who do consistently mediocre work, and surely that cannot be one’s goal at the outset.

The horizon of death, in any case, transforms what we consider desirable, and transforms the significance of transformation itself. There is, again, no human being who in his or her essence matches the abstract economic model of the rational agent, an agent generally taken to be ageless, or generically at the prime of life. Yet one must always consider the stage on life’s way at which an actor finds him- or herself. This is something that modern academic disciplines—whether philosophy, economics, or the various “me-search” departments that now occupy significant portions of the humanities wings of universities—have largely failed to do. We have been analyzing human diversity to no end over the past several decades, but the sort of diversity of human experience that emerges from the fact that we are all of different ages, that there are stages on life’s way, has been largely neglected.5 Many have fought for desegregation of country clubs and for integrated classrooms and equal opportunities for women in college athletics. But no one has even thought to insist that, say, an octogenarian should have the right to enroll in first grade. The case has been made (not without tremendous controversy) for a transracialism grounded in the same reasons as transgender identity, but almost no one has attempted to justify “transgenerationalism,” where, say, an elderly person who claims to feel “young at heart” begins to insist that others validate this inward sense of who she is by treating her in every way as if she were young. When a thirty-three-year-old woman was exposed as having falsified her identity to join a high school cheerleading team, the condemnation was swift and universal, and few people considered the jail sentence she received as too harsh.6 She was only fifteen years older than her oldest teammates, a span of time that can seem a mere blink of an eye, and yet that was enough to create the perception of a difference of essence: a high school student is a different species of creature from an adult. I know that now, in my midforties, I am not welcome in most nightclubs frequented by twenty-somethings, and I have no recourse at all to appeal this injustice. We can change, by free choice, the definition of marriage so as to be indifferent to the genders of the two members involved, but with a few narrow exceptions we are not able to change the definition of “adoption” to include a pairing in which the adoptive parent is younger than the adoptive child.

There is a tremendous disparity, in short, between the small amount of interest we pay to age as a factor in defining our diverse roles in society, on the one hand, and the vast amount we pay to gender, ethnicity, sexual orientation, and other vectors of identity on the other.7 Age is different from these others, but in a way that should make it more interesting, and not less: for the most part, though of course with many exceptions, a person will remain in the same gender identity, ethnic identity, and sexual orientation over the course of a life, while all of us, of necessity, pass through several different ages. There is some sort of solidarity in generations, and elderly people in many societies come together to form an organized political bloc, such as the American Association of Retired Persons in the United States. But retired persons were not born that way, and when they dream, they often find themselves inhabiting earlier phases of their lives: phases that continue to constitute their current identity.

Aging is strange, and singular, and like the death in which it culminates it constitutes a basic parameter of human existence. No model of human agency or rationality that neglects it will tell us much at all about either of these. It is because we are going to die, and because the horizon of death shapes our experience as we age, that we come to prefer what economic models tell us we cannot possibly prefer. We come to prefer, typically, some good other than our own: that of our children, for example, or of our community. Such a preference is most typical of those who have matured enough to understand that no matter how well things are going for them right now, this constant improvement in one’s own lot cannot possibly last forever. Given this basic limit, it comes to seem rational to many to stop looking to maximize their own good, and instead to figure out a way to make a graceful, or perhaps glorious, exit. Our paragon of rationality, Socrates, did just this when he spoke the truth about himself at his trial, and refused to make any persuasive case in his own defense, since “it is not hard, men, to escape death, but it is much harder to escape villainy. For it runs faster than death.”8 It would be villainous, Socrates thinks, to lament and supplicate, as if in so doing he, a seventy-year-old man, might thereby become immortal.

Was Socrates being irrational? My own father justified his smoking habit, until the very end, with this favorite line: “I don’t want another five years in my eighties. I want another five years in my thirties.” Was he being irrational? It is perhaps irrational to desire something that is strictly impossible, to go back in time, but this was not his point. His point was rather closer to that of Socrates: we are all going to die, and this brute fact inevitably conditions our choices, and influences what it is to be rational in a way that the simplest models of human agency, those that are most often deployed in economics and rational-choice theory, fail to comprehend.

Radical Choices

If there were one domain of culture especially devoted to enabling us to see our true plight, as mortal beings whom no rational calculus will save, we might expect it to be found in the therapy industry. There is of course a small tradition of existential psychotherapy that presumably attempts to take stock of our mortality as a central conditioner of our happiness; and there is a recent trend of “philosophical,” though decidedly nonexistential, therapy, which seems to draw from Socrates the valuable lesson that to philosophize is to prepare to die, but also seems to spend most of its energy on training clients to think critically about the choices available to them, and then to make the best choices.9 Most therapy, in fact, tends to presuppose that there is a right course of action, and that the therapist, in his or her expertise, is in a position to help you find it.

What we often experience in these interactions is in fact a sort of witness leading, where the patient faces an apparent dilemma, but has a clear implicit preference, rational or irrational, for one of its two horns, while the therapist simply helps the patient to come to grips with this preference. Thus if someone goes to a therapist with the clear conviction that her marriage is rotten to the core, but also expresses some vestigial love for her husband and some fear of taking the leap into a new, single life, then the therapist will likely seek to emphasize the importance in life of asserting yourself, of breaking off on your own to pursue your destiny, and so on. If the emphasis at the outset is the reverse, where the patient stresses how much she loves her husband and how troubled she is by some recent signs of crisis and of doubt as to the future of the marriage, the therapist will likely seek to emphasize the importance of sticking things out, of taking commitments seriously, and so on. By contrast, a person might at some point in her life encounter an objective dilemma, a situation in which there is no single right solution to her quandary as to how to proceed, but only radically free choice between two incomparable conceptions of the good. Such a case is generally beyond the therapist’s conception of her professional responsibilities.

And if an aggrieved member of a couple turns to her friends to spell out her grievances, we may predict in advance that they will find that the other member of the couple is indeed to blame, and that their distressed friend, who has just divulged her grievances to them, is a beacon of lucidity, rationality, and righteousness. Little does it matter that the same scenario might be playing out at the very same moment, elsewhere in town, with the other member of the couple and his friends. There is little room, in this case as in the case of the therapist, to explore the possibility that no right path exists, that there is no formula for correct action such that, if followed, happiness will ensue. Therapy, whether offered by professionals or by well-meaning friends, seldom entertains true existential dilemmas, in which the agent understands at the outset that whatever choice is made, it is not a choice that is going to be dictated by reason. It is a choice that might become right in the making of it and in one’s subsequent commitment to it, but it is not a choice that can be said to be right in any absolute or a priori way.

It is just this sort of dilemma that has been of central interest to philosophers such as Kierkegaard, who featured prominently in chapter 5, and who spent some years belaboring the question not of separation or of divorce, but rather of getting married in the first place, only, ultimately, to decide against it in favor of a more severe, ascetic form of life in which a spouse, in this case the long-suffering frøken Regine Olsen, could have no place.10 His life was, we may discern in retrospect, likely significantly shortened by this decision. Living alone, isolated, he had some good years of intense productivity and of ingenious insight into the human condition, and then he died at the age of forty-two. His work, today, may offer a sort of consolation, but not of the therapeutic variety. It will not tell you how to live, or reassure you that you have made the right decisions. There are no right decisions, but only radical choices. The radical choice you might end up making, in turn, might be one that leads to a swifter decline, or that disadvantages you in all sorts of ways that would make the choice appear decisively wrong by the standards laid out in rational-choice theory, or by your solicitous therapist. But this cannot be an argument against that radical choice, any more than wealth and fame in the wake of the opposite choice might be confirmation that it was in fact the right one. Kierkegaard removes himself, and the reader who is prepared to follow him, from this sort of calculus altogether.

In very recent years some Anglophone analytic philosophers have attempted to take on existential choices somewhat similar to those that Kierkegaard scrutinized so profoundly—choices that cannot be assessed in terms of some calculus of expected outcomes, but that are such that the good or the bad consequences of them are so foreign to one’s life at the time the choice is made as to make a before-and-after comparison of the different stages of life impossible. These are choices that send us down a path of what has been called “transformative experience.”11 It may be, however, that those who have contributed to this literature grant too much authority to the rational-choice approach they are ostensibly aiming to move beyond. The preferred example in this literature is the decision to have children, and it is generally taken for granted among these authors, as among other members of their social caste, that having children is a decision, rather than something imposed by circumstances or coercive family members.12 If I do have children, the reasoning goes, I will be so transformed by this experience that what is good for me-with-children simply cannot be placed in comparison with what is good for me unreproduced.

So, then, should I have children? Not surprisingly, analytic existentialism is also domesticated existentialism. Whereas Kierkegaard ultimately chose an ascetic form of life, of which he had already had significant firsthand experience at the time of his radical choice, it seems a foregone conclusion in the recent analytic scholarship on the topic of transformative choices that it is indeed right to make them—that we should fill our lives with a few children at least, as well as with a “partner” who takes equal responsibility in raising them, a generous selection of educational wooden toys, and so forth. As far as I know, all the parents who have contributed to this body of literature are delighted with the choice they made, and seem to want others in their community to know it.

Youth and Risk

Socrates, at the age of seventy, found his loyalty to truth, and to making the right inferences from what he knew, more worthy of his attachment than a few more years of life would have been. Around the same age my father found smoking worthy in the same way. Socrates’s commitment is rational, for how could a commitment to reason be anything other than rational? My father’s commitment seems less rational. But why? It is substantially the same as Socrates’s: a commitment to something other than one’s own continued self-preservation, in light of awareness of one’s own inevitable eventual demise. It seems in my father’s case that the object to which the mortal human attaches himself causes us to revert to the abstract economic model of rationality: what is rational is what maximizes one’s own self-interest, and there is no greater failure to do this than to hasten your own death by smoking. Why we revert in this way on some occasions and not others seems to depend on the way in which we value, or fail to value, the particular thing, or gesture, or ideal, in the name of which we choose to die—given, again, that we are going to die anyway. In truth most such ideals or life choices are plotted somewhere between truth and pleasure, between reason and hedonism.

One common exit strategy, often depicted heroically in stories and movies, is to die for the next generation. An older person is in fact expected, as a matter of protocol, to step forward as a volunteer for death, when volunteers are called for, as when, say, there are limited spots in the lifeboat and numerous young people with potentially bright futures upon the sinking ship. In wars the reverse typically occurs: young people are expected to die for old people. Many who do so attempt to convince themselves of the rationality of this arrangement, not because they would not otherwise have had long prosperous lives ahead of them, but because a long prosperous life is not worth living if it is at the expense of the glory, or even just the continued survival, of the nation. This is a romantic ideal, and one that is also at odds with the abstract economic model of rationality as maximization of self-interest.

Of course it might in fact be an expression of rationality to defend the homeland, if the alternative is brutal occupation. But war seldom, perhaps never, presents such a clear dichotomy. An individual soldier’s contribution to a war, particularly a distant foreign war, cannot be measured and shown to constitute the small effort that made the difference between winning and losing. Nor can it be shown to be the effort that made or preserved the proper conditions within the homeland for what would have been, had he or she returned from war, a long and prosperous life. Nonetheless, the romantic ideal that motivates self-sacrifice for the nation is often inadequately translated into the terms of a rational calculus: “If you value your freedom, thank a soldier.” This formulation cannot stand up to scrutiny, especially if it is understood to mean that any individual soldier is all that stands between me and unfreedom. But it is interesting that the translation is even attempted. In his dedicatory epistle in the Meditations to the faculty of theology at the Sorbonne, Descartes assured its members that he was only trying to rationally argue for the truth of religion for the sake of those who lack the ungrounded but infinite faith of theologians and other true believers. So too, it seems, the bumper stickers and T-shirts admonishing us to thank soldiers seek to articulate for the faithless a defense, in cost-benefit terms, of the martial form of life.

In Wolf Hall, Hilary Mantel’s popular fictional retelling of the life of Thomas Cromwell, the aged Cromwell looks back on his life as a young soldier—fighting, as most soldiers in the sixteenth century did, not for his own nation, but as a foreign mercenary—and reflects on what he thought about on the battlefield as he faced death. What a waste it would be, he had thought, to die now, when there is so much more to do. Youth, generally, is itself a long violent throe, and to be thrown from there into mortal violence, without the calm industriousness of later years, would seem far more pointless than never to have existed at all. Was the experience of war worth it, though, given that he did not die? Some might suppose—quite apart from any defense of the homeland, which, again, was not the reason Cromwell went to war, nor, surely, the reason why the majority of young men have found themselves on battlefields throughout history—that if one survives the risk, and is not too damaged by the trauma, one’s later character may be fortified by the years of soldierdom.

Youth, for deeply ingrained primatological reasons, seems to bear a special relationship to risk: driving too fast, having multiple sex partners, brawling, dueling, in general testing out the limits of one’s own freedom. When life is most valuable, and most full of potential, it is also made most precarious through such trials. Going to war may simply be a further addition to this list. All of these behaviors are irrational from the point of view of short-term individual self-interest. But they may appear different when we zoom out and take stock of the whole life at once, in which early risks help to give shape and meaning to a life that, if it had been foreshortened by any of them, could easily have turned out to be meaningless. If we zoom out even further, to the level of the species, and of its history, the reason of unreasonable risks at a certain stage of life comes even more clearly into focus, as a selective force.

But let us not move to that level, not here. What is important for now is the fact on which we all presumably agree, that some risk in life is compatible with the human good. If the goal in pursuit of which the risk is taken is central to one’s self-conception, then even if the risk is very high, one still might deem that it is worth it. Here we encounter widely varying ideas of what sort of risk one might rationally take, since we have such widely different goals constituting our individual self-conceptions. When in 2017 Alex Honnold climbed Yosemite’s El Capitan wall without ropes, the risk of death was indeed very high, but the prospect of a long life in which Honnold had not climbed El Capitan was too far from his conception of his own life to prevail against the risk.13 From the outside, for those attached to a form of life that requires no such great risk, Honnold’s decision could easily have seemed like the height of irrationality. For those whose life centrally involves seeing to the well-being of young Honnold himself, such as Honnold’s mother, the decision may seem hurtful and unconscionable (evidently he avoided telling her in advance about his plan to make the climb).14

There is no fixed standard in relation to which we may weigh the suitability of this sort of endeavor, as there is no fixed and uniform conception of what would or would not constitute a life worth living. And although Honnold’s incredible accomplishment tends to elicit praise and awe (now that he has succeeded at it, anyway), while other forms of youthful recklessness—that of the speeders, the brawlers, the mercenaries, the duelists—generally elicit condemnation, all may be plotted within a range of attempts, again, to feel out the limits of freedom. These attempts, too, make sense only in light of the fact that our freedom is bounded by mortality, and if Honnold risked only a bump on his head, like Wile E. Coyote whenever he falls from a cliff, his feat would be of little interest. The exercise of freedom always happens in the shadow of death.

Such feelings-out, in turn, are different—at least with respect to the self-understanding of those who make them—from the risks and self-sacrifice of the patriot, who goes to war not to edify himself or in pursuit of individual life goals, but because he believes his individual self-interest must be subordinated to the interests of the nation. But the patriot resembles the mercenary, or the cliff-climber, at least in that he prefers the high risk of imminent death over a likely long life that, had he not taken the risk that presents itself, would not be worth living. He does not seem to be much like the brawler or the speeder, however, who are testing out the limits of their freedom without much thought of any future, whether one worth living or not.

The relationship between freedom and rationality is complicated, and well beyond the scope of our interests in this chapter. It will suffice to note only that irrationality often involves, in part, a failure or a refusal to think of oneself objectively, and thus to think of one’s own plight as determined by the same forces that would govern others in a similar situation. The refusal here can sometimes be laudable, as when one rushes into battle knowing one has next to no chance of surviving, and is subsequently, and postmortem, deemed “brave.” What it takes to rush like that is a capacity to suspend any consideration of the objective probability of a desired outcome. Typically the assessment of such an action, by surviving onlookers, will be a matter of perspective. While some will say the soldier was brave, others will say he was foolhardy, that he gave up his life out of a misguided impulse, but would have done better to stay behind in the foxhole and wait for a moment, or for another day, when his fighting skills could have been more usefully deployed.

But those on both sides will agree on one thing: that what lifted the soldier out of the foxhole was not his faculty of reason, but rather something deeper, something we share with the animals, which the Greeks called thumos and which is sometimes translated as “spiritedness.” It is a faculty that moves the body without any need for deliberation. It is something like what propels us when we are driven by desire, when we dive into a mosh pit or into bed with someone we don’t quite trust. It is something to which we are more prone when we are drunk, or enraged, or enlivened by the solidarity and community of a chanting crowd.

These manifestations of irrationality, it should be clear, are, as the saying goes, beyond good and evil. Life would be unlivable if they were suppressed entirely. But to what precise extent should they be tolerated or, perhaps, encouraged? It will do no good to say flatly that they should be tolerated “in a reasonable balance” or “in moderation.” For the ideal of moderation is one that is derived from reason, and it is manifestly unfair to allow reason to determine what share it should itself have in human life in a competition between it and unreason. So if we can neither eliminate unreason, nor decide on a precise amount of it that will be ideal for human thriving, we will probably just have to accept that this will always remain a matter of contention, that human beings will always be failing or declining to act on the basis of rational calculation of expected outcomes, and that onlookers, critics, and gossipers will always disagree as to whether their actions are worthy of blame or praise.

The speeder and the duelist and the others seem guilty of no failure to correctly infer from what they already know, in order to make decisions that maximize their own interests. Rather, in these cases there is a rejection of the conception according to which a life must maximize one’s own individual long-term interests in order to be worth living. This rejection may be based on the belief, right or wrong, that one simply has no long-term interests, or none that would justify avoiding current risks or self-destructive behavior. It might be irrational to proceed in this way, but as Socrates already showed us, it is also irrational to pretend you are going to live forever. Between this latter form of irrationality, at the one extreme, and impulsive, reckless self-destruction at the other, there is an infinitely vast gray area, with infinitely many possible courses of action that may be deemed rational by one person and irrational by another. All of this uncertainty, and this perpetual balance between two extremes of irrationality, are a direct implication of the fact that we are mortal, that we feel ourselves to be making all of our decisions, to quote Nabokov, between two eternities of darkness.15

The Impossible Syllogism

Irrationality is not, generally, simple ignorance. If you do not have the relevant information, then you cannot rightly be faulted for not making the correct inferences. Irrationality must rather be, then, some sort of failure to process in the best way information one does already have.

It is, however, difficult to say, often, whether a given failure results from innocently not knowing, or rather from a culpable failure to bring into play what one does know. What we might call “Kansas irrationality,” after Thomas Frank’s popular 2004 book What’s the Matter with Kansas?, is a case in point.16 If such a species of irrationality exists in the most strongly imputed form, then average Kansas voters would have to be knowingly making choices that subvert their own interests. Yet how often in the past few years have we heard the special pleading for voters of this sort, whether from Kansas, the Rust Belt, or any other stereotyped red landscape of Trump’s America, that they are not to be blamed, that they are simply the victims of a manipulative mass media, of a failed public-education system, but not themselves the agents? And if they do not know they are voting against their own interests, then how can they be held to be irrational?

Between the one extreme of action in total ignorance, and the other extreme of knowing what the best thing to do is while instead doing the opposite, there is an enormous middle ground, in which a person may well be in the paradoxical state of knowing without knowing, of knowing something “deep down” but refusing to acknowledge it. For example, a person may know deep down that single-payer health care is rationally preferable, at least according to a model of rationality as the maximization of one’s own interests. That same person may also know that she would be personally significantly disadvantaged by the loss of insurance, yet nonetheless she may shut out those considerations in order to defend the argument that such a health system would amount to a loss of freedom, would be tantamount to authoritarianism, or, worse, a plot of sinister global forces.

In the vernacular English I recall from my adolescent social circles, one often heard the accusation, leveled by one friend at another, “You ain’t trying to hear me!” The strange syntax reveals a complex state on the part of the accused, something like the species of irrationality I am attempting to describe. It is not that one does not know or hear, and not even that one is trying not to know or hear, but something subtly different from this latter possibility: one is not trying to know or hear. One is declining to do the necessary work, to pay the necessary attention. And making the right inferences does take work. The failure to do that work is sometimes both morally blameworthy and cognitively irrational at once.

There is a long tradition in philosophy, associated most closely with Socrates, which has it that all intellectual failure is moral failure, and vice versa: to act immorally is to act from an intellectually unsound judgment, and, conversely, to err is to have failed, in a morally blameworthy way, to seek out the knowledge that would have enabled one to avoid error.17 As Myles Burnyeat sums up, “always the greatest obstacle to intellectual and moral progress with Socrates is people’s unwillingness to confront their own ignorance.”18 This is not to say that someone who fails to answer a question correctly on Jeopardy should be punished, since, for one thing, memorization of trivia of that sort is too, well, trivial to really count as the work of the intellect at all. We fail in a moral-cum-intellectual way, rather, when we show ourselves unwilling to make the right inferences from what we know.

On a certain understanding of how political commitments are forged and maintained, moreover, we all always know enough, at least concerning the key issues open to public debate in our era, to make the right inferences. It is not that the rejection of the single-payer model by the Trump voter is a simple consequence of having failed to read this or that study by some insurance analyst showing that it would in fact be more economical for the individual citizen and more conducive to long-term health. It is not on the basis of particular facts of this sort that a person decides to reject single-payer health insurance. By the same token, there is also likely no new information that will convince him to change his mind. The rejection plays out at the level of group affiliation and hostility to out-group interests. It is irrational, but it is not ignorant.

Among the rationalizations for their opposition to what they see as socialized health care, Republican voters, when pressed on whether they themselves are adequately insured, have been known to deny that they personally need health care, since good health runs in their family. As mentioned already, others at Tea Party protests in the middle of the Obama era were heard to recommend alternative treatments, such as those practiced in Native American and ancient Chinese traditions, as a means of getting by without health insurance. These are bold statements. The second of them is not just antisocialist, but rigorously antimodern: it does not simply reject a socialized system of paying for medical care; it rejects medical care itself as it has come to be understood over the past several centuries. It implicitly denies that the supposed advances in scientific research in the modern period have really brought about any improvement in human health and well-being.

The former claim, that one has no need of health insurance in view of the fact that one is in any case healthy, is even bolder: it asserts a freedom that is not shaped by mortality, a total freedom that is not limited by the body and its eventual, inevitable breakdown. It fails to appreciate that “health” is not an essential property of any living body. No body, no family, is essentially healthy, any more than a day is essentially sunny or the sea is essentially calm. It can be glorious, in the moments in which we are relishing our good health, or the beautiful weather, to imagine that this is just how things are. But it is also infantile, for the mature apprehension of these goods is always permeated by an awareness of their fleeting character. The protester who insists she does not need health insurance, because she is essentially and permanently healthy, is, then, either stunted and delusional in regard to her true condition, or disingenuous. Or perhaps there is a third possibility: that these two states are not dichotomous, but rather represent the two ends of a spectrum with several points in between. The boundary between self-delusion and delusion of others, in other words, may not be perfectly clear. The protester might simply not want to have to face up to a hard truth, either in her speaking or in her thinking, that she is like others in respect of her bodily existence, and therefore is subverting her own interests in seeking to shut down a system that would provide her with health insurance.

Her protest, like that of other members of the Tea Party movement, is ostensibly in defense of freedom: freedom from a variety of forced collectivization. We have already, perhaps too swiftly, judged her denial of her own precarious condition as a variety of moral failure. Yet if it is motivated by a romantic impulse, of the same sort that sends soldiers off to war for the nation, rather than by simple delusion, then perhaps we will find ourselves compelled to take back that judgment. Delusion is the freedom exercised by those who believe falsely that they have broken away from the collectivity and have managed to defy the determinations into which we are born, as members of a particular social class, community, or biological species. The romantic who embraces death, by contrast, or a path toward quicker death, does so with a lucid understanding of what is at stake, but believes that throwing away her life for the sake of some attachment of community, or some vision of the good, is preferable to some other compromise that would extend her individual life but would also attach her to a collective form of life that she finds alien. The ugly iteration of this romantic vision is by now too familiar. It says, in essence, “How can I accept health insurance if it comes from a black president?” The “soft” iteration—soft in the same way that Herder’s nationalism was soft while Hitler’s was “hard”—says, “Why are suited bureaucrats far away trying to tell us how to live, to discourage us from eating fried foods and drinking corn syrup, taking away the things we love and the things that bring us together in love?”

Significantly, the protester’s demand for recognition of her individual freedom from the big government and the society it claims to hold together, a society that includes ethnic minorities and coastal elites with whom she does not identify, is at the same time a demand for recognition of the community with which she does identify, the struggling white working class, or however she may conceive it. The constraints that this community places on its members, in turn, with respect to speech, clothing, and comportment, are to a great extent incompatible with full individual freedom, and would seem particularly unworthy of an individual so exceptional as to not even be subject to bodily demise. The protester’s expression of her individual freedom is at the same time an expression of a desire to be absorbed into a community in which her individuality is dissolved altogether, as the soldier’s individuality is dissolved into his nation when he falls on the battlefield, or when he flies his fighter plane into an enemy battleship (perhaps reciting the romantic poetry of Hölderlin in his last seconds, as we know at least some Japanese kamikaze pilots did).19 So the Obamacare protester is a romantic after all: a delusional romantic.

But could it really happen that an adult human being should fail to grasp his own mortality? In his 1886 novella The Death of Ivan Ilyich, Lev Tolstoy depicts a bourgeois man who has led a thoroughly unexamined life, until in his prime he comes down with a fatal illness. On his deathbed he recalls the well-known syllogism he had been taught in school, which has it that all men are mortal; that Caius is a man; and, therefore, that Caius is mortal. Ivan Ilyich, a dying man, realizes that even as a child he had understood all the terms of the inference, and he had understood that the inference was valid, yet, somehow, he had failed to notice that he could substitute his own name for that of Caius, and that this could have prepared him, early on, for the misfortune that would eventually arrive:

In the depth of his heart he knew he was dying, but not only was he not accustomed to the thought, he simply did not and could not grasp it.

The syllogism he had learnt from Kiesewetter’s Logic: “Caius is a man, men are mortal, therefore Caius is mortal,” had always seemed to him correct as applied to Caius, but certainly not as applied to himself. That Caius—man in the abstract—was mortal, was perfectly correct, but he was not Caius, not an abstract man, but a creature quite, quite separate from all others. He had been little Vanya, with a mama and a papa, with Mitya and Volodya, with the toys, a coachman and a nurse, afterwards with Katenka and with all the joys, griefs, and delights of childhood, boyhood, and youth.20
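
The inference Ivan Ilyich declines to complete is, formally, a one-step affair. As a minimal illustrative sketch (not anything drawn from Tolstoy or from Kiesewetter’s Logic), the schoolbook syllogism can be rendered in the Lean proof language roughly as follows, where Person, Man, Mortal, and Caius are placeholder names:

```lean
-- A minimal sketch of the schoolbook syllogism: all men are mortal;
-- Caius is a man; therefore Caius is mortal.
-- `Person`, `Man`, `Mortal`, and `Caius` are illustrative placeholders.
variable (Person : Type) (Man Mortal : Person → Prop) (Caius : Person)

-- The entire inference is a single application of the universal premise.
example (allMenMortal : ∀ x, Man x → Mortal x) (caiusIsMan : Man Caius) :
    Mortal Caius :=
  allMenMortal Caius caiusIsMan
```

Nothing, of course, prevents the substitution of any other name for Caius; the whole pathos of Tolstoy’s passage lies in the fact that Ivan Ilyich, who can carry out the substitution perfectly well in the abstract, will not carry it out for himself.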

For most of his life, Ivan Ilyich did not know something he knew. He could not bring himself to make the proper inferences given the facts he already possessed in order, then, to lead the best sort of life possible: one that acknowledges death, is not afraid of it, and that frees one to construct one’s projects in recognition of it. Tolstoy had seen this failure as principally a trait of the bourgeoisie, of members of that class which, in nineteenth-century Russia, was supposed by the intellectuals to cling to a self-image forged from trivial things, from social niceties, from the selection of wallpaper motifs and similar distractions. In more recent years, cultural historians such as Peter Gay21 and economic historians such as Deirdre McCloskey22 have offered, in very different ways, compelling, even loving, accounts of the integrity and internal depth of the lives of the bourgeois classes that emerged across Europe in the eighteenth and nineteenth centuries. One does not have to agree with them on all points in order to see that perhaps Tolstoy is being a bit hard on his own Ivan Ilyich, and perhaps, moreover, that this character’s denial of death had to do not so much with his class affiliation as with, quite simply, a deeply human difficulty in coming to terms with our own impending individual nonexistence.

We have seen the denial of death, more recently, in members of the disenfranchised working class in America, or those who had fallen through the corroded bottom layer of what in the United States is called, not the bourgeoisie (for fear, presumably, of an awakening class consciousness among the proletariat), but rather the “middle class.” An inauthentic life, a life spent in self-delusion about death, is, then, evidently not one that is restricted to any particular social class. It seems intrinsically to be a result neither of privilege nor of desperation. It can remain a private failure, as seems to have been the case for Tolstoy’s protagonist, or it can become public and can affect, or infect, a political movement. Ivan Ilyich’s failure to carry out the syllogism seems to have harmed no one but himself. Those who wished during the Tea Party protests to be freed of their Obamacare, and who were willing to deprive others of it along with them, are considerably more threatening to the general social good, just as anti-vaxxers are to herd immunity. This species of irrationality is sometimes private, sometimes public, and the same irrational beliefs that in one context may quietly be taken to one’s grave without any social consequences, may in another social context be the seed from which a social climate of unreason and self-destructive politics grows.

Even now, as I write this, I am not certain I have fully grasped the force of the old syllogism about mortality. I too understand that all men are mortal, and I understand that Caius is a man. I understand what all this means … for Caius. I assume that Caius, moreover, is long dead. But what all this has to do with me, exactly, is something that I seem, somehow, to know and not know at once. If full rationality requires me to fully come to terms with my own coming death and to act in accordance with this fact, then I fear full rationality remains, for now, quite out of reach. I can write a book about, among other things, the irrationality of a life spent in refusal to acknowledge one’s own mortality. Yet I cannot acknowledge my own mortality, not fully. I cannot substitute my own name into the syllogism. In moments of great pride, often pride born of panic, I find myself especially prone to thinking about myself as immortal, and about my vain and trivial endeavors as all-important. I find myself alternating between this inflated attitude and its opposite; as William Butler Yeats wrote, “the day’s vanity” is “the night’s remorse.”23 I am not so different from the woman who denies she will be needing health insurance, and I am not so different from Ivan Ilyich.

Ivan Ilyich, having failed to think his way through to something more authentic, makes do with a vision of life constituted by bourgeois comforts and simple, ultimately meaningless pleasures. For Tolstoy, what a more authentic life would look like, though this vision lies beyond the scope of the novella itself, is one that is charged with spiritual depth and structured by a sort of pacifistic, nondenominational Christianity, of the sort Tolstoy himself practiced. At the end of the nineteenth century it was more common, however, to see, as the suitable form of life for those who have abandoned bourgeois complacency, not quiescent spirituality, but rather bold, transgressive action. Thus Charles Baudelaire evokes the thirst for “an oasis of horror in a desert of boredom.”24 Such a vision of a meaningful life would in turn inspire many in the twentieth century who saw in war and violence the only salvation from self-deluding complacency. Pankaj Mishra has portrayed in detail the case of Gabriele D’Annunzio, the Italian aristocrat, poet, and fighter pilot who briefly occupied the town of Fiume in 1919, proclaiming himself il Duce, a title that would of course later be associated with Benito Mussolini. “He invented the stiff-armed salute,” Mishra relates, “which the Nazis later adopted, and designed a black uniform with pirate skull and crossbones, among other things; he talked obsessively of martyrdom, sacrifice and death.”25

The turn toward violent transgression as a solution to the inadequacy of a life spent in small comforts and denial of death is generally understood as an expression of irrationalism. It is a romantic, counter-Enlightenment tendency, and has likely been a significant impediment to the construction of a just and equal global society over the course of the twentieth century. It remains one today, in the era of Trump and of recrudescent nationalism in Turkey, India, and elsewhere. And yet it gets at least something right, something that we fault Ivan Ilyich for failing to see, and that we, seemingly rightly, see in him as irrational. D’Annunzio looked death in the face, while Ivan Ilyich turned away in fear. D’Annunzio recognized the basic condition of human life. Is this not rational? Is this not what Socrates, the paragon of reason, also did when he accepted his own death sentence?

“Do not take others out with you” might be a fitting corollary to the imperative that we recognize death as the horizon of human life, and that we live our lives accordingly not in the worship of small comforts, but in preparation for life’s end. The Stoics for their part acknowledged that suicide might often be a rational and fitting decision, and that there is no absolute bad in it.26 But they were not so keen on defending murder. How exactly we get from the imperative to recognize our own mortality (as Tolstoy and Socrates argue we must) to the justification or even celebration of bringing about the premature mortality of others (as Baudelaire, D’Annunzio, and so many others have at least flirted with defending) is a difficult question. It is one that is at the heart of the problem of the relationship between irrationality as a shortcoming of an individual mental faculty, on the one hand, and as a political or social phenomenon of masses in motion on the other.

Tie Me Up

Irrationality, we have seen, is often inadequately treated as if it were merely an intellectual failure: a failure to make the right inferences from known facts. If this were all it is, it would not be terribly interesting. People would make inferential mistakes, and if they were to verbalize these mistakes, others would kindly correct them, and that would be that. Things get more complicated when we consider irrationality as a complex of judgment and action. In fact, when we turn our attention to action, we see that much of what is commonly deemed irrational is not based on incorrect inferences at all, is not based so much on a failure to know what we know, as it is on a failure to want what we want.

Many of the irrational things people do are in fact done with full, perspicuous knowledge of their irrationality. Smoking is the classic and most familiar example of this. How many smokers have we heard say, as they light up, that they really should not be doing what they are doing, that they in fact would prefer not to be doing it? This sort of irrationality is commonly called “akratic,” from the Greek akrasia, ordinarily translated as “weakness of will.” It does not involve an incorrect inference from known facts, but rather an action that in no way follows from the correct inference one has made.

We cannot simply assume at the outset that smoking is irrational. A smoker may have gone through a rigorous cost-benefit analysis and chosen to risk the future costs of smoking in order to derive the pleasure now—perhaps not only the pleasure of the nicotine in the bloodstream, but also the more abstract pleasure of being a smoker, of having a social identity as someone who lives for “the now,” or of someone who is, quite simply and undeniably, cool. Or he may be persuaded by the reflection on mortality that yields up a popular bit of folk wisdom: that we ought to “find the thing we love and let it kill us slowly.” This is the wisdom behind an old Soviet joke about smoking. In the USSR there was an ad campaign against smoking, denouncing it as “slow death.” A man looks at the ad, says to himself, “That’s all right, I’m in no big hurry,” and lights up. The joke, if it must be explained, is that the man reads the ad as advising against smoking only because it is an ineffective means of suicide. It takes too long, whereas if you really want to commit suicide you’ll take care of it swiftly. The man, for all his misunderstanding, seems to have a fairly rational disposition with regard to smoking and mortality: life, even in the shadow of death, is not so bad; one might as well do what one enjoys, as long as this does not hasten death too much.

Often, by contrast, a person relates to smoking, or some similar activity, very differently, as something she would very much like not to do, but that she does anyway. How, now, is such a predicament even possible? In a pair of groundbreaking books, the analytic Marxist and rational-choice theorist Jon Elster masterfully analyzed some of the central features of practical irrationality. In 1979’s Ulysses and the Sirens, he focused on the curious phenomenon whereby individual people freely choose their own constraints, on the expectation that they will in the future behave irrationally, in contradiction with what their present selves would want.27 They are like the hero of Homer’s tale, who arranges to be bound to the mast of his ship by his mates, in order not to give in to the temptation of the Sirens’ call. In the follow-up work, 1983’s Sour Grapes, Elster takes on Jean de La Fontaine’s rendering of the traditional fable about a fox who, finding that the most delicious grapes are out of reach, determines that he does not want them anyway, as they are “trop verts … et faits pour des goujats” (too green, … and suitable for suckers).28

Thus in the one work we are confronted with the problem of people who know how they would act in the absence of constraints, and so act preemptively so as not to act; in the other work, we encounter the problem of people who find themselves, already, under constraints, and accordingly modify their preferences to the point where they believe they would not act differently even if they were not under these constraints. This latter sort of behavior does not strike me as obviously irrational. It involves at least initial self-deceit, but only for the purpose of accommodating reality, and not of denying reality. At least one rationalist philosopher, Leibniz, would in fact hold that whatever reality has in store really is, by definition, the best, and moreover that reality cannot be otherwise. If we do not experience it as the best, this is only because we are unwisely unable to appreciate it from the perspective of the entire rational order of the world.

The grapes that are out of my reach might in fact be, in view of their intrinsic properties, the best, and it might in fact be dishonest to myself if I try to convince myself that they are not the best. But it does not follow that either the world, or my own individual life, would be better if I had the grapes; only children and stunted adults believe that life itself improves with the acquisition of sweet morsels and delightful toys. It is good to be able to do the sort of work on oneself that results in a perspective on life that recognizes the nonnecessity of the grapes to my thriving, not to mention to the goodness of the world. There might be a problem for moral philosophy about the resort to dishonesty toward oneself—that is, convincing oneself that the intrinsic properties of the out-of-reach grapes are not as good as they are—and it might in fact be better to come to a state of indifference toward the grapes by honest means. But it still does not seem to be irrationality that is in play here.

In the case of Ulysses, by contrast, we are dealing not with someone who strives to not want what he cannot have, but rather someone who arranges to not have what he wants, and moreover is initially in a position to have. How are we to understand this strange scenario? In fact, it is not at all hard to understand. The perfect fluidity of Homer’s ingenious tale requires, in fact, that we recognize what Ulysses is doing, and that we see ourselves in him: as when we, say, ask our friends in advance to hide our cigarettes on a night of anticipated heavy drinking. The difficulty in understanding, rather, sets in only when we begin to analyze what is happening, when we spell out explicitly the peculiar fact about human beings that they can both want and not want the same thing.

This condition has often been discussed in the literature as the opposition between first-order and second-order desires. My first-order desire is for a cigarette; my second-order desire is to lead a long, healthy life. Ulysses’s first-order desire is to rush forthwith toward the Sirens; his second-order desire is simply to live until tomorrow. One might argue that there is really no problem here, either, that me-at-present and me-in-the-future are sufficiently different that they can want different things, have different interests, have different courses of action that are good for them. There might be some perplexities here about the metaphysics of time and of personal identity over time, but nothing inherently irrational in the recognition of the differences between these two different people sharing the same memories and the same body (along, perhaps, with infinitely many other individuals, each enduring for only a tiny sliver of time). There might be practical problems that arise for political philosophy: for example, whether people should be allowed to make contracts with themselves, such that, if they fail to follow through with a plan or a course of conduct, they consent, now, to their future self being punished.29 But again, these problems do not seem to have to do preeminently with irrationality.

Just as Ivan Ilyich knows something he does not know, Ulysses wants something he does not want. Ivan Ilyich irrationally thinks that he is not going to die—or, more precisely, does not think that he is going to die—and this belief is irrational precisely because, in fact, he knows perfectly well he is going to die. He can perform the syllogism that begins “All men are mortal,” and replace the name of Caius with his own, but he declines to do so. Ulysses in turn knows that he wants what he does not want, and there is no sense in which he does not know that he has this bit of knowledge. He therefore takes the necessary steps to avoid getting this thing that he wants and does not want. On a certain understanding, his approach is consummately rational, even as the very coexistence within him of first- and second-order desires testifies to his irrational nature. His rationality is a matter of developing an effective means of managing his irrationality. Well done, Ulysses.

Cargo Cults

“Sour grapes,” as we have seen, is the phenomenon whereby we come to believe that what we are constrained to have is better than what we cannot have. La Fontaine’s fable, and Elster’s engagement with it, are in fact rather different from what we ordinarily understand by this phrase today. Someone who experiences “sour grapes,” in common parlance, is imagined as having a face puckered with acid resentment, as positively stewing at the thought that things might be better for others elsewhere. La Fontaine’s fox, by contrast, is at ease, in a state of what the Stoics called “ataraxia,” or equanimity, convinced, now, that things are best just where he himself is and nowhere else. The fox might indeed appear to be a paragon of rationality, in contrast with the person who lives according to that other interpretation of “sour grapes” that we have just considered: the one who believes, to invoke another folk saying, that the grass is always greener on the other side of the fence.

The oscillation between these two interpretations, in fact—between the belief that things are just fine as they are here at home, and the belief that we must expand our efforts ever further in order to bring sweeter fruit back to where we live—would seem to offer a fairly comprehensive summary of modern European history, and of the paired, and apparently opposite, motions of blood-and-soil nationalism, on the one hand, and imperialist expansion on the other.

The expansion of Europeans throughout the world since the beginning of the modern period, certainly, whether for commerce, war, or colonization, has seldom been constrained by the perception that this or that fruit is inaccessible. Indeed what we see is perhaps a different species of irrationality altogether: not one wherein existing desires are curtailed in view of limitations, but rather wherein limitations are overcome for the purpose of creating new desires. This is, in sum, the argument of many historians of exploration and trade, and notably of the anthropologist Sidney Mintz in his influential 1985 work Sweetness and Power: The Place of Sugar in Modern History.30 Early modern globalization was not, as we might imagine, driven by a dire need among Europeans to go out and find absolutely essential goods that had previously been in short supply. Rather, it was driven in no small part by a search for luxury goods: spices, silk, coffee, tobacco, sugar, and many other commodities Europeans naturally did not know they needed until they knew they existed.

The Romans did just fine without intensive production of cane sugar; honey and fruit-based ingredients were quite enough to sweeten their foods. But Europeans have been seeking out new desires since long before they had acquired any self-conception as Europeans. In the tenth century the pagan Scandinavians made deep inroads into eastern Europe, and ultimately into central Asia and the Middle East, in order to trade their furs and soapstone for exotic luxuries.31 In crafting the funeral masks of pharaohs, the ancient Egyptians used lapis lazuli, a precious stone that had to be brought all the way from Afghanistan.32 There is in fact substantial evidence of long-distance transmission of status-conferring luxury commodities as early as the Upper Paleolithic.33 For as long as we have been human, we have not been content to take just what we need from our immediate surroundings, but have traveled far, or relied on others who have traveled far, in search of things we did not know we needed until we found them. We do not, of course, need these things in the sense that we would perish, as individuals, without them. But human cultures seem to need things they do not need, and would likely perish, as cultures, without them.

It is human to need what you do not need, just as it is human, evidently, to know what you do not know, and to want what you do not want.

Culture, in this sense, we might say, is irrational. It depends for its existence on the symbolic value of hard-won commodities that it could perfectly well do without in a material sense. In the current era, this symbolic value is often embodied not by imported goods brought from far-off lands, for there are no such lands any more, but rather by commodities that are manufactured with the explicit purpose of being sold as luxury goods, and that often are luxury goods only to the extent that they are packaged and marketed as such. Many food items are deemed to be high status, and consequently are more highly priced, for reasons that have nothing to do with their relative scarcity. To cite a well-known example from the anthropologist Marshall Sahlins, if we were to price the cuts of beef based on their relative scarcity, we could expect the tongue, of which each animal yields only one, to be the most costly, whereas in fact in our culture it is deemed to be of little value and sold at a low cost.34 We might pretend that this valuation has only to do with the bare gustatory properties of eating tongue as opposed to eating some other more choice cut of meat, but in fact it has everything to do with culture, with the way we carve things up according to our internally meaningful but externally arbitrary standards.

There is of course usually at least something about luxury items that makes them somewhat better, more desirable, and therefore rightly more difficult to acquire: a Lamborghini is truly better than a Chrysler K-Car, from an engineering point of view, and truly more pleasurable to drive. But to appeal to the intrinsic properties of high-valued objects in a culture is almost always only to scratch the surface. Many people now believe that refined cane sugar offers the least desirable means of sweetening food. They are returning to honey and fruit sweeteners, to ingredients that had been available in the old world since antiquity. What, then, were those centuries of forced plantation labor for, the millions dead and displaced, the obesity and diabetes, the tooth decay? What made the sweet but insipid and rather juvenile taste of cane sugar seem, for so long, to be worth such an incalculable toll? Has this not been the height of irrationality?

Again, if we call it irrational, then we must level this accusation not only against modern Europeans, but against humanity, since it is in the end only a further development of what human beings have been doing all along. But to say that it is irrational for human cultures to value things that are not, strictly speaking, necessary for them, seems rather severe, as the only alternative is the sort of bare “animal” existence that takes care of immediate needs and nothing more, a form of existence we also routinely disparage as irrational. Sugar and spice and silks are not in themselves necessary, but the culturally embedded satisfaction they are able to give seems to be essentially human and ineliminable.

Our taste for cane sugar, or for agave-syrup sugar substitute, or for Caspian caviar or Andean quinoa, is all very much conditioned by global economic and historical forces that are generally quite beyond the scope of our immediate perception of these foods’ sensible properties. The failure to think beyond these immediate properties—to understand the commodities we consume as having a history, prior to our contact with them, that involved human labor and likely also human and also animal and environmental suffering—is a variety of irrationality that the Marxists call “false consciousness.” This is, again, a failure to know what one knows. On some accounts, such as that offered by the pathbreaking French Marxist sociologist Pierre Bourdieu in his 1979 Distinction: A Social Critique of the Judgment of Taste,35 perfect overcoming of false consciousness would involve the recognition that every preference we believe we have, as consumers of food, music, furniture, packaged vacations, and the like, is a pure expression of our class identity, while the intrinsic properties of these things, though they may be undeniably pleasant, are strictly irrelevant in the true and exhaustive account of why we seek them out.

Anthropologists have long been interested in what they have called "cargo cults."36 The term gets its name from a cultural phenomenon first observed in New Guinea during World War II, in which indigenous Melanesians constructed, using available natural materials, semblances of the cargo that Allied forces delivered to their troops stationed there. The indigenous people went so far as to build duplicate runways, and, on them, they constructed what looked like airplanes, but were in fact elaborate life-sized wood sculptures. These creations were not intended as decoys for any reasons having to do with wartime strategy, and in fact they served no practical purpose whatsoever. Or at least they served no purpose that outside observers could understand. They were, it was concluded, something like cult objects, symbols in a spontaneous new religious movement among "natives" who were thought to be naturally very impressed with the technological superiority of the Allied troops.

In fact it would not be out of line to suggest that the cargo cult is the general model of all culture. I was recently in a restaurant that had salt- and pepper-shakers in the image of anthropomorphic smartphones: they were at once smiling people, pocket telecommunication devices, and condiment dispensers. What is the logic of this? Presumably the novelty and cutting-edge quality of smartphones gave them, in the early to mid-2010s, a sort of cultural prestige that could then be borrowed in the production of a number of other cultural artifacts, including ones like salt- and pepper-shakers that have been around for centuries, simply by giving these artifacts a resemblance to their new sleek descendant. But why then also give them human faces? To remind us, one imagines, as the New Guineans presumably already knew, that human-made objects share in the humanity of their makers.

It is, one may further suggest, this same cargo-cult phenomenon at work when we happen upon a “museum” in a remote village somewhere, housing no more than a few historical objects from the region, plus perhaps some laminated sheets of information or of old photographs that have been printed out from the internet. I have often felt, similarly, when I am visiting what is billed as an upscale restaurant in a small city in a distant, provincial corner of the world, that I am not so much visiting an upscale restaurant as I am visiting a simulation of what the local people think an upscale restaurant in some faraway capital city must be like. Sartre thought that even a Parisian waiter is in a sense imitating a Parisian waiter—that is, he artificially enacts in his gestures and behavior some ideal image he has of what it is that someone such as himself should be doing. He can even overdo it, and be, paradoxically, too much the person he is. This sort of imitation can only be more pronounced, and often more excessive, when one is not in Paris, but, say, in a restaurant with a French name, or a French-sounding name, in Nebraska or Transnistria. The ritual can all the more easily come to seem ridiculous out here on the geographical margins: Why must this waiter pour my water so ceremoniously? And when did he learn that coming with an enormous pepper mill directly to the table and offering to dispense a bit of it was something one does? Everything about upscale restaurants is absurd, but when it is rationalized by those involved by reference to the way things are done—correctly, perfectly, exaltedly—somewhere far away, then this seems to add a further layer of irrationality, as it groundlessly imagines that there is a reason for the way the activity is done elsewhere, and that we can attain to this reason simply by imitation.

One further example is perhaps in order. Friends from my hometown spent considerable time and energy debating, a few years back, the municipal government’s intention to use public funds to acquire a sculpture made by Jeff Koons (or by his employees). More or less all who were in favor of its acquisition noted how valuable the presence of such a work could be for raising the city’s profile. There was little or no discussion of the aesthetic value or meaning of the work in question; there was only an acknowledgment of what was already for them a given fact: that meaning had been created elsewhere. This meaning might be inscrutable, but here in our city we are not the ones charged with determining what it is, let alone with creating works that might be presented to the world as yet other material distillations of meaning. Here, in our city, meaning is imported ready-made from elsewhere, after which our city gains a place on the broader map of meaning, like a town that has been recently connected by a new rail line to the metropolis. Again, however irrational it may be to shell out for Jeff Koons in London or New York, there is an added layer of irrationality to do so in Sacramento, simply on the grounds that one knows it is what one does in London or New York.

But what is this action that so many of us want in on? What is the special power of the Koons sculpture or of the perfect wallpaper? What leads us to declare that we absolutely love kale or our Toyota Prius, or the way the waiter turns the crank on the pepper mill, or that we simply could not do without our annual restorative trip to the French Riviera? Combining the insights of Bourdieu and Tolstoy, we begin to discern that the particular form of not knowing what we in fact know that underlies such declarations has something to do—like Ivan Ilyich’s preoccupation with his choice of wallpaper—with the difficulty of facing up to the fact that we are going to die. Bourdieu and Tolstoy both see the inability to meet this difficulty square on as a great failure to realize the full potential of a human life. Others, including Sahlins, tend to see this failure as itself ineliminably human, to see absorption in the preference of this cut of meat over that one, of this lapel pin or handshake over those of our peculiar neighbors, as part of the wonder of what it is to be a human being embedded in a human culture. How remarkable that we can be so captivated by such things as to forget that we are going to die! That is not a failure, but a victory!

Those who have wanted to jolt us out of this preoccupation with the trivialities of human culture, forcing us to face up to the fact of our death, have often been motivated by big plans for what we will do after our awakening, by a vision of a radically different, and often utopian, rearrangement of society. It is generally recognized that such a rearrangement will come at a high cost, even that lives will be lost, and that people must therefore be prepared to abandon their small comforts for the sake of something greater. Many, indeed, are willing to take up this trade, this exchange of wallpaper for a world-renouncing commune, or of consumer goods for jihad. Those who go in for this deal are correct in recognizing that they are going to die, and they conclude from this, rightly or wrongly, that they would do better to die for something. Others in turn look their own death in the face, and find that they despise the trivialities of death-denying daily life, but still do not wish to join up with anything particularly bold or self-transcending. They might wish only, as Krzysztof Kieślowski announced he was going to do when he retired from filmmaking, to sit in a dark room and smoke. This was in 1994; two years later, he was dead of a heart attack at the age of fifty-four.37

There is no way out of it: every response to the specter of mortality can be criticized for its irrationality. If we absorb ourselves in home decoration, we are failing to know something we in fact know; if we run off and join some glorious cause, we are failing to maximize our own individual interest; if we just sit alone and let the thing we love kill us slowly (or quickly), we are, so our friends and family tell us, failing to do everything we could have done to get the most out of life. When the brutally bitter, astoundingly honest and self-knowing Austrian writer Thomas Bernhard received the Austrian State Prize for Literature in 1968, he created a scandal with his acceptance speech, and succeeded in doing, through his acceptance, what Piketty and the (surviving) Sex Pistols had hoped to do through their refusal. His speech began, "There is nothing to praise, nothing to condemn, nothing to criticize, but it is all ridiculous [lächerlich]; it is all ridiculous, if you just think about death."38

In Loving Repetition

We have spoken on several occasions throughout this book of our “ordering” our lives, and we have identified the very concept of “order,” as kosmos, as having a deep historical and conceptual connection to that of reason, as logos. Many have supposed that the universe itself is rational in view of the way in which it is ordered. Many have also thought that human life gains its reason in part from the way we order it, not from the things we believe, but rather from the things we do. For many people, this ordering takes the form of religious ritual, which, as we saw in chapter 4, the poet Les Murray has described, in reference to his own Catholic faith, as “loving repetition.”

It may be that the shift in modern philosophy, and modern thought in general, to language as the locus of significance in human life has in turn caused repetition to appear, against Pina Bausch’s claim (see, again, chapter 4), as “mere repetition.” The philosopher Frits Staal, who immersed himself in Brahminic rituals for many decades, developed a theory of ritual as a system of “rules without meaning,” which, he came to believe, is in fact more primordial than language in giving order and orientation, if not conceptual tools, to human existence.39 Many who are raised in the Protestant world, and trained up to believe that the essence of religion is a personal relationship to God, are surprised when they travel to, say, southern Italy or to the Balkans, and encounter for the first time a conception of religion in which the rituals—the quick sign of the cross when passing a church on the street, the cycles of fasting and feasting, the ex-voto candles lit and the prayers muttered—are presumed to be sine qua non for the survival of religious belief. Some scoff when they encounter this, and insist that such religious people are not religious at all, but superstitious.

Tolstoy would unequivocally reject ritual-bound religion, arguing instead, famously, that “the Kingdom of God is within you,” and therefore that true religiosity consists in recognition, via introspection, of the divine. But if Staal is correct, one hopes in vain to arrive at some pure core of religion through the abolition of its attendant rituals, as in fact the ritual is what defines religion as a sphere of human life. Seen from a different angle, in fact, one might suppose on the contrary that when we strip away the rituals, it is only a superstitious core of belief that remains: rituals themselves cannot be superstitious, since, as Staal notes, they have no meaning at all, while this is not at all the case for beliefs about transcendental powers or about life after death. Thus for Staal ritual is not simply the superstitious chaff we might hope to remove from the wheat of rationalized religion; it is the reason of religion itself, although in a deeper sense of “reason” than we ordinarily understand it, as order rather than conceptual articulation. It is ritual, some have felt, that holds the world together.

Having spent considerable time in the Balkans, I have learned that what early on looked to me like mindless superstitions surrounding death—the way Balkan cultures tend to the graves of their loved ones, have periodic feasts and rituals in commemoration of the dead, often culminating, seven years on, in a digging up and cleaning of the bones—are in fact a complex and effective sort of cultural processing. As one French demographer familiar with the funerary practices of the region has noted, in the Balkans death occupies a place at the center of communal life comparable to that occupied by sex in Western countries.40 By placing death and the intervals of funerary ritual in the center of social life, these cultures have found a way of rendering comprehensible what is in itself irreducibly mysterious, and what is in fact no less mysterious in a culture such as my own, where we euphemistically “celebrate the life” of the ones we have loved and lost, without truly facing up to the full reality of their deaths, not least their ghoulish corpses and bones.

An e. e. cummings poem describes a love so intense that the beloved comes to be seen as “the wonder that’s keeping the stars apart,” that is, holding the stars in their place and preventing them from collapsing together. But in the absence of another person to embody that wonder, many have sought to modulate or process it through their own actions. Nor must one belong to a particular religion with prescribed rituals in order to come to the conclusion that it is ritual that holds the world together, that keeps the stars apart. Thus the protagonist of Andrei Tarkovsky’s 1986 film The Sacrifice, on the brink of a nuclear apocalypse, entertains the idea that if only he had dutifully done some deed each day, even if it were just flushing the toilet at the same time, perhaps the world could have held together: an absurd thought, of course, but one coming from somewhere deeper than fear and desperation. If reason is order, there is no more effective way to enact it in an individual human life than through repetition. Yet there is also nothing more apparently irrational, as Tolstoy, and indeed Martin Luther and most Protestant theologians since have believed, than to find oneself enslaved to the obsessive compulsions of religious ritual. Here, as often, we find that the very same thing can appear as the height of rationality, or as its opposite, depending only on the frame of our judgment.

It is all ridiculous, this choice between opposite expressions of irrationality, in the face of death, but we do the best we can, for as long as we can, to impose a measure of order on it, by choosing nice new wallpaper, by respecting feast days and fast days, by honoring the ancestors according to the rhythms and intervals that they themselves devised; by aspiring to understand them better than they understood themselves, and so to honor them in ways they could not have imagined.41