We return now to a question raised earlier in this book. Who is in charge? We seek control over our bodies, our minds, and our lives, but who or what will be doing the controlling? The body can be ruled out because of its tendency to liquefy—or turn into dust—without artful embalming. So the entity we wish to enthrone must be invisible and perhaps immaterial—the mind, the spirit, the self, or perhaps some ineffable amalgam, as suggested by the phrase “mind, body, spirit” or the neologism “mindbody.”

The spectacle of decomposition provides a powerful incentive to posit some sort of immaterial human essence that survives the body. Certainly there is very little talk of “mind-body unity” in the presence of a rotting corpse. In fact, the conversation is likely to take a different turn, to an emphasis on the existence of an immortal essence, or soul, that somehow carries on without the body. Medieval Catholic artists and clerics deployed images of decomposing bodies—sometimes with maggots wiggling in the nostrils and eye sockets—to underscore the urgency of preparing the soul for the disembodied life that awaits it. Buddhist monks practice “corpse meditation,” contemplating bodies both fresh and rotting, to impress on themselves the impermanence of life. The soul, in both Christian and Islamic philosophy, is the perfect vessel for the immortality that eludes us as fleshly creatures: It’s immortal by virtue of the fact that it somehow participates in, or overlaps with, an immortal deity. Even nonbelievers today are likely to comfort themselves with the thought of a “soul,” or spirit, or vague “legacy” that renders them impervious to decay. As Longfellow famously wrote, “Dust thou art, to dust returnest, was not spoken of the soul.”1

But no one has detected this entity. There is in fact much firmer evidence for the existence of “dark matter,” the hypothesized substance that is invoked to explain the shape of galaxies, than there is for any spirit or soul. At least dark matter can be detected indirectly through its gravitational effects. We can talk about someone’s soul and whether it is capacious or shriveled, but we realize that we are speaking metaphorically. Various locations for an immaterial individual essence have been proposed—the heart, the brain, and the liver—but autopsies yield no trace of it, leading some to speculate that it is delocalized like the Chinese qi. In 1901, an American physician reported that the human body loses three-quarters of an ounce, or twenty-one grams, at the moment of death, arguing that this meant the soul is a material substance. But his experiment could not be replicated, suggesting that the soul, if it exists, possesses neither location nor mass. One can’t even find the concept of the “immortal soul” in the Bible. It was grafted onto Christian teachings from the pagan Greeks long after the Bible was written.2

The idea of an immortal soul did not survive the Enlightenment unscathed. The soul depended on God to provide its immortality, and as his existence—or at least his attentiveness—was called into question, the immortal soul gave way to the far more secular notion of the self. While the soul was probably “discovered” by Christians (and Jews) reading Plato, the self was never discovered; it simply grew by accretion, apparently starting in Renaissance Europe. Scholars can argue endlessly about when exactly the idea of the self—or any other historical innovation—arose; precedents can always be claimed. But historians have generally agreed on the vague proposition that nothing like the modern self existed in the ancient world. Ego, yes, and pride and ambition, but not the capacity for introspection and internal questioning that we associate with the self. Achilles wanted his name and his deeds remembered forever; he did not agonize over his motives or conflicted allegiances. That sort of thinking came later.

Lionel Trilling wrote that “in the late 16th and early 17th centuries, something like a mutation in human nature took place,” which he took to be the requirement for what historian Frances Yates called “the emergence of modern European and American man.”3 As awareness of the individual self took hold, the bourgeoisie bought mirrors, commissioned portraits, wrote autobiographies, and increasingly honored the mission of trying to “find” oneself among the buzz of thought engendered by a crowded urban social world. Today we take it for granted that inside the self we present to others, there lies another, truer self, but the idea was still fresh in the 1780s when Jean-Jacques Rousseau announced triumphantly:

I am forming an undertaking which has no precedent, and the execution of which will have no imitator whatsoever. I wish to show my fellows a man in all the truth of nature; and this man will be myself.

Myself alone. I feel my heart and I know men. I am not made like any of the ones I have seen; I dare to believe that I am not made like any that exist. If I am worth no more, at least I am different.4

Megalomania, or the proud claim of a rebellious political thinker? Contemporary thought has leaned toward the latter; after all, Rousseau was a major intellectual influence on the French Revolution, which, whatever its bloody outcome, was probably the first mass movement to demand both individual “Liberté” and “Fraternité,” or solidarity within the collective. There is something bracing about Rousseau’s assertion of his individual self, but the important thing to remember is that it was an assertion—no evidence was offered, not that it is easy to imagine what such evidence might look like. As historian John O. Lyons put it, the self was “invented.”5

Another slippery abstraction was taking hold at around the same time as the self, and this was the notion of “society.” Like the self, society is not something you can point to or measure; it is a concept that has to be taught or shared, a ghostly entity that arises from an aggregate of individual selves. In material terms, you can imagine a “super-being” composed of numerous subunits clumsily trying to coordinate their movements. It is no coincidence that the concept of society arose along with that of the self, if only because the newly self-centered individual seemed to be mostly concerned with the opinion of others: How do I fit in? How do I compare to them? What impression am I making? We do not look into mirrors, for example, to see our “true” selves, but to see what others are seeing, and what passes for inner reflection is often an agonizing assessment of how others are judging us.

A psychological “mutation” of this magnitude cries out for a historical explanation. Here, historians have generally invoked the social and economic changes accompanying the increasing dominance of a market economy. As fixed feudal roles and obligations lost their grip, it became easier for people to imagine themselves as individuals capable of self-initiated change, including upward mobility. You might be an artisan and learn to dress and speak like a merchant, or a merchant who takes on the airs of an aristocrat. Traditional bonds of community and faith loosened, even making it possible to assume the identity of another person, as in the famous case of the sixteenth-century adventurer who managed to convince the inhabitants of a village that he was their missing neighbor Martin Guerre. He took over the family inheritance and moved in with the real Guerre’s wife, at least until the ruse was uncovered three years later.6 If you could move from village to village, from village to city, from one social class to another—and surely the disruptions of intra-European wars played a part in the new mobility—you had to constantly monitor the impression you were making on others. At the same time, those others were becoming less trustworthy; you could not be sure what true “self” lay behind the façade.

Related to the rise of capitalism—though how related has long been a subject of debate—was the religious innovation represented by Protestantism, which midwifed the soul’s transformation into the modern notion of the self. Pre-Reformation Catholics could ensure a blissful postmortem existence by participating in the sacraments or donating large sums to the church, but Protestants, and especially Calvinists, were consigned to perpetual introspection in an attempt to make their souls acceptable to God. Every transient thought and inclination had to be monitored for the slightest sinful impulse. As science and secularism chipped away at the notion of God, the habit of introspection remained. Psychoanalyst Garth Amundson writes:

People continued to look inward, into the private life of the mind, so as to locate essential truths about their lives, though without the additional notion that these truths are the fruit of a dialogue with God’s presence within the self. Hence, the Deity that Augustine thought that we discover by looking within the self was dethroned, and replaced by an invigorating confrontation with powerful private emotional states, fantasies, hopes, and needs. An authentic and immediate awareness of one’s affective experience became the new center around which to create a life lived truthfully and “fully.” In this way, the development of the private life of the self became something of an object of worship.7

Or, as somewhat more simply put by a Spanish historian, “the modern Rousseauist self, which feels and creates its own existence, would appear to be the heir to attributes previously assigned to God.”8

In our own time, the language of self-regard has taken on a definite religious quality. We are instructed to “believe” in ourselves, “esteem” ourselves, be true to ourselves, and, above all, “love” ourselves, because otherwise how could anyone else love us? The endless cornucopia of “self-help” advice that began to overflow in the twentieth century enjoins us to be our own “best friends,” to indulge ourselves, make time for ourselves, and often “celebrate” ourselves. If words like “believe” do not sufficiently suggest a religious stance, one site even urges us to “worship ourselves” by creating a shrine to ourselves, which might include photos (probably “selfies”), favorite items of jewelry, and “nice smelling things such as perfume, candles or incense.”9 The self may seem like a patently false deity to worship, but it is no more—and no less—false than the God enshrined in recognized religions. Neither the self nor God is demonstrably present to everyone. Both require the exertion of “belief.”

In today’s capitalist culture the self has been further objectified into a kind of commodity demanding continual effort to maintain—a “brand.” Celebrities clearly have well-defined “brands,” composed of their talents, if any, their “personalities,” and their physical images, all of which can be monetized and sold. Even lowly aspirants toward wealth and fame are encouraged to develop a brand and project it confidently into the world, and never mind if it is indistinguishable from that of millions of other people—the cheerful, upbeat, “positive-thinking” persona has been a favorite since the 1950s, for office workers and CEOs alike. If some darker self, containing fears, resentments, and doubts, remains under your carefully constructed exterior, it is up to you to keep it under wraps. Internal “affirmations”—“I am confident, I am lovable, and I will be successful”—are thought to do the trick.

What could go wrong? Of course, with the introduction of “self-knowledge” and “self-love,” one enters an endless hall of mirrors: How can the self be known to the self, and who is doing the knowing? If we love ourselves, who is doing the loving? This is the inescapable paradox of self-reflection: How can the self be both the knower and the content of what is known, both the subject and the object, the lover and that which is loved? Other people can be annoying, as Sartre famously suggested, but true hell is perpetual imprisonment in the self. Many historians have argued that the rise of self-awareness starting in roughly the seventeenth century was associated with the outbreak of an epidemic of “melancholy” in Europe at about the same time, and subjective accounts of that disorder correspond very closely with what we now call “depression.”10 Chronic anxiety, taking the form of “neurasthenia” in the nineteenth century, seems to be another disease of modernity. The self that we love and nurture turns out to be a fragile, untrustworthy thing.

Unlike the “soul” that preceded it, the self is mortal. When we are advised to “come to terms with” our mortality, we are meant to ponder not only our decaying corpses but also the almost unthinkable prospect of a world without us in it, or more precisely, a world without me in it, since I can, unfortunately, imagine a world without other people, even those I love most. A world without me, without a conscious “subject” to behold it, seems inherently paradoxical. As philosopher Herbert Fingarette writes:

Could I imagine this familiar world continuing in existence even though I no longer exist? If I tried, it would be a world imagined by me.…Yes, I can imagine a world without me in it as an inhabitant. But I can’t imagine a world as unimagined by me. My consciousness of that world is ineliminable, and so, too, therefore, is my reaction to it. But this falsifies the meaning of my death, since its distinctive feature is that there won’t be consciousness of, or reaction to, anything whatsoever.11

We are, most of the time, so deeply invested in the idea of an individual conscious self that it becomes both logically and emotionally impossible to think of a world without it. A physician who had narrowly escaped death more than once writes:

Whenever I’ve tried wrapping my mind around the concept of my own demise—truly envisioned the world continuing on without me, the essence of what I am utterly gone forever—I’ve unearthed a fear so overwhelming my mind has been turned aside as if my imagination and the idea of my own end were two magnets of identical polarity, unwilling to meet no matter how hard I tried to make them.12

We may all imagine that some trace of ourselves will persist in the form of children and others whom we have influenced, or through the artifacts and intellectual products we leave behind. At the same time, I know that the particular constellation of memories, fantasies, and ambitions that is, for example, me will be gone. The unique—or so I like to imagine—thrum of my consciousness will be silenced, never to sound again. “All too often,” wrote philosopher Robert C. Solomon, “we approach death with the self-indulgent thought that my death is a bad thing because it deprives the universe of me” (italics in the original).13 Yet if we think about it, the universe survives the deaths of about fifty-five million unique individuals a year quite nicely.

In the face of death, secular people often scramble to expand their experiences or memorialize themselves in some lasting form. They may work their way through a “bucket list” of adventures and destinations or struggle to complete a cherished project. Or if they are at all rich or famous, they may dedicate their final years and months to the creation of a “legacy,” such as a charitable foundation, in the same spirit as an emperor might plan his mausoleum. One well-known public figure of my acquaintance devoted some of his last months to planning a celebration of his life featuring adulatory speeches by numerous dignitaries including himself. Sadly, a couple of decades later, his name requires some explanation.

So the self becomes an obstacle to what we might call, in the fullest sense, “successful aging.” I have seen accomplished people consumed in their final years with jockeying for one last promotion or other mark of recognition, or crankily defending their reputation against critics and potential critics. This is all that we in the modern world have learned how to do. And when we acquire painful neuroses from our efforts to promote and protect ourselves, we often turn to forms of therapy that require us to burrow even more deeply into ourselves. As Amundson writes, “the psychotherapy patient looks within for the truth, and comes away, not with anything that is considered universally valid or absolute in a metaphysical sense, but with a heightened and intensified devotion to such individualistic creeds as ‘being true to oneself,’ ‘loving oneself,’ and ‘practicing self-care.’”14

There is one time-honored salve for the anxiety of approaching self-dissolution, and that is to submerge oneself into something “larger than oneself,” some imagined super-being that will live on without us. The religious martyr dies for God, the soldier for the nation or, if his mind cannot encompass something as large as the nation, at least for the regiment or platoon. War is one of the oldest and most widespread human activities, and warriors are expected to face death willingly in battle, hoping to be memorialized in epics like the Iliad or the Mahabharata or in one of the war monuments that have sprung up since the nineteenth century. For frightened soldiers or, later, their grieving survivors, dying is reconfigured as a “sacrifice”—the “ultimate sacrifice”—with all the ancient religious connotations of an offering to the gods. And in case thoughts of eventual glory are not enough to banish fear, the US military is increasingly adopting the tools of alternative medicine, including meditation, dietary supplements, and reiki.15 The expectation, though, is that true soldiers die calmly and without regret. As Winston Churchill said of poet and World War I recruit Rupert Brooke:

He expected to die: he was willing to die for the dear England whose beauty and majesty he knew: and he advanced towards the brink in perfect serenity, with absolute conviction of the rightness of his country’s cause and a heart devoid of hate for fellow-men.16

But you don’t have to be a warrior to face death with equanimity. Anyone who lives for a cause like “the revolution” is entitled to imagine that cause being carried on by fresh generations, so that one’s own death becomes a temporary interruption in a great chain of endeavor. Some stumble and fall or simply age out, but others will come along to carry on the work. As an old labor song about Joe Hill, a labor activist who was framed for murder and executed in 1915, tells us, it’s as if death never happened at all:

I dreamed I saw Joe Hill last night
Alive as you or me
Says I, But Joe, you’re ten years dead
I never died, says he
I never died, says he…

Where working men are out on strike
…Joe Hill is at their side
Joe Hill is at their side

From San Diego up to Maine
In every mine and mill
Where workers strike and organize
Says he, You’ll find Joe Hill17

The revolutionary lives and dies for her people, secure in her belief that someone else will pick up the banner when she falls. To the true believer, individual death is incidental. A luta continua.

The idea of a super-being that will outlive us as individuals is not entirely delusional. Human beings are among the most sociable of living creatures. Studies of orphaned infants in World War II showed that even if kept warm and adequately fed, infants who were not held and touched “failed to thrive” and eventually died.18 Socially isolated adults are less likely to survive trauma and disease than those embedded in family and community. We delight in occasions for unified, collective expression, whether in the form of dancing, singing, or chanting for a demagogue. Even our most private thoughts are shaped by the structure of language, which is of course also our usual medium of interaction with others. And as many have argued, we are ever more tightly entangled by the Internet into a single global mind—although in a culture as self-centric as ours, the Internet can also be used as a mirror, or a way to rate ourselves by the amount of attention we are getting from others, the number of “likes.”

It is the idea of a continuous chain of human experience and endeavor that has kept me going through an unexpectedly long life. I will stumble and fall; in fact, I already stumble a lot, but others will pick up the torch and continue the race. It’s not only “my work”—forgive the pompous phrase—that I bequeath to my survivors but all the mental and sensual pleasures that come with being a living human: sitting in the spring sunshine, feeling the warmth of friends, solving a difficult equation. All that will go on without me. I am content, in the time that remains, to be a transient cell in the larger human super-being.

But there are flaws in this philosophic perspective. For one thing, it is entirely anthropocentric. Why shouldn’t our “great chain of being” include the other creatures with which we have shared the planet, the creatures we have martyred in service to us or driven out of their homes to make way for our expansion? Surely we have some emotional attachment to them, even if it is hard to imagine passing the figurative torch to dogs or, in one of the worst scenarios, insects and microbes.

Then there is a deeper, more existential problem with my effort to derive some comfort from the notion of an ongoing human super-being: Our species itself appears to be mortal and, in many accounts, imminently doomed, most likely to die by our own hand, through global warming or nuclear war. Some scientists put the chance of a “near extinction event,” in which 10 percent or more of our species is wiped out, at a little over 9 percent within a hundred years.19 Others doubt our species will survive the current century. As environmentalist Daniel Drumright writes—and I can only hope he is an alarmist—with the growing awareness of extinction, “We’re dealing with a discovery of such epic proportion that it simply reduces everything in existence to nothing.” He goes on to say that our emerging circumstances require “a diabolic consciousness to which no living human being has ever had to bear witness. It is an awareness which requires a degree of emotional maturity that’s almost indistinguishable from insanity within western culture.”20

If your imagination is vigorous enough, you may take comfort from the likely existence of other forms of life throughout the universe. Earth-sized planets abound, potentially offering other habitats similar to our own, with reasonable temperatures and abundant water. In addition, sci-fi fans know that our vision of life based on carbon and water is likely to be far too provincial. There may be life forms based on other chemicals, or self-reproducing entities that do not even consist of conventional matter—patterns of energy bursts, oscillating currents, gluttonous black holes; already we have artificial life in the form of computer programs that can reproduce and evolve to meet changing circumstances. And—who knows?—some of these “life” forms may be suitable heirs for our species, capable of questing and loving.

But even here our yearning for immortality runs into a wall, because the universe itself will come to an end if current predictions are borne out, whether 2.8 or 22 billion years from now, which of course still gives us plenty of time to get our things in order. In one scenario there will be a “big rip,” in which the accelerating expansion of space will rip even atoms apart. In another, the night sky will empty out as the huge void spaces now separating galaxies grow until they swallow everything. Vacuum and perfect darkness will prevail. Both scenarios lead to the ultimate nightmare of a world “without us in it,” and it is infinitely bleaker than a world without our individual selves—a world, if you can call it that, without anything in it, not the tiniest spark of consciousness or wisp of energy or matter. To cruelly paraphrase Martin Luther King, the arc of history is long, but it bends toward catastrophic annihilation.