Philosophically, we have painted ourselves into a corner. On the one hand, we posit a lifeless material world. As the twentieth-century biochemist Jacques Monod put it, in what I can only imagine was a tone of bitter triumph, “Man at last knows he is alone in the unfeeling immensity of the universe.”1 On the other hand, we hold on to the perception of an endlessly fascinating self, bloated now by at least a century of self-love and self-absorption. We live like fugitives, always trying to keep one step ahead of the inevitable annihilation—one more meal, one more dollar or fortune to win, one more workout or medical screening. And we die…Well, we cannot die at all because the death of the self is unthinkable.

The traditional solution to this existential dilemma has been to simply assert the existence of a conscious agency other than ourselves, in the form of a deity, an assertion that has often been backed up by coercion. For about two thousand years, large numbers of people—today a clear majority of the world’s population2—have either insisted that this deity is a single all-powerful individual, or they have at least pretended to go along with the idea. Perhaps to make this remote and solitary god more palatable, the “world religions” also assert that he is all-good and all-loving, although this bit of PR had the effect of making him seem preposterous, since a good and loving god would not unleash earthquakes or kill babies. Belief in such a deity takes considerable effort, as many Europeans discovered after the eighteenth-century earthquake that destroyed Lisbon. But it is an effort that most people are willing to make since the alternative is so ghastly: How can anyone live knowing that they will end up as a pile of refuse? Or, as atheists are often asked, how can we die knowing death is followed only by nothingness?

The rise of monotheism has been almost universally hailed by modern scholars as a great moral and intellectual step forward. In myth, the transition to monotheism sometimes occurred as a usurpation of divine power by a particular polytheistic deity within a larger pantheon: Yahweh, for example, had to drive out the earlier Canaanite gods like Asherah and Baal. Politically, the transition could occur suddenly by kingly decree, as in the cases of the pharaoh Akhenaten, the Hebrew king Josiah, and the emperor Constantine. The single God’s exclusive claim to represent perfect goodness (or, in the case of Yahweh, fierce tribal loyalty) proved, in turn, crucial in legitimating the power of the king, who could claim to rule by divine right. The system is ethically tidy: All morally vexing questions can be answered with the claim that the one deity is the perfection of goodness, even if his motives are inscrutable to us.

But the transition to monotheism can also be seen as a long process of deicide, a relentless culling of the ancient gods and spirits until no one was left except an abstraction so distant that it required “belief.” The “primitive”—and perhaps original—human picture was of a natural world crowded with living spirits: animals that spoke and understood human languages, mountains and rivers that embodied autonomous beings and required human respect and attention. The nineteenth-century anthropologist Edward Tylor termed this view of an inspirited world “animism,” and to this day, indigenous belief systems that seem particularly disorganized and incoherent compared to the great “world religions” like Islam and Christianity are also labeled—or perhaps we should say libeled—as animism.

Historically, animism was followed by polytheism. How the multitudinous spirits of animism congealed into distinct deities is not known, but the earliest polytheistic religion is thought to be Hinduism, arising in about 2500 BCE and still bearing traces of animism in the form of animal deities like Ganesh and Hanuman, as well as in rural shrines centered on rocks. The religions of the ancient Mediterranean world, the Middle East, and the southern part of the Western Hemisphere were all polytheistic, made possible by stratified societies capable of erecting temples and supporting a nonproductive priestly caste.

Not everyone went along cheerfully with the imposition of monotheism, which required the abandonment of so many familiar deities, animal gods, and spirits, along with their attendant festivities. The Egyptians reverted to polytheism as soon as Akhenaten died, while the Hebrew kings fought ruthlessly to suppress their subjects’ constant backsliding to the old Canaanite religion. Within the monotheistic religions too, there was a steady drift back toward polytheism. The Christian God divided himself into the Trinity; saints proliferated within Christianity and Islam; the remnants of animism flourish alongside Buddhism (which, strictly speaking, shouldn’t be considered a form of theism at all).

In the last five hundred years, “reform” movements rushed in to curb these deviations. In Europe the Reformation cracked down on the veneration of saints, downplayed the Trinity, and stripped churches of decoration, incense, and other special effects. Within Islam, Wahhabism suppressed Sufism, along with music and artistic depictions of living creatures. The face of religion became blank and featureless, as if to discourage the mere imagining of nonhuman agencies in the world.

It was the austere, reformed version of monotheism that set the stage for the rise of modern reductionist science, which took as its mission the elimination of agency from the natural world. Science did not set out to destroy the monotheistic deity; in fact, as Jessica Riskin explains, it initially gave him a lot more work to do. If nature is devoid of agency, then everything depends on a “Prime Mover” to breathe life into the world.3 But science pushed him into a corner and ultimately rendered him irrelevant. When an iconic 1966 Time magazine cover echoed Nietzsche by asking, “Is God Dead?” the word was out: We humans are alone in a dead universe, the last conscious beings left. This was the intellectual backdrop for the deification of the “self.”

It is too late to revive the deities and spirits that enlivened the world of our ancestors, and efforts to do so are invariably fatuous. But we can begin to loosen the skeletal grip of the old, necrophiliac science on our minds. In fact, for the sake of scientific rationality, we have to. As Jackson Lears has written recently, the reductionist science that condemns the natural world to death “is not ‘science’ per se but a singular, historically contingent version of it—a version that depends on the notion that nature is a passive mechanism, the operations of which are observable, predictable, and subject to the law-like rules that govern inert matter.”4

Only grudgingly, science has conceded agency to life at the cellular level, where researchers now admit that “decisions” are made about where to go and what other cells to kill or make alliances with. This gradual change of mind about agency at the microscopic level is analogous to the increasing scientific acceptance of emotion, reasoning, and even consciousness in nonhuman animals—which was belatedly acknowledged at an international conference of neuroscientists in 2012.5 As for myself, I am not entirely satisfied with the notion of cellular decision making and would like to know more about how cells arrive at their decisions and how humans could perhaps intervene. But I no longer expect to find out that these decisions are “determined”—in the old Newtonian sense of, say, a rock falling in response to gravity, or by perhaps any forces or factors outside of the cell.

The question I started with has to do with human health and the possibility of our controlling it. If I had known that this is just part of a larger question about whether the natural world is dead or in some sense alive, I might have started in many other places, for example with fruit flies, viruses, or electrons that, according to the scientists who study them, appear to possess “free will” or the power to make “decisions.” Wherever we look, if we look closely enough, we find nature defying the notion of a dead, inert universe. Science has tended to dismiss the innate activities of matter as Brownian motion or “stochastic noise”—the fuzziness that inevitably arises when we try to measure or observe something, which is in human terms little more than a nuisance. But some of these activities are far more consequential, and do not even require matter to incubate them. In a perfect void, pairs of particles and antiparticles can appear out of nowhere without violating any laws of physics. As Stephen Hawking puts it, “We are the product of quantum fluctuations in the very early universe. God really does play dice.”6 Most of these spontaneously generated particle pairs or “quantum fluctuations” are transient and flicker quickly out of existence. But every few billion years or so, a few occur simultaneously and glom together to form a building block of matter, perhaps leading, in a few billion more years, to a new universe.

Maybe then, our animist ancestors were on to something that we have lost sight of in the last few hundred years of rigid monotheism, science, and Enlightenment. And that is the insight that the natural world is not dead, but swarming with activity, sometimes perhaps even agency and intentionality. Even the place where you might expect to find quiet and solidity, the very heart of matter—the interior of a proton or a neutron—turns out to be animated with the ghostly flickerings of quantum fluctuations.7 I would not say that the universe is “alive,” since that might invite misleading biological analogies. But it is restless, quivering, and juddering, from its vast vacant patches to its tiniest crevices.

I have done my feeble best here to refute the idea of dead matter. But the other part of the way out of our dilemma is to confront the monstrous self that occludes our vision, separates us from other beings, and makes death such an intolerable prospect. Susan Sontag, who spent her last couple of years “battling” her cancer, as the common military metaphor goes, once wrote in her journal, “Death is unbearable unless you can get beyond the ‘I.’”8 In his book on her death, her son, David Rieff, commented, “But she who could do so many things in her life could never do that.”9 Instead, she devoted her last years and months to an escalating series of medical tortures, each promising to add some extra months to her life.

Just a few years ago, I despaired of any critical discussion of the self as an obstacle to a peaceful death without getting mired in the slippery realm of psychoanalysis or the even more intimidating discourse of postmodern philosophy. But a surprising new line of scientific inquiry has opened up in an area long proscribed, and in fact criminalized—the study of psychedelic drugs. Reports of their use in treating depression, in particular the anxiety and depression of the terminally ill, started surfacing in the media about a decade ago. The intriguing point for our purposes here is that these drugs seem to act by suppressing or temporarily abolishing the sense of “self.”

The new research has been masterfully summarized in a 2015 article by science writer Michael Pollan.10 In a typical trial, the patient—usually someone suffering from cancer—receives a dose of psilocybin, the active ingredient in “magic mushrooms,” lies on a couch in a soothingly appointed room, and “trips” for several hours under the watchful eye of a physician. When the drug wears off, the patient is asked to prepare a detailed chronicle of his or her experience and submit to frequent follow-up interviews. Pollan quotes one of the researchers, a New York University psychiatrist, on the preliminary results:

People who had been palpably scared of death—they lost their fear. The fact that a drug given once can have such an effect for so long [up to six months] is an unprecedented finding. We have never had anything like it in the psychiatric field.11

When the subjective accounts of patients are supplemented with scans to localize brain activity, it turns out that the drug’s effect is to suppress the part of the brain concerned with the sense of self, the “default-mode network.” The more thoroughly this function of the brain is suppressed, the more the patient’s reported experience resembles a spontaneously occurring mystical experience, in which a person goes through “ego dissolution” or the death of the self—which can be terrifying—followed by a profound sense of unity with the universe, after which the fear of death falls away. And the more intense the psychedelic trip or mystical experience, the more strikingly anxiety and depression are abolished in the patient. A fifty-four-year-old TV news director with terminal cancer reported during his medically supervised psilocybin trip, “Oh God, it all makes sense now, so simple and beautiful.” He added later, “Even the germs were beautiful, as was everything in our world and universe.”12 He died, apparently contentedly, seventeen months later. This sense of an animate universe is confirmed by the subjective account of a psilocybin experience from a British psychologist who was otherwise well and not part of a laboratory study:

At a certain point, you are shifted into an animate, supernormal reality.…Beauty can radiate from everything that one sets one’s eyes on, as though one had suddenly woken up more. Everything appears as if alive and in fluidic connection.13

In some ways, the ego or self is a great achievement. Certainly it is hard to imagine human history without this internal engine of conquest and discovery. The self keeps us vigilant and alert to threats; vanity helps drive some of our finest accomplishments. Especially in a highly competitive capitalist culture, how would anyone survive without a well-honed, highly responsive ego? But as Pollan observes:

The sovereign ego can become a despot. This is perhaps most evident in depression, when the self turns on itself and uncontrollable introspection gradually shades out reality.14

The same sort of thing can be said of the immune system. It saves us time and again from marauding microbes, but it can also betray us with deadly effect. The philosopher/immunologist Alfred Tauber wrote of the self as a metaphor for the immune system, but that metaphor can be turned around to say that the immune system is a metaphor for the self. Its ostensible job is the defense of the organism, but it is potentially a treacherous defender, like the Praetorian guard that turns its swords against the emperor. Just as the immune system can unleash the inflammations that ultimately kill us, the self can pick at a psychic scar—often some sense of defeat or abandonment—until a detectable illness appears, such as obsessive-compulsive disorder, depression, or crippling anxiety.

So what am I? Or, since individual personality is not the issue here, I might as well ask, what are you? First, the body: It is not a clumsy burden we drag around with us everywhere, nor is it an endlessly malleable lump of clay. Centuries of dissection and microscopy have revealed that it is composed of distinct organs, tissues, and cells, which are connected to form a sort of system—first conceived of as a machine, and more recently as a harmonious interlocking “whole.” But the closer we look, the less harmonious and smooth-running the body becomes. It seethes with cellular life, sometimes even warring cells that appear to have no interest in the survival of the whole organism.

Then, the mind, the conscious mind, and here I am relying, appropriately I think, solely on subjective experience: We may imagine that the mind houses a singular self, an essence of “I-ness,” distinct from all other selves and consistent over time. But attend closely to your thoughts and you find they are thoroughly colonized by the thoughts of others, through language, culture, and mutual expectations. The answer to the question of what I am, or you are, requires some historical and geographical setting.

Nor is there at the core of mind some immutable kernel. The process of thinking involves conflict and alliances between different patterns of neuronal activity. Some patterns synchronize with and reinforce each other. Others tend to cancel each other, and not all of them contribute to our survival. Depression, anorexia, and compulsive risk taking, for example, represent patterns of synaptic firing that carve deep channels in the mind (and brain), not easily controlled by conscious effort, and sometimes lethal for the organism as a whole, both body and mind. So of course we die, even without help from natural disasters or plagues: We are gnawing away at ourselves all the time, whether with our overactive immune cells or suicidal patterns of thought.

I began this book at a point where death was no longer an entirely theoretical prospect. I had reached a chronological status that could not be euphemized as “middle-aged,” and the resulting age-related limitations were becoming harder to deny. Three years later, I continue to elude unnecessary medical attention and still doggedly push myself in the gym, where, if I am no longer a star, I am at least a fixture. In addition, I maintain a daily regimen of stretching, some of which might qualify as yoga. Other than that, I pretty much eat what I want and indulge my vices, from butter to wine. Life is too short to forgo these pleasures, and would be far too long without them.

Two years ago, I sat around a table in a shady backyard with friends, all over sixty, when the conversation turned to the age-appropriate subject of death. Most of those present averred that they were not afraid of death, only of any suffering that might be involved in dying. I did my best to assure them that this could be minimized or eliminated by insisting on a nonmedical death, without the torment of heroic interventions to prolong life by a few hours or days. Furthermore, we now potentially have the means to make the end of life more comfortable, if not actually pleasant—hospices, painkillers, and psychedelics, even, in some places, laws permitting assisted suicide. At least for those who are able to access these, there is little personal suffering to fear. Regret, certainly, and one of my most acute regrets is that I will not be around to monitor scientific progress in the areas that interest me, which is pretty much everything. Nor am I likely to witness what I suspect is the coming deep paradigm shift from a science based on the assumption of a dead universe to one that acknowledges and seeks to understand a natural world shot through with nonhuman agency.

It is one thing to die into a dead world and, metaphorically speaking, leave one’s bones to bleach on a desert lit only by a dying star. It is another thing to die into the actual world, which seethes with life, with agency other than our own, and, at the very least, with endless possibility. For those of us, which is probably most of us, who—with or without drugs or religion—have caught glimpses of this animate universe, death is not a terrifying leap into the abyss, but more like an embrace of ongoing life. On his deathbed in 1956, Bertolt Brecht wrote one last poem:

When in my white room at the Charité
I woke towards morning
And heard the blackbird, I understood
Better. Already for some time
I had lost all fear of death. For nothing
Can be wrong with me if I myself
Am nothing. Now
I managed to enjoy
The song of every blackbird after me too.15

He was dying, but that was all right. The blackbirds would keep on singing.