Well before the twentieth-century triumph of the molecular biologists, the whole field of immunology had begun with the close observation of individual macrophages. Observation is usually the job of naturalists, who crouch patiently in the bushes to study, for example, the behavior of wild animals. Laboratory scientists are more prone to aggressive interventions, which might involve chopping up the animals’ brains and studying their biochemical composition. Fortunately, the “father” of cellular immunology had both the patience of a naturalist and the intellectual impatience of an ambitious lab researcher.

The brilliant and obsessive Russian zoologist Elie Metchnikoff, who was described in Paul de Kruif’s 1926 book Microbe Hunters as being “like some hysterical character out of one of Dostoyevski’s novels,”1 had been attracted to macrophages while studying flatworms and sponges. Macrophages are eye-catching enough—large cells capable of moving among other body cells—and as he was the first to discover, they had an even better trick: They could ingest particles (such as microbes) by enfolding them and exposing them to powerful digestive enzymes, the process that Metchnikoff named phagocytosis. The question was how the macrophages knew what to attack and what to leave alone, which cells or particles were “normal” and which were deserving of destruction. Metchnikoff’s answer was essentially that macrophages, enjoying the “most independence” of any body cells, could decide this on their own2—protecting cells they recognized as belonging to the “self” and devouring anything else.

This explanation was instantly rejected by most of Metchnikoff’s contemporaries. As philosopher Alfred Tauber writes, “The phagocyte [macrophage] as possessor of its own destiny and mediator of the organism’s selfhood was received as too vitalistic a conception,”3 meaning that it was almost mystical. How could a microscopic cell make decisions? On the one hand it was too small and, of course, utterly lacking anything resembling a nervous system. On the other hand it was too large, at least compared to the individual molecules increasingly favored by twentieth-century molecular biologists as the arbiters of everything that goes on in the body. And what is a cell anyway except a collection of proteins, lipids, nucleic acids, and other chemicals enclosed in a lipid-based membrane? When, during a phone interview in 2016, I asked Alberto Mantovani, one of the early researchers on the role of macrophages in tumor development, what he thought of the growing interest in “cellular decision making,” he asked me to repeat the phrase. Then he guffawed.

But there it is: A little over a century after Metchnikoff’s dismissal for the scientific crime of “vitalism,” the forbidden phrase began to gain respectability. I say “a little over a century” because I cannot locate its first appearance in the scientific literature. By 2005 the term “cellular decision making”—without the quotation marks—was showing up in the titles of articles; five years later it was the subject of international conferences. Why Mantovani was unaware of it I do not know and was perhaps too polite to ask. I will admit, though, that even to me the notion still carries a whiff of whimsy.

Officially, cellular decision making is “the process whereby cells assume different, functionally important and heritable fates without an associated genetic or environmental difference.”4 One translation would be “a process that we do not understand and cannot predict.” For mobile cells like macrophages and amoebae, one of the most common decisions is about where to go next, and here we humans can offer only the broadest generalizations—such as that they will move toward edible or otherwise attractive material. New techniques, like intravital microscopy, have made it possible to track the behavior of individual cells in living tissue, and the resulting images reveal striking degrees of individuality. Calculate the average movement of a sample group of cells, and most individual cells turn out to be going their own way, on paths far from that average.5 Cancer cells within a tumor exhibit “extreme diversity.”6 NK, or “natural killer,” cells, which, like macrophages, attack targets such as infected or cancerous cells, do not always kill. A 2013 article reports that about half of the NK cells sit out the fight, while a minority of them do most of the killing, becoming what their human observers call “serial killers.”7 T cells, the thymus-derived lymphocytes attacked by HIV, are another type of immune cell, and they are especially frustrating to observers because they move about

by a series of repetitive lunges, repeatedly balling-up and then extending. This cycle appears to be driven by an intrinsic rhythmicity, with a period of about 2 min.…T cells travel in a fairly consistent direction during each “lunge” and may even continue in a consistent direction over several cycles. However, following each pause, there is a high probability that a cell will take off in another direction.8

The fact that something is not completely predictable does not mean that it is unexplainable. As one explanation for cellular motion, scientists offer “stochastic noise,” meaning that cells are being randomly jostled by other cells (or particles in the extracellular fluid). In any fluid or gas, the molecules are in motion at speeds determined by the temperature. Sometimes they collide with each other and rebound in new directions, which can create the impression of self-determined motion. The other type of explanation is that some of the particles or molecules colliding with cells are not entirely “random,” because they contain information encoded as chemical messages. For example, macrophages and other immune cells use small proteins called cytokines to summon others of their kind for help at a site of inflammation. So, a determined determinist might say, cells are not “deciding” what to do; they are being told what to do.

But the experiences of being randomly jostled and, perhaps at the same time, receiving intelligible messages are also common to the only creatures who insist on possessing “free will”—ourselves. While walking on a sidewalk, I may collide, apparently randomly, with other pedestrians, in ways that cause me to walk closer to or farther from the curb. At the same time, I may be receiving text messages on my phone advising me to hurry up or not to forget to pick up some groceries. All of this incoming data—the crowded sidewalk, the shopping list—must be processed by my mind before I can decide on the best direction and speed at which to walk. There may be additional factors that would make me alter my path. If, for example, I am trying to avoid someone else in the crowd, I might suddenly speed up and slip around a corner. The differences between a human being negotiating a busy sidewalk and a single cell are of course almost unfathomably large. A cell is a cell; a human is composed of trillions of cells—enough so that a trillion or so can be dedicated to collecting and parsing information from the environment. But second by second, both the individual cell and the conglomeration of cells we call a “human” are doing the same thing: processing incoming data and making decisions.

I learned an important lesson in nonhuman decision making from my own crude, informal form of birdwatching. While living on the Gulf side of the lower Florida Keys, I became intrigued by the group behavior of ibises. As the sun set, they would flock to a nearby mangrove island to roost for the night; sometime around sunrise they would take off again for their feeding grounds. I assumed that both events were driven by the angle and intensity of the sunlight or perhaps by some ibis leaders or central committee. How else would the birds know what to do? But further observations revealed that the morning liftoff could be the coordinated action of up to a hundred birds at a time, or it could be messy and anarchic, with individuals and small groups taking off at slightly different times. When I asked an animal behaviorist—an old friend at Cornell—what was controlling their behavior, he did not rule out the effect of the sun or the possible existence of trendsetters among the ibises, but suggested that there was a lot of early morning jostling and nudging. In other words, nothing was “controlling” them in the determinist sense I was looking for, no on/off switch telling the birds to stay put or get up and forage. Inadvertently, I had stumbled across what has been called the Harvard law of animal behavior, which is related to Murphy’s law: “You can have the most beautifully designed experiment with the most carefully controlled variables, and the animal will do what it damn well pleases.”9

It had not occurred to me, with my PhD in cell biology, that a truly “bird-brained” creature like an ibis could be making any decisions at all, either individually or collectively, just as it had not occurred to me that the actions of individual cells were not fully determined by the cells’ environments and genes. But much smaller biological entities than bird brains have been credited with “decision making.” In 2007, a German team discovered what they called “free will” among, of all things, fruit flies. The flies were immobilized by gluing them to tethers inside an all-white drum that offered no sensory cues. Still, the tormented flies struggled desperately to fly, with the humans all the while recording their motions and subjecting them to a variety of mathematical analyses. The result was that the flies’ motions were not random, as mathematically defined; they were spontaneous and originated in the insects themselves.10 And why would a fruit fly want to generate nonrandom but completely unpredictable patterns of motion? According to Bjorn Brembs, the leader of the team, unpredictability could confer a survival advantage: A more “deterministically” designed type of creature—say, one that always moves to the right when alarmed—would be far more susceptible to predators.

One criticism Brembs reported from a neurobiologist colleague was that fruit flies are simply “too small” to engage in decision making, much less anything as exalted as “free will.” But they are far from the smallest specks of living or lifelike material to exhibit autonomous behavior. Perhaps one of the examples best known to biologists is the phage lambda, a virus that preys on that familiar resident of our guts—the bacterium E. coli. A virus is a strand or two of nucleic acid, usually DNA, coated with protein, visible only through an electron microscope, yet in the course of their development phage have a crucial choice to make: When one of them penetrates an E. coli cell, it can either remain there in a state of dormancy, passively reproducing its nucleic acid when the cell divides, or it can immediately lyse the cell—splitting it open and releasing a swarm of progeny to invade other E. coli. Acres of paper have been filled with differential equations in an effort to predict which way an encounter between phage and E. coli will go, with the result that the outcome seems to depend on decision making by individual phage.11

As we proceed down the scale—from cells to molecules and from molecules to atoms and subatomic particles—the level of spontaneity only increases until we reach the wild dance party that goes on at the quantum level. Quantum physics has shown that the behavior of subatomic particles is inherently unpredictable. For example, when a beam of electrons is passed through a pair of slits, each electron gets to “choose” which one to enter. It is impossible, in the case of an atom or subatomic particle, to simultaneously know where it is and how fast it is going. As the renowned physicist Freeman Dyson has put it, “There is a certain kind of freedom that atoms have to jump around, and they seem to choose entirely on their own without any input from the outside, so in a certain sense atoms have free will.”12

Such statements come with an implicit disclaimer: No one is implying that cells or viruses or subatomic particles possess consciousness, desires, or personalities. What they possess is agency, or the ability to initiate an action. If even that seems like a reckless statement, it is because we are so unused to thinking of agency as an attribute of anything other than humans, or God, or perhaps some of the larger “charismatic” animals, like elephants or whales. I am using the word in the generous philosophical sense employed by Jessica Riskin in her brilliant book The Restless Clock as “something like consciousness but more basic, more rudimentary, a primitive, prerequisite quality. A thing cannot be conscious without having agency, but it can have agency without being conscious.”13 Agency, she goes on, is “simply an intrinsic capacity to act in the world, to do things in a way that is neither predetermined nor random.”14 We routinely and colloquially ascribe agency to things we know are not conscious or even alive, as in, “This car just doesn’t want to start,” fully realizing that the car doesn’t “want” anything. Riskin’s point is that the mission of science—the determinist science that arose in the middle of the seventeenth century—has been the elimination of the last vestiges of agency from the natural world. Lightning, we are told, is an electrical charge, not an expression of divine displeasure. The amoeba does not move because it “wants” to, but because it is driven by chemical gradients in its environment. Tell a scientifically trained person that something is unpredictable and she will do her best to find a way to predict and control it. But agency is not concentrated in humans or their gods or favorite animals. It is dispersed throughout the universe, right down to the smallest imaginable scale.

Science has an answer to Riskin’s thesis. According to the latest from the field of cognitive science, humans have an innate tendency to see agency, whether in the form of gods or spirits, where it does not exist, because there was once a survival advantage in doing so. A prehistoric person or hominid would be wise to imagine that every stirring in the tall grass meant that a leopard—or some such potentially hazardous life form—was closing in for an attack. If you decide the stirring is a leopard, you can run away, and if you were wrong, nothing is lost except perhaps some temporary peace of mind. But if you decide that it is just a breeze and it turns out to be a leopard, you become a leopard’s lunch. So our brains evolved to favor the scarier alternative and the choice of running. We have become what the cognitive scientists call “hyperactive agency detection devices”: We see faces in clouds, hear denunciations in thunder, and sense conscious beings all around us even when there is nothing there. This has become a key part of the scientific argument against religion, and one of the best-known books on the subject is entitled Why Would Anyone Believe in God?

If it seems a rather precipitous leap from imagined leopards to monotheistic deities, this may be because the cognitive scientists made it too quickly. The point is not that many of the leopards turned out not to be there, but that in the world occupied by hominids and early humans, they often were. Quite possibly many of our ancestors knew full well they were erring on the side of caution and made the error anyway, which is a choice we can understand. What may be harder for us to understand today is that we evolved on a planet densely occupied by other “agents”—animals that could destroy us with the slash of a claw or the splash of a fin, arbitrarily and in seconds. The hypothesized transition from suspected predators to the morally ambiguous “spirits” believed in by early humans makes more sense when we recall that, before humans could hunt for themselves, they seem to have relied on the bones and scraps of meat left by nonhuman predators. That is, the predator was also a provider, more or less like the gods that came later.

The scientific argument, in other words, is that the attribution of agency to the natural world was a mistake, although a useful one in an evolutionary sense. I am suggesting, to the contrary, that it was the notion of nature as a passive, ultimately inert mechanism that was the mistake, and perhaps the biggest one that humans ever made. The “death of nature,” as Carolyn Merchant put it, turned the natural world from a companionable, though often threatening, place into a resource to be exploited.15 Determined to reduce biology to chemistry, twentieth-century biologists tended to skip right over life at the cellular level; molecules were much more manageable and far more predictable than living cells. In doing so, though, biology generated paradoxes and mysteries, like the problem of immune cells that abet cancer or foment autoimmune diseases. Through the lens of reductionist science, even life itself became a kind of mystery, resolvable only as an incredibly complex concatenation of molecular events. Today we tend to accord the status of “mystery” to something that should be as intimately familiar to us as life itself: consciousness.

If there is a lesson here it has to do with humility. For all our vaunted intelligence and “complexity,” we are not the sole authors of our destinies or of anything else. You may exercise diligently, eat a medically fashionable diet, and still die of a sting from an irritated bee. You may be a slim, toned paragon of wellness, and still a macrophage within your body may decide to throw in its lot with an incipient tumor. Elie Metchnikoff understood this as well as any biologist since his time ever has. Rejecting the traditional—and continuing—themes of harmony and wholeness, he posited a biology based on conflict within the body and carried on by the body’s own cells as they compete for space and food and oxygen. We may influence the outcome of these conflicts—through our personal habits and perhaps eventually through medical technologies that will persuade immune cells to act in more responsible ways—but we cannot control it. And we certainly cannot forestall its inevitable outcome, which is death.