Chapter Four
Abandoning the Concept of Free Will

THE HUMAN INTERPRETER HAS SET US UP FOR A FALL. It has created the illusion of self and, with it, the sense that we humans have agency and “freely” make decisions about our actions. In many ways it is a terrific and positive capacity for humans to possess. With increasing intelligence and with a capacity to see relationships beyond what is immediately and perceptually apparent, how long would it be before our species began to wonder what it all meant—what was the meaning of life? The interpreter provides the storyline and narrative, and we all believe we are agents acting of our own free will, making important choices. The illusion is so powerful that no amount of analysis will change our sensation that we are all acting willfully and with purpose. The simple truth is that even the most strident determinists and fatalists do not, at the personal psychological level, actually believe they are pawns in the brain’s chess game.

Puncturing this illusory bubble of a single willing self is difficult, to say the least. Just as we know, yet find it hard to believe, that the world is not flat, it is also hard to believe that we are not totally free agents. We can begin to understand the illusion of free will when we ask the question, What on earth do humans want to be free from? Indeed, what does free will even mean? However actions are caused, we want them to be carried out with accuracy, consistency, and purpose. When we reach for a glass of water, we don’t want our hand suddenly rubbing our eye, or grasping so hard that the glass shatters, or the water spurting upward from the faucet or turning into mist. We want all the physical and chemical forces in the world to be on our side, serving our nervous and somatic systems so that whatever the job, it gets done right. So we don’t want to be free from the physical laws of nature.

Think about the problem of free will on a social level. While we believe we are always acting freely ourselves, we commonly want none of that in others. We expect the taxi driver to take us to our destination, not where he thinks we ought to go. We want our elected politicians to vote on future issues the way we have decided (probably erroneously) they will. We don’t like the idea that they are freely wheelin’ and dealin’ when we send them off to Washington (though they probably are). We intensely desire reliability in our elected officials and, indeed, in our family and friends.

When all the great minds of the past dealt with the question of free will, the stark reality that we are big animals, albeit with unique attributes, was not fully appreciated and accepted. The powerful idea of determinism, however, was apparent and appreciated. At the same time, and prior to the startling advances in neuroscience, the underlying mechanisms were unknown. Today they are known. Today we know we are evolved entities that work like a Swiss clock. Today, more than ever before, we need to know where we stand on the central question of whether or not we are agents who are to be held accountable and responsible for our actions. It sure seems like we should be. Put simply: The issue isn’t whether or not we are “free.” The issue is that there is no scientific reason not to hold people accountable and responsible.

As we battle through this, I will attempt to make two main points:

First—and this has to do with the very nature of brain-enabled conscious experience itself—we humans enjoy mental states that arise from our underlying neuronal, cell-to-cell interactions. Mental states do not exist without those interactions. At the same time, they cannot be defined or understood by knowing only the cellular interactions. Mental states that emerge from our neural actions do constrain the very brain activity that gave rise to them. Mental states such as beliefs, thoughts, and desires all arise from brain activity and in turn can and do influence our decisions to act one way or another. Ultimately, these interactions will only be understood with a new vocabulary that captures the fact that two different layers of stuff are interacting in such a way that existing alone animates neither. As John Doyle at Caltech puts the issue, “[T]he standard problem is illustrated with hardware and software; software depends on hardware to work, but is also in some sense more ‘fundamental’ in that it is what delivers function. So what causes what? Nothing is mysterious here, but using the language of ‘cause’ seems to muddle it. We should probably come up with new and appropriate language rather than try to get into some Aristotelian categories.” Understanding this nexus and finding the right language to describe it represents, as Doyle says, “the hardest and most unique problem in science.”1 The freedom that is represented in a choice not to eat the jelly donut comes from a mental layer belief about health and weight, and it can trump the pull to eat the donut because of its yummy taste. The bottom-up pull sometimes loses out to a top-down belief in the battle to initiate an action. And yet the top layer does not function alone or without the participation of the bottom layer.

The second point is how to think about the very concept of personal responsibility in a mechanistic and social world. It is a given that all network systems, social or mechanical, need accountability in order to work. In human societies this is generally referred to as members of a social group possessing personal responsibility. Now, is personal responsibility a mechanism that resides in the individual brain? Or is its existence dependent on the presence of a social group? Alternatively, does the concept have meaning only when considering actions within a social group? If there were only one person in the world, would the concept of personal responsibility have any meaning? I would suggest it would not, and in that truth one can see that the concept is wholly dependent on social interactions, the rules of social engagement. It is not something to be found in the brain. Of course, some concepts that would lack meaning if nobody else were around are not wholly dependent on social rules or interactions. If there were only one person, it would be meaningless to say that he is the tallest person or taller than everyone else, but the concept of “taller” is not wholly dependent on social rules.

One cannot emphasize enough how all of this seems like crazy academic intellectual talk. It seems like when I go to a restaurant, my meal selection is a free choice. Or when the alarm goes off in the morning, I can go exercise or roll over, but it is my free choice. Or on the other hand, I can walk into a store and choose not to slip something into my pocket without paying for it. In traditional philosophy, free will is the belief that human behavior is an expression of personal choice that is not determined by physical forces, Fate, or God. YOU are calling the shots. YOU, a self with a central command center, are in charge, are free from causation, and are doing things. You can be free from outside control, coercion, compulsion, delusion, and inner lack of restraint over your actions. From what we learned in the last chapter, however, the modern perspective is that brains enable minds, and that YOU is your vastly parallel and distributed brain without a central command center. There is no ghost in the machine, no secret stuff that is YOU. That YOU that you are so proud of is a story woven together by your interpreter module to account for as much of your behavior as it can incorporate, and it denies or rationalizes the rest.

We have seen that our functionality is automatic: We putter along perceiving, breathing, making blood cells, and digesting without so much as a thought about it. We also automatically behave in certain ways: We form coalitions, share our food with our children, and pull away from pain. We humans also automatically believe certain things: We believe incest is wrong and flowers aren’t scary. Our left-brain interpreter’s narrative capability is one of the automatic processes, and it gives rise to the illusion of unity or purpose, which is a post hoc phenomenon. Does this mean we are just along for the ride, cruising on autopilot? Our whole life and everything that we do or think is determined? Oh my. As I already said, with what we now know about how the brain operates, it seems that we need to reframe the question about what it means to have free will. What on earth are we really talking about anyway?

Newton’s Universal Laws and My House

In 1975, perhaps not deliberating long enough on my decision, I chose to build my house, and I did. Notice that I did not say I chose to have my house built, which, perhaps, would have yielded a better result. For years I was the butt of jokes concerning the fact that a ball placed on the floor of the living room would roll, unaided, across the dining room and into the kitchen. Similar phenomena were observed on the kitchen counter. Those who were bothered by lines that were not straight would also comment upon the windows across the front of the house. Mine was a house that physicists should have loved, for not only did it readily illustrate Newton’s laws of motion and some principles of chaos theory, but they could also point to it and laugh that it was obviously built by someone from the biological side of science, someone comfortable with inexact measurements, obviously not an engineer.

First of all, my house demonstrated a basic principle of experimental science: No real measurement is infinitely precise; it always includes a degree of uncertainty in the value—wiggle room. Uncertainty is present because no matter what measuring device is used, it has a finite precision, and that imprecision can never be eliminated completely, even as a theoretical idea. In fact, in some cases the very act of measuring something changes the thing being measured. Physicists know this but don’t like it. That is why they keep inventing more and more precise measuring apparatuses, and I should have used more of them. I admit it, in building my house there were some imprecise measurements initially. While physicists will nod here that, yes, deplorable as it is, it is to be expected, my son-in-law, who is a contractor, would be rolling his eyes. And so would Isaac Newton, because thanks to that seventeenth-century scientist, physicists for some two centuries thought it would be possible finally to get the perfect measurement, and, once you had it, everything would fall neatly into place. Plug in a number at the beginning of an equation, and you will always get the same answer at the end.

Newton was no slacker as a student. While he was attending Cambridge University, it was hit by the plague and closed for two years. Instead of sitting by the fire reading novels (maybe Chaucer), playing billiards, and drinking beer to while away the time until school opened again, he read Galileo and Kepler and invented calculus. This turned out to be a good thing, because it came in handy a few years later. The Italian astronomer Galileo Galilei, who had died in 1642, the year Newton was born, was the original “just do it” guy. Instead of sitting around talking about how he thought the universe was constructed (Plato’s modus operandi), he decided to back up his ideas and observations with measurements and mathematics. It was Galileo who came up with the big idea that objects retain their velocity and straight-line trajectories unless a force (often friction) acts upon them, as opposed to Aristotle’s hypothesis that objects naturally slow and stop unless a force acts upon them to keep them going. He also came up with the idea of inertia (the natural resistance of an object to changes in motion) and identified friction as a force.

Newton put these ideas all together in one tidy package. After scrutinizing the experimental observations and data of Galileo, Newton wrote down Galileo’s laws of motion as algebraic equations and realized these equations also described Kepler’s observations about planetary motion. This had not dawned on Galileo. Newton came up with the notion that the physical matter of the universe—that would be everything—operated according to a set of fixed, knowable laws, mathematical relationships that he had just jotted down. His three laws of motion, which governed the balls in my living room, have stood the test of more than three centuries of experimentation and practical application, from clocks to skyscrapers. Newton, however, rocked the world with his laws, not just the hallowed halls of physics. Why, you may wonder, did some guy messing around with calculus, Galileo’s data, and apples create such a stir? If you were like me, physics class didn’t really put you into any existential crisis.

Determinism

If the topic of determinism were to be brought up at dinner, the finger would most likely be pointed at Newton and his universal laws, although the idea had been floating around since the time of those inquisitive Greeks. Newton had reduced the machinations of the universe to a set of mathematical formulas. If the universe’s machinations followed a set of determined laws, then, well, everything is determined from the get-go. As I said earlier, determinism is the philosophical belief that all current and future events and actions, including human cognition, decisions, and behavior, are causally necessitated by preceding events combined with the laws of nature. The corollary, then, is that every event, action, et cetera, is predetermined and can in principle be predicted in advance, if all parameters are known. Newton’s laws also work in reverse. That means that time does not have a direction: You can also know something’s past by looking at its present state. (As if the free will and determinism issue were not numbing enough, some serious philosophers and physicists believe that time itself does not exist, that it too is an illusion. All of this plays out on the phenomenological backdrop that humans feel free in real time.) Determinists believe that the universe, and everything in it, is completely governed by causal laws. Have their left-hemisphere interpreters run amok and made it to prime time? After we get a little more physics under our belt, we will come back to this idea of causation.

Now the ramifications of this idea are disturbing to just about everyone. If the universe and everything in it are following predetermined laws, then that seems to imply that individuals are not personally responsible for their actions. Go ahead and eat the Death by Chocolate cake, it was preordained about two billion years ago. Cheat on the test? You have no control over that—go ahead. Not getting along with your husband? Slip him some poison and say the universe made you do it. This is what caused such a stir when Newton presented his universal laws. I call this the Bleak View, but many scientists and determinists think this is the way things are. The rest of us just don’t believe it. “The universe made me buy that dress!” or “The universe made me buy that Boxster!”* just isn’t going to fly well at the dinner table. If we were to be logical neuroscientists, however, shouldn’t it?

A Post Hoc World?

We accept the idea that our bodies are humming along, being run by automatic systems that follow deterministic laws. Luckily, we don’t have to consciously digest our food, keep our heart beating, or keep our lungs oxygenating. When it comes to our thoughts and actions, however, we don’t like to think of those as being nonconscious, following a set of predetermined laws. But the fact remains, and you can show this experimentally, that actions are over, done, kaput, before your brain is conscious of them. Your left-hemisphere interpretive system is what pushes the advent of consciousness back in time to account for the cause of the action. The interpreter is always asking and answering the question, WHY? In fact, Hakwan Lau, now at Columbia University, can mess with this misconception of timing in your brain. He set out to test whether conscious control of actions is illusory or real by using transcranial magnetic stimulation (TMS).

TMS does what the name implies. Plastic-enclosed coils of wire are placed on the outside of the head. When activated, they produce a magnetic field that passes through the skull and induces a current in the brain, locally activating the nerve cells. This can be applied to specific cells or to an area generally, and thus the functions and connections of different parts of the brain can be studied. Activity of parts of the brain can also be inhibited, so one may study what a specific area does when it is disconnected from the processes of other areas. The area of the frontal cortex called the supplementary motor area (SMA) is involved with the planning of motor actions that are sequences performed from memory, such as playing a memorized piano prelude. The pre-SMA is the area involved with acquiring new sequences. Lau knew, from the work of others, that stimulation of the medial frontal cortex gives one the feeling of the urge to move2 and that lesions in this area in macaque monkeys abolish their self-initiated movements.3 He himself had previously found that there is activation in this area when subjects generate actions of their own free choice.4 The pre-SMA was, thus, his area of interest. Lau found that when TMS is applied over the pre-SMA after the execution of a spontaneous action, the perceived onset of the intention to act, that moment when you become conscious that you intend to act, is shifted backward in time on the temporal map,** and the perceived time of the actual action, the moment when you are conscious that you are acting, is shifted forward in time.5 What I think he has done is actually mess with the interpreter module.

While the idea that there is a temporal map that intentions and actions are mapped onto, but not necessarily as they actually happened, seems crazy, it happens to you all the time. Think about when you smash your finger with a hammer and pull it away. Your explanation will be that you smashed your finger, it hurt, and you pulled it away. Actually, however, what happens is that you pull it away before you feel the pain. It takes a moment for you to perceive, or become conscious of, the pain, and by then your finger has long since gotten out of Dodge. What has happened is that the pain receptors in your finger send a signal along the nerve to the spinal cord, and immediately a signal is sent back along motor nerves to your finger, triggering the muscles to contract and pull away without involving the brain, a reflexive action. You move first. The pain receptor signal is also sent up to the brain. Only after the brain processes the signal and interprets it as pain do you become conscious of the pain. Consciousness takes time, and it was not consciousness of the pain followed by a conscious decision that moved your finger: Pulling your finger back was a reflex, done automatically. The signal that produces awareness of pain originates in your brain after the injury and is referred to the finger, but your finger has already moved. Your interpreter has to put all the observable facts together (pain and moved finger) into a story that makes sense to answer the WHY? question. It makes sense that you pulled your finger away because of the pain, so it just fudges the timing. In short, the interpreter makes the story fit with the pleasing idea that one actually willed the action.

The belief that we have free will permeates our culture, and this belief is reinforced by the fact that people and societies behave better when they believe that is the way things work. Is a belief, a mental state, constraining the brain? Kathleen Vohs, a psychology professor at the Carlson School of Management in Minnesota, and Jonathan Schooler,6 a psychology professor at the University of California–Santa Barbara, have shown in a clever experiment that people act better when they believe they have free will. Intrigued that in a huge survey of people in 36 countries, more than 70 percent agreed that their life was in their own hands, and knowing that other studies have shown that changing people’s sense of responsibility can change their behavior,7 Vohs and Schooler set out to see empirically whether people behave better when they believe that they are free to function. College students were given a passage from Francis Crick’s book The Astonishing Hypothesis, which has a deterministic bias, to read before taking a computerized test. They were told that there was a glitch in the software and that the answer to each question would pop up automatically. They were instructed that, to prevent this from happening, they had to push one of the computer keys, and they were asked to do so. Thus it took extra effort not to cheat. Another group of students read an uplifting book with a positive outlook on life, and they also took the test. What happened? The students who had read about determinism cheated, while those who had read the positive-attitude book did not. In essence, one mental state affected another mental state. Vohs and Schooler suggested that disbelief in free will provides a subtle cue that exerting effort is futile, thus granting permission not to bother.

People prefer not to bother, because bothering, in the form of self-control, requires exertion and depletes energy.8 Further investigation along these lines by Florida State University social psychologists Roy Baumeister, E. J. Masicampo, and C. Nathan DeWall found that reading deterministic passages increased tendencies of the people they studied to act aggressively and to be less helpful toward others.9 They suggest that a belief in free will may be crucial for motivating people to control their automatic impulses to act selfishly, and a significant amount of self-control and mental energy is required to override selfish impulses and to restrain aggressive impulses. The mental state supporting the idea of voluntary actions had an effect on the subsequent action decision. It seems that not only do we believe we control our actions, but it is good for everyone to believe it.

At the level of university life, however, there has been an assault for the last several centuries on the idea of free will from the determinists. Stirring things up in the sixteenth century, Copernicus declared that the Earth was not the center of the universe, followed up as we know by Galileo and Newton. Later, René Descartes, although more famous for a dualist stance, proposed that the bodily functions followed biological rules; Charles Darwin put forth his evolutionary theory of natural selection; and Sigmund Freud promoted the unconscious world. These ideas, taken together, provided ammunition from the biological world, and seemed to be topped off by Einstein with his theory of relativity and beliefs in a strictly deterministic world. As if that were not enough, along comes neuroscience with all sorts of findings that continue to point us in that direction. The underlying contention is that free will is just happy talk. And just when you would think that the epicenter for such ideas is the physics department—after all, they got us into this mess—they are shaking their heads and have sneaked out the back door, along with many of the biologists, sociologists, and economists. The ones left sitting at the “hard” determinist table are the neuroscientists and Richard Dawkins, who said, “But doesn’t a truly scientific, mechanistic view of the nervous system make nonsense of the very idea of responsibility?”10 What happened? Why is the standard textbook understanding of determinism in trouble?

Physics’ Dirty Little Secret

My son-in-law would say that the cause of the ball rolling across my floor is that the floor isn’t level. Then my three-year-old grandson would ask why it isn’t level. Both Newton and my son-in-law would say I had made inaccurate measurements and would point out that if my initial measurements had been more accurate, my floor would be level. Stretching a point in my own defense, I could note that because there is uncertainty in every measurement, the initial conditions could not be measured with complete accuracy, and if the initial measurement is uncertain, then the results derived from that measurement are also uncertain. Maybe my floor would have been level, and maybe not. But Newton would have disagreed. Up until 1900, when a pesky Frenchman shook things up, physicists assumed that by making better and better initial measurements, the uncertainties in the predictions would shrink, and that it was theoretically possible to obtain nearly perfect predictions for the behavior of any physical system. Well, of course, Newton would have been right about the physical universe as it pertains to my floor, but, as usual, things aren’t so simple.

Chaos Theory

In 1900 Jules Henri Poincaré, a French mathematician and physicist, threw a fly in the ointment when he made a major contribution to what had become known as “the three-body problem” or “n-body problem,” which had been bothering mathematicians since Newton’s time. Newton’s laws, when applied to the motion of planets, were completely deterministic, implying that if you knew the initial position and velocity of the planets, you could accurately determine their position and velocity in the future (or the past, for that matter). The problem was that the initial measurement, no matter how carefully done, was not infinitely precise but had a small degree of error. This didn’t bother anyone very much, because they thought the smaller the imprecision of the initial measurement, the smaller the imprecision of the predicted answer.

Poincaré found that while simple astronomical systems follow the rule that reducing the initial uncertainty always reduces the uncertainty of the final prediction, astronomical systems consisting of three or more orbiting bodies, with interactions among all of them, did not. Au contraire! He found that even very tiny differences in initial measurements would grow over time at quite a clip, producing substantially different results, far out of proportion with what would be expected mathematically. He concluded that the only way to obtain accurate predictions for these complex systems of three or more astronomical bodies would be to have absolutely accurate measurements of the initial conditions, a theoretical impossibility. Otherwise, over time, any minuscule deviation from an absolutely precise measurement would result in a deterministic prediction with scarcely less uncertainty than if the prediction had been made randomly. In these types of systems, now known as chaotic systems, such extreme sensitivity to initial conditions is called dynamical instability, or chaos, and long-term mathematical predictions are no more accurate than random chance. So the problem with a chaotic system is that using the laws of physics to make precise long-term predictions is impossible, even in theory. Poincaré’s work, however, simmered in the background for many decades, until a weatherman got curious.
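Poincaré’s point can be seen in a few lines of code. The sketch below is my illustration, not Poincaré’s celestial mechanics: it uses the logistic map, a textbook one-variable chaotic system, as a stand-in for his three-body orbits. Two starting values differing by one billionth are indistinguishable at first, yet within a few dozen iterations the two orbits bear no relation to each other.

```python
# Sensitive dependence on initial conditions, illustrated with the
# logistic map x_next = 4 * x * (1 - x), a standard chaotic system.
# (This is an illustrative stand-in for Poincare's three-body problem,
# which requires far more machinery to simulate.)

def orbit(x, steps=60, r=4.0):
    """Iterate the logistic map from x and record every value."""
    values = []
    for _ in range(steps):
        x = r * x * (1.0 - x)
        values.append(x)
    return values

a = orbit(0.4)          # baseline starting value
b = orbit(0.4 + 1e-9)   # starting value shifted by one billionth

# Watch the tiny initial difference swell as the iterations proceed.
for n in (0, 10, 20, 30, 59):
    print(f"step {n:2d}: difference = {abs(a[n] - b[n]):.2e}")
```

The gap between the two runs roughly doubles with each iteration, so a difference of one part in a billion becomes a difference of order one after about thirty steps, exactly the out-of-proportion growth Poincaré described.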

During the 1950s, mathematician-turned-meteorologist Edward Lorenz wasn’t happy with the models that were being used for weather prediction (he had probably been blamed for too many ruined picnics). Weather depends on a number of factors such as temperature, humidity, airflow, and so on, and these are to a certain extent interdependent but nonlinear, that is, they are not directly proportional to one another. The models that were being used, however, were linear models. Over the course of the next few years he gathered data and began to put it together. He worked up a mathematical software program (which included twelve differential equations) to study a model of how an air current would rise and fall while being heated by the sun. One day, after obtaining some initial results from running his program, he decided that he would extend his calculations further. Because this was 1961, not only was his computer cumbersome, weighing in at 740 pounds, it was slow. He made a decision to restart the program in the middle of the calculation to save time, and this serendipitous lack of patience and his perceptive brain made him famous. After inputting the data the machine had calculated at that middle point in the previous run, he went out for coffee as the computer chugged along.

Lorenz expected he would get the same result as when he had last run the program—after all, computer code is deterministic. When he came back with his coffee, however, the results were completely different! No doubt exasperated, he at first thought it was a problem with the hardware, but eventually he traced it to the fact that instead of inputting the original number .506127, he had rounded off to the third decimal place and typed only .506. Because Poincaré’s chaotic systems had not seen the light of day for more than a half century, so small a difference was considered insignificant. For this system, however, a complex system with many variables, it wasn’t! Lorenz had rediscovered chaos theory.
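Lorenz’s accident is easy to recreate. The sketch below is a minimal stand-in, using the familiar three-variable Lorenz system rather than his original twelve-equation weather model: one run starts from the stored value 0.506127, the other from the rounded 0.506 he retyped, and everything else is identical and perfectly deterministic.

```python
# Recreating Lorenz's rounding accident with the three-variable
# Lorenz system (a simplified stand-in for his twelve-equation model).

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz equations."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

def trajectory(x0, steps=3000):
    """Integrate from (x0, 1.0, 1.05) and record the x-coordinate."""
    x, y, z = x0, 1.0, 1.05
    xs = []
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
        xs.append(x)
    return xs

full = trajectory(0.506127)   # the number the machine had stored
rounded = trajectory(0.506)   # the number Lorenz retyped

gap = max(abs(p - q) for p, q in zip(full, rounded))
print(f"initial difference:  {0.506127 - 0.506:.6f}")
print(f"largest gap between the two runs: {gap:.1f}")
```

The two trajectories track each other for a while and then separate completely, even though the code contains no randomness at all; the difference of about one part in four thousand in the starting value is enough.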

Weather is now understood to be a chaotic system. Long-term forecasts just are not feasible, because there are too many variables that are impossible to measure with any degree of accuracy, and even if you could measure them, the tiniest imprecision in any one of the initial measurements would cause a tremendous variation in the end result. In 1972 Lorenz gave a talk about how even tiny uncertainties would eventually overwhelm any calculations and defeat the accuracy of a long-term forecast. This lecture, with the title “Predictability: Does the Flap of a Butterfly’s Wings in Brazil Set off a Tornado in Texas?,” sired the term butterfly effect11 and captured the imagination and fueled the fire of determinists. Chaos doesn’t mean that a system is behaving randomly; it means that the system is unpredictable because it has many variables, it is too complex to measure, and even if it could be measured, the measurement cannot, even in theory, be done accurately, and the tiniest inaccuracy would change the end result enormously. To determinists, it just means that weather is a huge system with many variables that still follows deterministic behavior, to such an extreme that something as minute as the flap of a butterfly’s wing affects it.

Weather is an unstable system that exists far from thermodynamic equilibrium, as do most of nature’s systems. These types of systems caught the eye of physical chemist Ilya Prigogine. As a child, Prigogine was drawn to archeology and music; later, as a university student, he became interested in science. The commingling of these interests made Prigogine question Newtonian physics, which treated time as a reversible process. That didn’t make sense to someone with early interests in subjects where time proceeds in one direction only. So while weather presented a problem for Newtonian physics because it is irreversible, it interested Prigogine. He called these types of systems “dissipative systems,” and in 1977 he won the Nobel Prize in Chemistry for the work he pioneered on them. Dissipative systems do not exist in a vacuum but are thermodynamically open systems that exist in an environment with which they are constantly exchanging matter and energy. Hurricanes and cyclones are dissipative systems. They are characterized by the spontaneous appearance of symmetry breaking (emergence) and the formation of complex structures. Symmetry breaking occurs when small fluctuations acting on a system cross a critical point and determine which of several equally likely outcomes will occur. A well-known example is a ball sitting at the top of a symmetrical hill, where any tiny disturbance will cause it to roll off in one particular direction, thus breaking the symmetry and producing a particular outcome. We will come back to this idea of the emergence of complex systems in a bit.
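The ball-on-a-hill picture can also be put in code. The toy model below is mine, not Prigogine’s thermodynamics: near the top of a symmetric hill the dynamics are unstable, and an arbitrarily small nudge, not the (perfectly symmetric) governing law, decides which side the ball ends up on.

```python
# Toy sketch of symmetry breaking: a ball balanced on a symmetric
# hilltop, linearized as the unstable dynamics dx/dt = x. The law is
# symmetric (replacing x with -x maps solutions to solutions), yet a
# tiny fluctuation selects one of the two equally likely outcomes.

def settle(nudge, dt=0.01, steps=2000):
    """Forward-Euler integration of dx/dt = x from a tiny displacement."""
    x = nudge
    for _ in range(steps):
        x += dt * x   # displacement grows exponentially away from the top
    return x

left = settle(-1e-9)    # nudged a billionth of a unit to the left
right = settle(+1e-9)   # nudged a billionth of a unit to the right
print(left, right)      # same magnitude, opposite sides of the hill
```

Both runs obey the same equation and start essentially at the same place, yet they end up on opposite sides: the fluctuation, amplified past the critical point, determines the outcome.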

So we now understand that weather forecasts can be accurate only in the short run. Even with the most powerful supercomputer imaginable, long-term forecasts will always be no better than guesses. Well, hasn’t it always been said that only a fool predicts the weather? And although weather is traditionally one of those safe topics to discuss, it may no longer be so at some dinner parties. If the presence of chaotic systems in nature, Poincaré’s fly in the ointment, limits our ability to make accurate predictions with any degree of certainty using deterministic physical laws, it presents a quandary for physicists. It seems to imply that either randomness lurks at the core of any deterministic model of the universe or we will never be able to prove that deterministic laws apply in complex systems. Some physicists, because of this fact, are scratching their heads and thinking that it is meaningless to say that the universe is deterministic in its behavior. Now maybe at your house this is not a big deal, but imagine you are at a dinner party with, well, errr, how about Mr. Determinism himself, Baruch Spinoza, who said, “There is no mind absolute or free will, but the mind is determined for willing this or that by a cause which is determined in its turn by another cause, and this one again by another, and so on to infinity.” Or maybe Albert Einstein, who said, “In human freedom in the philosophical sense I am definitely a disbeliever. Everybody acts not only under external compulsion but also in accordance with inner necessity.” Hmmm, put a few other physicists in there and that would not be a good digestive environment. It turns out that Einstein was fighting his own determinism battles centered on quantum mechanics.

Quantum Mechanics Stirs Up a Hornet’s Nest

During the five decades or so that chaos theory was simmering in the background, it was quantum mechanics that had grabbed the headlines, and most physicists were focused on the microscopic: atoms, molecules, and subatomic particles, not the balls in my living room or in Poincaré’s sky. What they were finding out sent the world of physics into a tailspin. After three centuries, just when everyone had complacently assumed that Newton’s laws were totally universal, physicists found that atoms didn’t obey the so-called universal laws of motion. How could Newton’s laws be fundamental laws if the stuff of which objects are made, atoms, doesn’t obey the same laws as the objects themselves? As Richard Feynman once pointed out, exceptions prove the rule . . . wrong.12 Now what was going on? Atoms, molecules, and subatomic particles don’t act like the balls in my living room. In fact, they are not balls at all, but waves! Waves of nothing! Particles are packets of energy with wavelike properties.

Crazy stuff happens in the quantum world. For instance, photons have no mass, yet they do have angular momentum. Quantum theory was developed to explain why an electron stays in its orbit, which could be explained neither by Newton’s laws nor by Maxwell’s laws of classical electromagnetism. It has successfully described particles and atoms in molecules, and its insights have led to transistors and lasers. But a philosophical problem lurks within quantum mechanics. Schrödinger’s equation, which describes in a deterministic way how the wave function changes with time (and is reversible), cannot predict where the electron is in its orbit at any one state in time: that is a probability. If one actually measures the position, the act of measuring it distorts what the value would have been had it not been measured. This is because certain pairs of physical properties are related in such a manner that both cannot be known precisely at the same time: The more precisely one knows one property (by measuring it), the less precisely the other is known. In the case of the electron in orbit, the paired properties are position and momentum. If you measure the position, then it changes the momentum and vice versa. The theoretical physicist Werner Heisenberg presented this as the uncertainty principle. Uncertainty was not a happy thought for physicists with determinist views, but it forced them to a different way of thinking. More than half a century ago, Niels Bohr, in his Gifford Lectures spanning 1948–1950, and even earlier in a 1937 article, was already pulling in the reins on determinism when he said, “The renunciation of the ideal of causality in atomic physics . . . has been forced upon us . . .”13 and Heisenberg went even further when he said, “I believe that indeterminism, that is, is necessary, and not just consistently possible.”14
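
Heisenberg's trade-off for the position-momentum pair is usually written as a simple inequality:

```latex
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}
```

Here $\Delta x$ is the spread in position, $\Delta p$ the spread in momentum, and $\hbar$ the reduced Planck constant: sharpening one spread necessarily widens the other, no matter how good the measuring apparatus.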

Another lurking problem is the issue of time and causation. Time and semantics are two bugaboos that present themselves when you think about causation. When one recklessly and willy-nilly uses the word causes, one can be thrown into an endless regression of questions and answers, as if being interviewed by a two-year-old who has just learned the word (with its attendant punctuation) why? Eventually in this regression of whys, as many determinists and reductionists will point out, you will get down to atoms and subatomic particles. But this presents a fundamental problem, as systems theorist Howard Pattee, an emeritus professor at the State University of New York–Binghamton, points out:

    [T]he microscopic equations of physics are time-symmetric and therefore conceptually reversible. Consequently the irreversible concept of causation is not formally supportable by microphysical laws, and if it is used at all it is a purely subjective linguistic interpretation of the laws. . . . Because of this time symmetry, systems described by such reversible dynamics cannot formally (syntactically) generate intrinsically irreversible properties such as measurement, records, memories, controls, or causes. . . . Consequently, no concept of causation, especially downward causation, can have much fundamental explanatory value at the level of microscopic physical laws.15

And for the semantic problem, Pattee adds, “[T]he concepts of causation have completely different meanings in statistical or deterministic models,” and gives the following example: If you were to ask “What is the cause of temperature?” a determinist will assume that cause refers to a microscopic event and say it is caused by the molecules exchanging their kinetic energy by collisions. But the skeptical observer, scratching his head, will note that the measuring device averages this exchange, and does not measure the initial conditions of all the molecules and that averaging, my dear sir (or madam), is a statistical process. An average is not observable in a microscopic, determinist model. We have a case of apples and oranges. Pattee wags his finger at those who champion one model over the other and instead champions the idea that they are both needed and are complementary to each other. “I am using complementary here in Boltzmann’s and Bohr’s sense of logical irreducibility. That is, complementary models are formally incompatible but both necessary. One model cannot be derived from, or reduced to, the other. Chance cannot be derived from necessity, nor necessity from chance, but both concepts are necessary. . . . It is for this reason that our concept of a deterministic cause is different from our concept of a statistical cause. Determinism and chance arise from two formally complementary models of the world. We should also not waste time arguing whether the world itself is deterministic or stochastic since this is a metaphysical question that is not empirically decidable.” I love that you get to tell everyone to hush when you are an emeritus professor.

Of course many determinists are anxious to point out that the chain of causes according to determinism is a chain of events not particles, so it never gets down to atoms or subatomic particles. Instead, it traces back to the big bang. In Aristotelian terms, the chain is a series of efficient causes rather than material causes.

Emergence

Smugly I point out to my son-in-law that the floor doesn’t affect the atoms in the ball. Unfortunately, he is a voracious reader with an endlessly inquisitive mind. He points out that Newton’s laws only seem to fail at the level of the atom, one of those things that the physicists with their super measuring devices stumbled upon. “We are not dealing with atoms, but balls. You are talking about another level of organization that doesn’t apply here.” The smart aleck brings up the topic of emergence. Emergence is when micro-level complex systems that are far from equilibrium (thus allowing for the amplification of random events) self-organize (creative, self-generated, adaptability-seeking behavior) into new structures, with new properties that previously did not exist, to form a new level of organization on the macro level.16 There are two schools of thought on emergence. In weak emergence, the new properties arise as a result of the interactions at an elemental level, and the emergent property is reducible to its individual components; that is, you can figure out the steps from one level to the next, which is the deterministic view. In strong emergence, by contrast, the new property is irreducible, is more than the sum of its parts, and, because of the amplification of random events, its laws cannot be predicted from an underlying fundamental theory or from an understanding of the laws of another level of organization. This is what the physicists stumbled upon, and they (and their left-brain interpreters) didn’t like the inexplicable idea much, but many have come to accept that this is the way things are. Ilya Prigogine, however, was happy about one thing. He could identify the “arrow of time” as an emergent property that appears at a higher, macro, organizational level. Time does matter on the macro level, as is obvious in biological systems. Emergence doesn’t just apply to physics. It applies to all organized systems: cities emerge out of bricks, Beatlemania out of what? Calling a property emergent does not explain it or how it came to be, but rather puts it on the appropriate level to more adequately describe what is going on.

You may not know it, but authors do not have full jurisdiction over the titles of their books and the ultimate choice emerges (inexplicably?) from the publisher. I wanted to call my last book Phase Shift. A phase shift in matter, say from water to ice, is a change in the molecular organization resulting in different properties. I liked the analogy that the difference between the human brain and the brains of other animals is a change in the neuronal organization with resulting new properties. The publisher wasn’t impressed. He called it Human. What has become obvious to most physicists (and apparently my son-in-law) is that at different levels of structure, there are different types of organization with completely different types of interactions governed by different laws, and one emerges from the other but does not emerge predictably. This is even true for something as basic as water turning to ice, as physicist Robert Laughlin has pointed out: Ice has so far been found to have eleven distinct crystalline phases, but none of them was predicted from first principles!17

The balls in my living room are made up of atoms that behave as described by quantum mechanics, and when those microscopic atoms come together to form macroscopic balls, a new behavior emerges, and that behavior is what Newton observed and described. It turns out that Newton’s laws aren’t fundamental, they are emergent; that is, they are what happens when quantum matter aggregates into macroscopic fluids and objects. It is a collective organizational phenomenon. The thing is, you can’t predict Newton’s laws from observing the behavior of atoms, nor the behavior of atoms from Newton’s laws. New properties emerge that the precursors did not possess. This definitely throws a wrench into the reductionist’s works, and into determinism as well. If you recall, the corollary to determinism was that every event, action, et cetera, is predetermined and can be predicted in advance (if all parameters are known). Even when the parameters of the atom are known, however, one cannot predict Newton’s laws for objects. So far no one can predict which crystalline structure will occur when water freezes under different conditions.

So in some part because of chaos theory and perhaps more so because of quantum mechanics and emergence, physicists are sneaking out the determinism back door, with their tails between their legs. Richard Feynman, in his 1961 lectures to Caltech freshmen, famously declared: “Yes! Physics has given up. We do not know how to predict what would happen in a given circumstance, and we believe now that it is impossible—that the only thing that can be predicted is the probability of different events. It must be recognized that this is a retrenchment in our earlier ideal of understanding nature. It may be a backward step, but no one has seen a way to avoid it. . . . So at the present time we must limit ourselves to computing probabilities. We say ‘at the present time,’ but we suspect very strongly that it is something that will be with us forever—that it is impossible to beat that puzzle—that this is the way nature really is.”18

The big question hovering over the head of the phenomenon of emergence is whether this unpredictability is a temporary state of affairs or not. Just because we don’t know it yet doesn’t necessarily mean that it is unknowable, although it could be. Albert Einstein believed we considered things to be random merely out of ignorance of some basic property, whereas Niels Bohr believed that probability distributions were fundamental and irreducible. Adelphi University professor Jeffrey Goldstein, who studies complexity science, points out that in some cases that have seemingly been explained, the problem wasn’t emergence itself but that the example used was not really an example of emergence; in the case of a strange attractor, by contrast, “mathematical theorems support the inviolable unpredictability of this particular emergent. . . .” But as McGill University philosopher and physicist Mario Bunge points out, “Explained emergence is still emergence”19 and even if one level can be ultimately derived from another “to dispense altogether with classical ideas seems sheer fantasy, because the classical properties, such as shape, viscosity, and temperature, are just as real as the quantum ones, such as spin and nonseparability. Shorter: the distinction between the quantum and classical levels is objective, not just a matter of levels of description and analysis.”

Meanwhile back at the neuroscience department, however, hard determinism still reigns. Hard determinists have difficulty accepting that there is more than one level. They have a hard time accepting the possibility of the radical novelty that accompanies the emergence of a higher level. And why is this? It is because there is so much evidence that the brain functions automatically and that our conscious experience is an after-the-fact experience. At this point, let’s once again remember what a brain is for. This is something that neuroscientists don’t tend to think about much, but the brain is a decision-making device. It gathers information from all sorts of sources to make decisions from moment to moment. Information is gathered, computed, a decision is made, and then you get the sensation of conscious experience. Now you can actually do a little experiment for yourself that demonstrates that consciousness is a post hoc experience. Touch your nose with your finger and you will feel the sensation on your nose and your finger simultaneously. However, the neuron that carries the sensation from your nose to the processing area in the brain is only about three inches long, while the neuron from your hand is about three and a half feet long, and the nerve impulses travel at the same velocity. There is a difference of a few hundred (250–500) milliseconds in the amount of time that it takes for the two sensations to reach the brain, but you are not conscious of this time differential. The information is gathered from the sensory input and computed, a decision is made that both have been touched simultaneously even though the brain did not receive the impulses simultaneously, and only after that do you get the sensation of conscious experience. Consciousness takes time, but it arrives after the work is done!
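
The nose-and-finger arithmetic can be checked directly. Assuming the path lengths quoted in the text and a conduction velocity of 2 to 4 meters per second (an assumption on my part, roughly the speed of slow unmyelinated fibers; fast touch fibers would give a far smaller gap), the arrival-time difference comes out in the quoted 250–500 millisecond range.

```python
# Back-of-the-envelope check of the nose-versus-finger timing example.
# Path lengths come from the text; the 2-4 m/s conduction velocity is an
# assumption chosen to show where a 250-500 ms spread could come from.
# The only physics here is delay = distance / velocity.

INCH = 0.0254  # meters per inch

nose_path = 3 * INCH            # ~0.08 m from nose to brain
finger_path = 3.5 * 12 * INCH   # ~1.07 m from fingertip to brain

for velocity in (2.0, 4.0):     # assumed meters per second
    delta_ms = (finger_path - nose_path) / velocity * 1000.0
    print(f"at {velocity} m/s the finger signal lags by ~{delta_ms:.0f} ms")
```

Whatever the exact fiber speed, the qualitative point survives: the two signals cannot arrive together, yet the brain reports one simultaneous touch.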

Consciousness: A Day Late and a Dollar Short

These time lapses have been documented repeatedly beginning more than twenty-five years ago. Benjamin Libet, a physiologist at the University of California–San Francisco, shook things up when he stimulated the brain of an awake patient during the course of a neurosurgical procedure and found that there was a time lapse between the stimulation of the cortical surface that represents the hand and when the patient was conscious of a sensation in the hand.20 In later experiments, brain activity involved in the initiation of an action (pushing a button) occurred about five hundred milliseconds before the action, and that made sense. What was surprising was that there was increasing brain activity related to the action as much as three hundred milliseconds before the conscious intention to act, according to subjects’ reports. The buildup of electrical charge within the brain that preceded what were considered conscious decisions was called Bereitschaftspotential or, more simply, readiness potential.21

Since the time of Libet’s original experiments, as predicted by earlier psychologists, testing has become more sophisticated. Using fMRI, we now no longer think of the brain as a static system, but as a dynamic, ever-changing system that is constantly in action. Using these techniques, John-Dylan Haynes22 and his colleagues expanded Libet’s experiments in 2008 to show that the outcomes of an inclination can be encoded in brain activity up to ten seconds before it enters awareness! The brain has acted before its person is conscious of it. Not only that, from looking at the scan, they can make a prediction about what the person is going to do. The implications of this are rather staggering. If actions are initiated unconsciously, before we are aware of any desire to perform them, then the causal role of consciousness in volition is out of the loop: Conscious volition, the idea that you are willing an action to happen, is an illusion. But is this the right way to think about it? I am beginning to think not.

Hard Determinists: The Causal Claim Chain Gang

So the hard determinists in neuroscience make what I call the causal chain claim: (1) The brain enables the mind and the brain is a physical entity; (2) The physical world is determined, so our brains must also be determined; (3) If our brains are determined, and if the brain is the necessary and sufficient organ that enables the mind, then we are left with the belief that the thoughts that arise from our mind also are determined; (4) Thus, free will is an illusion, and we must revise our concepts of what it means to be personally responsible for our actions. Put differently, the concept of free will has no meaning. The concept of free will was an idea that arose before we knew all this stuff about how the brain works, and now we should get rid of it.

There is no disagreement among the neuroscientists about the first claim, that the brain enables the mind in some unknown way and the brain is a physical entity. Claim 2, however, has become a loose link and is under attack: Many physicists are no longer sure that the physical world is predictably determined because the nonlinear mathematics of complex systems does not allow exact predictions of future states. Now we have claim 3 (that our thoughts are determined) on shaky ground. Although some neuroscientists think we may prove that specific neuronal firing patterns will produce specific thoughts and that they are predetermined, none has a clue about what the deterministic rules would be for a nervous system in action. I think that we are facing the same conundrum that physicists dealt with when they assumed Newton’s laws were universal. The laws are not universal to all levels of organization; it depends which level of organization you are describing, and new rules apply when higher levels emerge. Quantum mechanics supplies the rules for atoms, Newton’s laws the rules for objects, and neither can be completely predicted from the other. So the question is whether we can take what we know from the micro level of neurophysiology about neurons and neurotransmitters and come up with a determinist model to predict conscious thoughts, the outcomes of brains, or psychology. Even more problematic is predicting the outcome when, say, three brains encounter one another. Can we derive the macro story from the micro story? I do not think so.

I do not think that brain-state theorists, those neural reductionists who hold that every mental state is identical to some as-yet-undiscovered neural state, will ever be able to demonstrate it. I think conscious thought is an emergent property. That doesn’t explain it; it simply recognizes its reality or level of abstraction, like what happens when software and hardware interact, that mind is a somewhat independent property of brain while simultaneously being wholly dependent upon it. I do not think it possible to build a complete model of mental function from the bottom up. If you do think this is possible, then, oddly enough, a spiny crustacean and a biologist should give you pause about how it might all work.

The Spiny Lobster Problem

Eve Marder has been studying the simple nervous system and the resulting motility patterns of spiny lobster guts. She has isolated the entire pattern of the network with every single neuron and synapse worked out, and she models the synapse dynamics to the level of neurotransmitter effects. Deterministically speaking, from knowing and mapping all this information, she should be able to piece it together and describe the resulting function of the lobster gut. Her laboratory simulated more than 20 million possible network combinations of synapse strengths and neuron properties for this simple little nervous system.23 By modeling all those combinations, it turned out that about 1–2 percent could lead to appropriate dynamics that would create the motility pattern observed in nature. Even though that is a small percentage, it still works out to 200,000 to 400,000 different tunings that will result in the exact same behavior at any given moment (and this is a very simple system with few parts)! The philosophical concept of multiple realizability—the idea that there are many ways to implement a system to produce one behavior—is alive and well in the nervous system.
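
A miniature version of Marder's sweep shows the shape of the result. The three-synapse circuit and its "burst period" formula below are invented for illustration; only the logic of the sweep (enumerate every tuning, keep those whose behavior matches a target) mirrors her approach.

```python
# A toy Marder-style parameter sweep: enumerate every combination of a few
# synapse "strengths" and count how many distinct tunings produce the same
# target behavior. The circuit model here is hypothetical, invented purely
# to illustrate multiple realizability.
from itertools import product

strengths = [round(0.1 * i, 1) for i in range(1, 11)]  # 0.1, 0.2, ... 1.0

def burst_period(g1, g2, g3):
    """Hypothetical stand-in for simulating the circuit's rhythm."""
    return 1.0 / (g1 + 0.5 * g2) + g3

target, tolerance = 1.5, 0.05  # the "behavior observed in nature"

functional = [combo for combo in product(strengths, repeat=3)
              if abs(burst_period(*combo) - target) < tolerance]

total = len(strengths) ** 3
print(f"{len(functional)} of {total} tunings give the same rhythm")
```

Even in this tiny made-up model, many different parameter settings land on one behavior, which is the heart of multiple realizability: observing the behavior does not tell you which tuning produced it.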

The enormous diversity of network configurations that could lead to an identical behavior leads one to wonder if it is possible to figure out, with single-unit analysis and highly molecular approaches, what is going on to produce a behavior. This is a profound problem for the reductionist neuroscientist, because it shows that analyzing nerve circuits may be able to inform how the thing could work but not how it actually does work. On the surface, it seems to reveal how hard it is going to be to get a specific neuroscientific account of a specific behavior. Her work almost comes off as supporting the idea of emergence—that studying neurons won’t get us to the right level of explanation. There are too many different states that can lead to one outcome. Should neuroscientists despair?

John Doyle doesn’t think so and sees no need for that kind of talk at all. He points out that for anything built of multiple components, as the number of circuit components and parameters grows, the set of possible circuits grows more than exponentially. There is also a smaller, but still exponentially growing, set of functional circuits. Importantly, the functional set is an exponentially vanishing fraction of the whole set. So even though the number of possible combinations is huge, the number of functional combinations is only a small fraction of that huge number.

Well, that is what Eve Marder and her colleagues discovered, and those relationships hold for many kinds of things, not just lobsters. For example, as Doyle says, “there are a huge number of English words, something like more than 10⁵ words. But take the word organized. . . . It has 9 different letters, so there are 362,880 sequences with just those nine letters, but only one of them is a functional English word. So any long random string of letters is vanishingly unlikely to be a real word (e.g., roaginezd), yet there are still a huge number of words.” As Doyle points out, this is a good thing, because it is consistent with the idea that the brain is a layered system. Being layered buys a lot. It gets down to the idea of robustness. The layer below creates a very robust yet flexible platform for the emergent layer above.
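
Doyle's word count checks out by brute force: the nine distinct letters of organized admit 9! = 362,880 orderings, exactly one of which is the word itself.

```python
# Doyle's example, verified by exhaustive enumeration: generate every
# ordering of the nine distinct letters of "organized" and count them.
from itertools import permutations
from math import factorial

word = "organized"
orderings = set("".join(p) for p in permutations(word))

print(len(orderings))        # 362880, since all nine letters are distinct
print(factorial(len(word)))  # 362880, the same count by the formula 9!
```

The functional fraction, one word out of 362,880 strings, is exactly the "exponentially vanishing fraction" Doyle describes.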

Marder’s work has revealed the problem for neuroscientists. The task is to further understand how the various layers of the brain interact, indeed how to even think about it and develop concepts and a vocabulary for those interdependent interactions. Working from this perspective has the possibility of not only demystifying what is truly meant by concepts such as emergence, but also allows for insights on how layers actually communicate with one another.

Even if we assume that claim 3—thoughts arising from our minds are determined—is true, then we are led to claim 4, that free will is an illusion. Putting aside the long history of compatibilism—the idea, held more or less by assertion, that people are free to choose in a determined universe—what does it really mean to talk about free will? “Ah, well, we want to be free to make our own decisions.” Yes, but what do we want to be free from? We don’t want to be free from our experience of life, we need that for our decisions. We don’t want to be free from our temperament because that also guides our decisions. We actually don’t want to be free from causation, we use that for predictions. A receiver trying to catch a football does not want to be free from all the automatic adjustments that his body is making to maintain his speed and trajectory as he dodges tackles. We don’t want to be free from our successfully evolved decision-making device. What do we want to be free from? This topic draws quite a bit of attention, as you can well imagine. I would like, however, to talk about the system in a different light.

You’d Never Predict the Tango If You Only Studied Neurons

For literally thousands of years, philosophers and nearly everyone else have argued about whether the mind and body are one entity or two. The belief that people are more than just a body, that there is an essence, a spirit or mind, whatever it is that makes you “you” or me “me,” is called dualism. Descartes is perhaps most famous for his dualistic stance. The idea that we have an essence beyond our physical selves comes so easily to us that we would think it odd if you were to resort to a mere physical description to describe someone. A friend of mine who recently met the retired Supreme Court justice Sandra Day O’Connor did not describe her height, hair color, or age, but said, “She is spunky and sharp as a tack.” She described her mental essence. While determinism has supplanted dualism in the brain sciences, it falls short of explaining behavior and our sense of personal responsibility and freedom.

I think that we neuroscientists are looking at these capacities from the wrong organizational level. We are looking at them from the individual brain level, but they are emergent properties found in the group interactions of many brains. Mario Bunge makes a point that we neuroscientists should heed: “[W]e must place the thing of interest in its context instead of treating it as a solitary individual.” The idea, which was difficult for physicists to swallow, but swallow most of them have, is that something happens that can’t be captured from a bottom-up approach. Reductionism in the physical sciences has been challenged by the principle of emergence. The whole system acquires qualitatively new properties that cannot be predicted from the simple addition of those of its individual components. One might apply the aphorism that the new system is greater than the sum of its parts. There is a phase shift, a change in the organizational structure, going from one scale to the next. Why do we believe in this sense of freedom and personal responsibility? “The reason we believe them, as with most emergent things, is because we observe them.” Although the physicist Robert Laughlin was commenting about phase transitions such as changing from water to ice, he may as well have been talking about our feelings of responsibility and freedom.

In speaking about the phenomenon of emergence in 1972, Nobel Prize–winning physicist Philip W. Anderson in his seminal paper, “More Is Different,” reiterated the idea that we can’t get the macro story from the micro story: “The main fallacy in this kind of thinking is that the reductionist hypothesis does not by any means imply a ‘constructionist’ one: The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe. In fact, the more the elementary particle physicists tell us about the nature of the fundamental laws, the less relevance they seem to have to the very real problems of the rest of science, much less to those of society.”24 He later waggles his finger at biologists, and no doubt at us neuroscientists, too: “The arrogance of the particle physicist and his intensive research may be behind us (the discoverer of the positron said ‘the rest is chemistry’), but we have yet to recover from that of some molecular biologists, who seem determined to try to reduce everything about the human organism to ‘only’ chemistry, from the common cold and all mental disease to the religious instinct. Surely there are more levels of organization between human ethology and DNA than there are between DNA and quantum electrodynamics, and each level can require a whole new conceptual structure.”

In his wonderful book A Different Universe, Robert Laughlin, who won the Nobel Prize in Physics in 1998, said about the dawning of the understanding of emergence, “What we are seeing is a transformation of worldview in which the objective of understanding nature by breaking it down into ever smaller parts is supplanted by the objective of understanding how nature organizes itself.”

Physicists have realized that a complete theoretical understanding of the microscopic constituents does not yield a set of general theories for how those constituents are put together into interesting macromolecular structures, or for how the processes work that make each structure what it is. That nature does it is in no way in question; whether we can theorize, predict, or understand the process is. Richard Feynman thought it highly improbable; Philip Anderson and Robert Laughlin believe it is impossible. The upwardly causal, constructionist view that understanding the nervous system will allow us to understand all the rest of it is not the way to think about the problem.

Emergence is a common phenomenon that is accepted in physics, biology, chemistry, sociology, and even art. When a physical system does not demonstrate all the symmetries of the laws by which it is governed, we say that these symmetries are spontaneously broken. Emergence, this idea of symmetry breaking, is simple: Matter collectively and spontaneously acquires a property or preference not present in the underlying rules themselves. The classic example from biology is the huge, towerlike structures built by some ant and termite species. These structures only emerge when the ant colony reaches a certain size (more is different) and could never be predicted by studying the behavior of single insects in small colonies.

Yet, emergence is mightily resisted by many neuroscientists, who sit grimly in the corner and continue to shake their heads. They have been celebrating that they have finally dislodged the homunculus from the brain. They have defeated dualism. All the ghosts in the machine have been banished and they, as sure as shootin’, are not letting any back in. They are afraid that putting emergence into the equation may imply that something other than the brain is doing the work, and that would let the ghost back into the deterministic machine that the brain is. No emergence for them, thank you! I think this is the wrong way for neuroscientists to look at the problem. Emergence is not a mystical ghost; it is simply the move from one level of organization to another. You, alone on the proverbial desert island, or for that matter, alone in your house on a rainy Sunday afternoon, follow a different set of rules than you do at a cocktail party at your boss’s house.

The key to understanding emergence is to understand that there are different levels of organization. My favorite analogy is that of the car, which I have mentioned before. If you look at an isolated car part, such as a camshaft, you cannot predict that the freeway will be full of traffic at 5:15 P.M. Monday through Friday. In fact, you could not even predict that the phenomenon of traffic would ever occur if you just looked at a brake pad. You cannot analyze traffic at the level of car parts. Did the guy who invented the wheel ever visualize the 405 in Los Angeles on a Friday evening? You cannot even analyze traffic at the level of the individual car. When you get a bunch of cars and drivers together, with the variables of location, time, weather, and society all in the mix, then at that level you can predict traffic. A new set of laws emerges, one that isn’t predicted from the parts alone.
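The jump from individual driving rules to collective traffic behavior can even be simulated. The sketch below is my own illustration, not one from the text: a minimal version of the well-known Nagel–Schreckenberg traffic model. Each simulated driver follows only local rules (speed up, don't hit the car ahead, occasionally dawdle), yet stop-and-go jams appear at the level of the whole road.

```python
import random

def step(positions, speeds, road_len=100, v_max=5, p_slow=0.3):
    """One parallel update of the Nagel-Schreckenberg ring-road model."""
    n = len(positions)
    order = sorted(range(n), key=lambda i: positions[i])  # ring order of cars
    new_speeds = speeds[:]
    for idx, i in enumerate(order):
        ahead = order[(idx + 1) % n]                      # the car in front
        gap = (positions[ahead] - positions[i] - 1) % road_len
        v = min(speeds[i] + 1, v_max, gap)                # speed up, never collide
        if v > 0 and random.random() < p_slow:            # a driver's random dawdling
            v -= 1
        new_speeds[i] = v
    return [(positions[i] + new_speeds[i]) % road_len for i in range(n)], new_speeds

random.seed(1)
pos = random.sample(range(100), 30)   # 30 cars scattered on a 100-cell ring road
vel = [0] * 30
for _ in range(200):
    pos, vel = step(pos, vel)
print(sum(v == 0 for v in vel), "cars sitting in spontaneous, emergent jams")
```

No rule anywhere mentions a jam. Jams are a property of the collection, visible only when enough cars interact, which is exactly the "more is different" point.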

The same holds true for brains. Brains are automatic machines following decision pathways, but analyzing single brains in isolation cannot illuminate the capacity for responsibility. Responsibility is a dimension of life that comes from social exchange, and social exchange requires more than one brain. When more than one brain interacts, new and unpredictable things begin to emerge, establishing a new set of rules. Two of the properties that are acquired in this new set of rules, and that weren’t previously present, are responsibility and freedom. They are not found in the brain, just as John Locke observed: “the will in truth, signifies nothing but a power, or ability, to prefer or choose. And when the will, under the name of a faculty, is considered, as it is, barely as an ability to do something, the absurdity in saying it is free, or not free, will easily discover itself.”25 Responsibility and freedom are found, however, in the space between brains, in the interactions between people.

How to Rile a Neuroscientist

Modern neuroscience is happy to accept that human behavior is the product of a probabilistically determined system, which is guided by experience. But how is that experience doing the guiding? If the brain is a decision-making device and is gathering information to inform those decisions, then can a mental state that is the result of some experience or some social interaction affect or constrain future mental states? If we were all French, we would jut out our upper lip in exasperation, let out a puff of breath, shrug, and say, “but of course,” unless, that is, we were neuroscientists or perhaps philosophers. This means top-down causation. To a group of neuroscientists, however, those are fightin’ words. Invite a group of them to your house and bring it up at dinner at your peril. Better to invite the physicist Mario Bunge, who will tell us that we “should supplement every bottom-up analysis with a top-down analysis, because the whole constrains the parts: just think of the strains in a component of a metallic structure, or the stress in a member of a social system, by virtue of their interactions with other constituents of the same system.”

If we invite our systems control expert, Howard Pattee, he will be happy to tell us that while causation has no explanatory value at the level of physical laws, it certainly does at higher levels of organization. For instance, it is helpful to know that iron deficiency causes anemia. Pattee suggests that the everyday meaning of causation is pragmatic and is used for events that are controllable. Controlling the iron level will fix the anemia. We cannot change the laws of physics, but we can change the iron level. When a car rear-ends another car at the bottom of a hill, we say that the accident was caused by the worn-out brakes, something we can point our finger at and control. We do not, however, blame the laws of physics or all the chance circumstances that are not under our control (the fact that there was another car stopped at the light at the bottom of the hill, all the reasons that led to that driver being there, the timing of the traffic lights, and so on). Pattee sees this tendency to identify a single controllable cause “that, by itself, might have prevented the accident but maintained all other expected outcomes,” rather than to see the accident as the result of a complex system, as “one reason that downward causation is problematic. In other words, we think of causes in terms of the simplest proximal control structures in what would otherwise turn into an endless chain or network of concurrent, distributed causes.” That is to say, downward causation is chaotic and unpredictable.

And where, Pattee asks, does control enter the picture? Not at the micro level, because by definition physical laws describe only those relations between events that don’t vary from one observer to the next. When a parent sternly asks, “Why did you cheat on your test?” and receives the answer, “It was just atoms following the laws of physics” (the universal cause of all events), the child will be labeled a smart aleck and duly punished, probably even by the most reductionist of parents. The kid’s explanation needs to come up a few levels of behavior to where control can be exerted. Control implies some form of constraint. Control is not eating the jelly donut because you know it is not healthy, and not cheating on the test because, well, if you get caught you get in some kind of trouble. Control is an emergent property.

In neuroscience when you talk about downward causation you are suggesting that a mental state affects a physical state. You are suggesting that a thought at the Macro A level can affect the neurons on the Micro B physical level. The first question is, how do we get from the level of neurons (Micro B) to the emergent thought (Macro A)? David Krakauer, a theoretical biologist at the Santa Fe Institute, emphasizes that “the trick, for any level of analysis, is to find the effective variables containing all the information from below required to generate all the behavior of interest above. This is as much an art as a science. Now, ‘bottom-up causality’ (going from a B Micro level, a neuron, to an A macro level, a thought) can be both intractable and incomprehensible. ‘Top-down causality’ refers to the description of Macro A causing Micro B when A is expressed in higher-level effective variables and dynamics, and B in terms of the microscopic dynamics. Physically, all the interactions are microscopic (B–B) but not all the microscopic degrees of freedom matter.”26 That is, B can generate A, but A is still made up of B.

For example, Krakauer points out that when we program a computer, or control the computer in Pattee’s world, “we interface with a complex physical system that performs computational work. We do not program at the level of electrons, Micro B, but at a level of a higher effective theory, Macro A (for example, Lisp programming) that is then compiled down, without loss of information, into the microscopic physics. Thus, A causes B. Of course, A is physically made from B, and all the steps of the compilation are just B with B physics. But from our perspective, we can view some collective B behavior in terms of A processes.”
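Krakauer's compilation analogy can be made concrete in any high-level language. The small illustration below is mine, using Python's standard `dis` module rather than Lisp, but the point is the same: we write and reason at the Macro A level, while the interpreter compiles our description down to a Micro B instruction stream we never touch.

```python
import dis

def macro_a(x):
    """The Macro A description: we think in terms of 'square x and add one'."""
    return x * x + 1

# The Micro B level: the bytecode instructions the interpreter actually
# executes. We never program here, yet macro_a is nothing but these steps.
dis.dis(macro_a)

print(macro_a(3))  # → 10
```

Everything that physically happens, happens at the bytecode (and ultimately electron) level; the function name and the arithmetic expression are effective variables that let us cause B behavior by manipulating an A description.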

If we go back to my living room, the atoms come together and can generate the ball rolling across the floor, but the ball is still made up of atoms. We view the collective behavior of the atoms, Micro B, at the higher organizational level of the ball, Macro A, and we see it doing ball behavior following Newton’s laws, but the atoms are there at the core doing their own thing and following a different set of laws. In brain science we use concepts like anger, tone, and perspective for our Macro A states. These are the A coarse-grained variable states that we view standing in for the B micro states. Krakauer continues: “We work well with the A level, due to limitation of our own introspective awareness. Internally, something does the compiling before it reaches consciousness. So maybe either A or the compiler can be thought of as a language of thought. We are not separate from the machine, that Micro B level, but we understand ourselves at suitable A levels.

“The deeper point is that without these higher levels, there would be no possibility of communication, as we would have to specify every particle we wish to move in the utterance, rather than have the mind-compiler do the work.” Emergence is an absolute necessity if this teeming, seething system churning away at another level is to be controlled at all. The overall idea is that we have a hierarchy of emergent systems, rising from particle physics to atomic physics to chemistry to biochemistry to cell biology to physiology, and finally into mental processes.

Complementarity Sí, Downward Causation No

Once a mental state exists, is there downward causation? Can a thought constrain the very brain that produced it? Does the whole constrain its parts? This is the million-dollar question in this business. The classic puzzle is usually put this way: There is a physical state, P1, at time 1, which produces a mental state, M1. Then after a bit of time, now time 2, there is another physical state, P2, which produces another mental state, M2. How do we get from M1 to M2? This is the conundrum. We know that mental states are produced from processes in the brain so that M1 does not directly generate M2 without involving the brain. If we just go from P1 to P2 then to M2, then our mental life is doing no work and we are truly just along for the ride. No one really likes that notion. The tough question is, does M1, in some downward-constraining process, guide P2, thus affecting M2?

We may get a little help with this question from geneticists. They used to think gene replication was a simple, upwardly causal system: Genes were like beads on a string making up a chromosome that replicates and produces identical copies of itself. Now they know that genes are not that simple, that there is a multiplicity of events going on. Our systems control guy, Howard Pattee, finds that a good example of upward and downward causation is the genotype-phenotype mapping of description to construction. It “requires the gene to describe the sequence of parts forming enzymes, and that description, in turn, requires the enzymes to read the description. . . . In its simplest logical form, the parts represented by symbols (codons) are, in part, controlling the construction of the whole (enzymes), but the whole is, in part, controlling the identification of the parts (translation) and the construction itself (protein synthesis).” And once again Pattee wags his finger at extreme positions that champion one direction, upward or downward, as the more important. The two are complementary.

It is this sort of analysis that makes me realize the reasoning trap we can all too easily fall into when we look to Benjamin Libet’s kind of finding: that the brain does something before we are consciously aware of it. With the arrow of time moving in one direction, and with the notion that everything is caused by something before it, we lose our grip on the concept of complementarity. What difference does it make if brain activity goes on before we are consciously aware of something? Consciousness is its own abstraction on its own time scale, and that time scale is current with respect to it. Thus, Libet’s thinking is not correct. That is not where the action is, any more than a transistor is where the software action is.

Setting a course of action is automatic, deterministic, modularized, and driven not by one physical system at any one time but by hundreds, thousands, and perhaps millions. The course of action taken appears to us as a matter of choice, but the fact is, it is the result of a particular emergent mental state being selected by the complex interacting surrounding milieu.27 Action is made up of complementary components arising from within and without. That is how the machine (brain) works. Thus, the idea of downward causation may be confusing our understanding. As John Doyle says, “Where is the cause?” What is going on is the match between ever-present multiple mental states and the impinging contextual forces within which the brain functions. Our interpreter then claims we freely made a choice.

It gets more complicated. We are now going to have to consider the social context and the social constraints on individual actions. There is something going on at the group level.


* Thanks to Flip Wilson’s famous album The Devil Made Me Buy This Dress!

** Brain maps are neuronal representations of the world, one of which is for time, or the temporal map.

*** An attractor is a set (a collection of distinct objects) toward which a dynamical system evolves over time. A complicated set with a fractal structure is known as a strange attractor. (Wikipedia)