7

The Ghost in the Machine

‘Regarding the nature of the very foundation of mind, consciousness, we know as much as the Romans did: nothing.’

– Werner Loewenstein1

Thirteen years after delivering his Dublin lectures, Erwin Schrödinger returned to the subject of life in a series of presentations at Cambridge University entitled ‘The physical basis of consciousness’.2 Focusing on the question ‘What kind of material process is directly associated with consciousness?’, he proceeded to give a physicist’s eye-view of this most extraordinary of phenomena. Among life’s many baffling properties, the phenomenon of consciousness leaps out as especially striking. Its origin is arguably the hardest problem facing science today and the only one that remains almost impenetrable even after two and a half millennia of deliberation. If Schrödinger’s question ‘What is life?’ has proved hard enough to answer, ‘What is mind?’ is an even tougher nut to crack.

An explanation of mind, or consciousness, is more than an academic challenge. Many ethical and legal questions hinge on whether, or how much, consciousness is present in an organism. For example, opinions about abortion, euthanasia, brain death, vegetative states and locked-in syndrome may depend on the extent to which the subject is conscious.fn1 Is it right to artificially prolong the life of a permanently unconscious human being? How can we tell if an unresponsive stroke victim might actually be aware of their surroundings and in need of care? Animal rights involving definitions of cruelty are often based on very informal arguments about whether and when an animal can suffer or ‘feel pain’.fn2 Added to these concerns there is the emerging field of non-biological intelligence. Can a robot be conscious, and if so does it have rights and responsibilities? If we had an accepted definition of ‘degree of consciousness’ based on a sound scientific theory, then perhaps we could make better judgements about such contentious matters.

What is lacking is a comprehensive theory of consciousness. In Western societies there is a popular notion that the conscious mind is an entity in its own right. It is a view often traced back to the seventeenth-century French philosopher-scientist René Descartes, who envisaged human beings as made of two sorts of things: bodies and minds. He referred to res extensa (roughly speaking, material stuff) and res cogitans (wispy mind-stuff). In popular Christian culture the latter concept has sometimes become conflated with the soul, an immaterial extra ingredient that believers think inhabits our bodies and drifts off somewhere when we die. Modern philosophers (and theologians, for that matter) generally take a dim view of ‘Cartesian dualism’, as Descartes’ ‘separation of powers’ is known, preferring to think of human beings as unitary entities. In 1949 the Oxford philosopher Gilbert Ryle coined the pejorative phrase ‘the ghost in the machine’ to describe Descartes’ position (which he called ‘the official view’ of mind). He derisively drew an analogy between our immaterial minds controlling our mechanical bodies and, say, a driver controlling a car.3 Ryle argued that this mystical ‘dogma’ was not only wrong in fact but deeply flawed conceptually. Yet in the popular imagination, the mind is still regarded as some sort of nebulous ghost in the machine.

In this book I have argued that the concept of information can explain the astonishing properties of living matter. The supreme manifestation of biological information processing is the brain, so it is tempting to suppose that some aspect of information will form a bridge between mind and matter, as it does between life and non-life. Swirling patterns of information do not constitute a ‘ghost’ any more than they constitute a ‘life force’. Yet the manipulation of information by demon-like molecular structures is perhaps a faint echo of the dualism that Ryle derided. It is, however, a dualism rooted not in mysticism but in rigorous physics and computational theory.

IS ANYONE AT HOME?

To get started, let’s consider what we mean when we talk about consciousness in daily life. Most of us have a rough and ready definition: consciousness is an awareness of our surroundings and our own existence. Some people might throw in a sense of free will. We possess mental states consisting of feelings, thoughts and sensations, and somehow our mental world couples to the physical world through our brains. And that’s about as far as it goes. Attempts to define consciousness more precisely run into the same problems as attempts to define life but are far more vexing. The mathematician Alan Turing, famous for his work on the foundations of computing, addressed this question in a paper published in 1950 in Mind.4 Asking the loaded question ‘Can machines think?’, Turing prefigured much of today’s hand-wringing over the nature of artificial intelligence. His main contribution was to define consciousness by what he called ‘the imitation game’,fn3 often referred to as ‘the Turing test’. The basic idea is that if someone interrogates a machine and cannot tell from the answers whether the responses are coming from a computer or another human being, then the computer can be defined as conscious.

Some people object that just because a computer may convincingly simulate the appearance of consciousness doesn’t mean it is conscious; the Turing test attributes consciousness purely by analogy. But isn’t that precisely what we do all the time in relation to other human beings? Descartes famously wrote, ‘I think, therefore I am.’ But although I know my own thoughts, I cannot know yours without being you. I might infer from your behaviour, by analogy with mine, that ‘there’s somebody at home’ inside your body, but I can never be sure. And vice versa. The best I can say is ‘you look like you are thinking so you look like you exist’. There is a philosophical position known as solipsism that denies the existence of other minds. I won’t pursue it here because if you, the reader, don’t exist, then you won’t be interested in my arguments for solipsism and I will be wasting my time.

Philosophers have spent centuries trying to link the worlds of mind and matter, a conundrum that sometimes goes by the name of the ‘mind–body problem’. For thousands of years a popular view of consciousness, or mind, has been that it is a universal basic feature of all things. Known as panpsychism, this doctrine has taken many forms, but the common feature is the belief that mind suffuses the cosmos as an elementary quality; human consciousness is just an expression, focused and amplified, of a universal mental essence. In this respect it has elements in common with vitalism. Such thinking persisted well into the twentieth century; aspects of it can be found in Jung’s psychology, for example. However, panpsychism doesn’t sit comfortably with modern neuroscience, which emphasizes electrochemical complexity. In particular, higher brain functions are clearly associated with the collective organization of the neural architecture. It would make little sense to say that every neuron is ‘a little bit’ conscious and thus a collection of many neurons is very conscious. Only when millions of neurons are integrated into a complex and highly interconnected network does consciousness emerge. In the human brain, a conscious experience is made up of many components present simultaneously. If I am conscious of, say, a landscape, the momentary experience of the scene includes visual and auditory information from across the field of view, elaborately processed in different regions of the brain, then integrated into a coherent whole and (somehow!) delivered to ‘the conscious self’ (whatever that is) as a meaningful holistic experience.

All of which prompts the curious question, where precisely are minds? The obvious answer is: somewhere between our ears. But again, we can’t be completely sure. For a long while the source of feelings was associated not with the brain but with other organs, like the gut, heart and spleen. Indeed, a vestige of this ancient belief lives on when angry people are described as ‘venting their spleen’ or we refer to a ‘gut feeling’ to mean intuition. And the use of terms like ‘sweetheart’, ‘heartthrob’ and ‘heartbroken’ in the matter of romantic love is very common. It’s unlikely that the endearment ‘you are my sweetbrain’ (still less ‘my sweetamygdala’) would serve to ‘win the heart’ of a lady, even though it is scientifically more accurate.

More radically, how can we be sure that the source of consciousness lies within our bodies at all? You might think that because a blow to the head renders one unconscious, the ‘seat of consciousness’ must lie within the skull. But there is no logical reason to conclude that. An enraged blow to my TV set during an unsettling news programme may render the screen blank, but that doesn’t mean the news reader is situated inside the television. A television is just a receiver: the real action is miles away in a studio. Could the brain be merely a receiver of ‘consciousness signals’ created somewhere else? In Antarctica, perhaps? (This isn’t a serious suggestion – I’m just trying to make a point.) In fact, the notion that somebody or something ‘out there’ may ‘put thoughts in our heads’ is a pervasive one; Descartes himself raised this possibility by envisaging a mischievous demon messing with our minds. Today, many people believe in telepathy. So the basic idea that minds are delocalized is actually not so far-fetched. In fact, some distinguished scientists have flirted with the idea that not all that pops up in our minds originates in our heads. A popular, if rather mystical, idea is that flashes of mathematical inspiration can occur by the mathematician’s mind somehow ‘breaking through’ into a Platonic realm of mathematical forms and relationships that not only lies beyond the brain but beyond space and time altogether.5 The cosmologist Fred Hoyle once entertained an even bolder hypothesis: that quantum effects in the brain leave open the possibility of external input into our thought processes and thus guide us towards useful scientific concepts. He proposed that this ‘external guide’ might be a superintelligence in the far cosmic future using a subtle but well-known backwards-in-time property of quantum mechanics in order to steer scientific progress.6 Even if such wild notions are dismissed, extended minds could become the norm in the future. 
Humans may enjoy enhanced intelligence by outsourcing some of their mental activity to powerful computational devices that might be located in the cloud and coupled to their brains via wi-fi, thus repurposing brains as part receivers and part producers of consciousness.

An extreme version of the conjecture that our thoughts are generated outside our brains is the simulation argument, currently fashionable among certain philosophers and popularized by movies like The Matrix. The general idea is that what we take to be ‘the real world’ is actually a fancy virtual-reality show created inside a super-duper computer in the really real world. In this scheme, we human beings are modules of simulated consciousness.fn4 Nothing can be said about the simulators – who or what they are, or what it is – because we poor simulations are stuck inside the system and so can never access the transcendent world of the simulator/s. In our fake simulated world, we have (fake) simulated bodies that include simulated brains, but the actual thoughts, sensations, feelings, and so on, that go along with consciousness don’t arise in the fake brains at all but in the simulating system in another plane of existence altogether.

It’s fun to speculate about these outlandish scenarios, but from here on I’m going to stick to the conservative view that consciousness is indeed produced, somehow, in the brain and ask what sort of physical process can do that. Don’t be disappointed with this narrow agenda: there are still plenty of challenging problems to grapple with.

MIND OVER MATTER

Even non-solipsists – those who accept that other humans are conscious – cannot agree about which non-human organisms are conscious. Most people seem to be comfortable with the assumption that their pets have minds, but sliding down the tree of life towards its primitive trunk reveals no sharp boundary, no behavioural clues that ‘there is something in there’. Is a mouse conscious? A fly? An ant? A bacterium? If we want to argue by analogy, an important feature of consciousness is awareness of surroundings and an ability to respond appropriately to changes. Well, bacteria move towards food with what seems like purposeful agency. Yet it’s hard to imagine that a bacterium can really ‘feel hungry’ in the same manner as you or I. But who can say?

Sometimes appeal is made to brain anatomy. It is clear that most of what the brain and associated nervous system does is performed unconsciously. Basic housekeeping functions – sensory signal processing and integration, searching memory, controlling motor activity, keeping the heart beating – proceed without our being aware of them. Many regions of the brain tick over just fine when someone loses consciousness (for example, in deep sleep, or when anaesthetized), which suggests that not all the brain is conscious or, more precisely, that generating consciousness is a function confined to only part of the brain, often taken to be the corticothalamic system. But it is difficult to determine exactly what properties this region possesses that other, unconscious yet still stupendously complex parts of the brain do not possess. Furthermore, some animals that display intelligent behaviour, such as birds, have very differently organized brain anatomy, so either consciousness and intelligence don’t go hand in hand or attributing consciousness to a particular brain region is misconceived.

One thing isn’t contentious: the brain processes information. It is therefore tempting to seek ‘the source of consciousness’ in the patterns of information swirling inside our heads. Neuroscientists have made huge strides in mapping what is going on in the brain when the subject experiences this or that sensation, emotion or sensory input. It isn’t easy. The human brain contains 100 billion neurons (about the same as the number of stars in the galaxy) and each neuron connects with hundreds, maybe thousands, of others to form a vast network of information flow. Billions of rapid-firing neurons send elaborate cascades of electrochemical signals coursing through the network. Somehow, out of this electrical melee coherent consciousness emerges.

Distilling the problem down to basics, what we would like to know are the answers to the following two questions:

  1. What sort of physical processes generate consciousness? This was what Schrödinger asked. For example, swirling electrical patterns of the sort that occur in brains would seem to qualify, but what about swirling electrical patterns in the national power grid? If you answer yes to the first example and no to the second, then the question arises of whether it’s all down to the patterns, as opposed to the electricity as such. Is there a pattern complexity threshold, so that brains are complex enough but electricity grids aren’t? And if it’s patterns that count, must it be done with electricity, or would any complex shimmering pattern do? Turbulent fluids, perhaps? Or interlocking chemical cycles? Alternatively, could it be that some other ingredient is needed – what one might call the ‘electricity plus’ theory of consciousness? And if so, what is the ‘plus’ bit? Nobody knows.
  2. Given that minds exist, how are they able to make a difference in the physical world? How do minds couple to matter to give them causal purchase over material things? This is the ancient mind–body problem. If I choose to move my arm and my arm moves, something in the physical universe has changed (the position of my arm). But how does that happen? How is ‘choice’ or ‘decision’ transduced into movement of atoms? It’s no good telling me that my desire to move my arm is nothing but swirling electrical patterns which then trigger electrical signals that travel through the nerves to my arm and cause muscle contraction, because that just purports to explain mystery 2 by appealing to mystery 1.

Running through my description is a hidden assumption always implicit in discussions of consciousness, namely, that there exists an agent or person or entity that ‘possesses’ consciousness. A mind ‘belongs’ to someone. I’m referring of course to the sense of self. Strictly, we must differentiate between being conscious of the world and being conscious of oneself (‘self-consciousness’); perhaps a fly is conscious of the world but not of its own existence as an agent. But humans undeniably have a deep sense of self,fn5 of being some sort of ‘ghost in a machine’. Whatever the philosophical shortcomings of such dualism, it seems safe to say that almost everyone regards minds as real. But what are they? Not material or ethereal substances. Information, perhaps? Not just any old information, but very specific patterns of information swirling in the brain. The general notion that information flow in neural circuitry somehow generates consciousness seems obvious, but a full explanation for mind needs to go much further. If the informational basis of mind is right, then minds exist in the same sense that information exists. But we cannot disconnect mind from matter. As Rolf Landauer taught us, ‘information is physical’, so minds must perforce also be tied to the material goings-on in the brain.

But how?

THE FLOW OF TIME

‘The distinction between past, present and future is only a stubbornly persistent illusion.’

– Albert Einstein7

One clue to the link between neural information and consciousness comes from the most elementary aspect of human experience: our sense of the flow of time. Even under sensory deprivation people retain a sense of self and their continuing existence, so time’s passage is an integral part of self-awareness. In Chapter 2 I described the existence of an arrow of time that can be traced back to the second law of thermodynamics and, ultimately, to the initial conditions of the universe. There is no disagreement about that. However, many people conflate the physical arrow of time with the psychological sense of the flow of time. Popular-science articles commonly use phrases like ‘time flowing forwards’ or the possibility of ‘time running backwards’.

It’s obvious that everyday processes possess an inbuilt directionality in time, so that if we witnessed the reverse sequence – like eggs unbreaking and gases unmixing all by themselves – we would be flabbergasted. Note that I am careful to describe sequences of physical states in time, yet the standard way of discussing it is to refer to an arrow of time. This misnomer is seriously misleading. The arrow is not a property of time itself. In this respect, time is little different from space. Think of the spin of the Earth, which also defines an asymmetry (between north and south). We sometimes denote that by an arrow too: a compass needle points north, and on a map it is conventional to show an arrow pointing north. We would never dream of saying, however, that Earth’s north–south asymmetry (or the arrow on a map) is an ‘arrow of space’. Space cares nothing for the spinning Earth, or north and south. Similarly, time cares nothing for eggs breaking or reassembling, or gases mixing or unmixing.

To call the sensation of a temporal flow ‘the arrow of time’, as is so often done, clearly conflates two distinct metaphors. The first is the use of an arrow to indicate spatial orientation (as in a compass needle), and the second is by analogy with an arrow in flight, symbolizing directed motion. When the arrow on a compass needle points north it doesn’t indicate that you are moving north. In the same way, it is fine to attach an arrow to sequences of events in the world in order to distinguish past from future in the sequence, but what is not fine – what is absurd, in fact – is to then say that this arrow of asymmetry implies a movement towards the future along the timeline of events, that is, a movement of time.

My argument is further strengthened by noting that the alleged passage of time can’t be measured. There is no laboratory instrument that can detect the flow of time. Hold on, you might be thinking, don’t clocks measure time’s passage? No, actually. A clock measures intervals of time between events. It does this by correlating the positions of the clock hands with a state of the world (for example, the position of a ball, the mental state of an observer). Informal descriptions like ‘gravity slows time’ and ‘time runs faster in space than on Earth’ really mean that the hands of clocks in space rotate slower relative to the hands of identical clocks on Earth. (They do. It is easy to test by comparing clock readings.) The most abusive terminology of all is talk about ‘time running backwards’. Time doesn’t ‘run’ at all. A correct rendition of the physics here is the possible reversal in (unchanged) time of the normal directional sequence of physical states, for example, rubble spontaneously assembling itself into buildings during an earthquake, Maxwell demons creating order out of chaos. It is not time itself but the sequence of states which ‘goes backwards’.

In any case, it’s obvious that time can’t move. Movement describes the change of state of something (for example, the position of a ball) from one time to a later time. Time itself cannot ‘move’ unless there were a second time dimension relative to which its motion could be judged. After all, what possible answer can there be to the question ‘How fast does time pass?’ It has to be, ‘One second per second’ – a tautology! If you are not convinced, then try to answer the question ‘How would you know if the rate of passage of time changed?’ What would be observably different about the world if time speeded up or slowed down? If you woke up tomorrow and the rate of the flow of time had doubled, along with the rate of your mental processes, then nothing would appear to have changed, for the same reason that if you woke up and everything in the world was twice as big but so were you, nothing would look any different. Conclusion: the ‘flow of time’ makes no sense as a literal flow.

Although the foregoing points have been made by philosophers for over a century, the flow-of-time metaphor is so powerful that normal discourse is very hard without lapsing into it. Hard, but not impossible. Every statement about the world that makes reference to the passage of time can be replaced by a more cumbersome statement that makes no reference whatever to time’s passage but merely correlates states of the world at various moments to brain/mind states at those same moments. Consider for example the statement ‘With great anticipation we watched enthralled as the sun set over the ocean at 6 p.m.’ The same basic observable facts can be conveyed by the ungainly statement: ‘The clock configuration 5.50 p.m. correlates with the sun above the horizon and the observers’ brain/mental state being one of anticipation; the clock configuration 6.10 p.m. correlates with the sun being below the horizon and the observers’ brain/mental state being one of enthralment.’ Informal talk about flowing or passing time is indispensable for getting by in daily life but makes no sense when traced back to the physics of time itself.
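
The translation exercise above can even be made mechanical. In this sketch (the data are simply the sunset example restated; nothing else is assumed), the whole episode is stored as a static table correlating clock readings with world states and brain/mind states. The table has an order, which captures the past–future asymmetry, but nothing in it flows:

```python
# A minimal sketch of the 'no-flow' description of events: each clock
# reading is simply correlated with a state of the world and a
# brain/mind state at that same moment.
sunset_record = {
    "5.50 p.m.": {"sun": "above horizon", "observer": "anticipation"},
    "6.10 p.m.": {"sun": "below horizon", "observer": "enthralment"},
}

# Every fact in the ordinary 'flowing time' narrative is recoverable
# from the static table of correlations alone.
assert sunset_record["5.50 p.m."]["sun"] == "above horizon"
assert sunset_record["6.10 p.m."]["observer"] == "enthralment"

# Sorting the keys recovers the earlier/later ordering (the arrow),
# yet no element of the table moves anywhere.
for t in sorted(sunset_record):
    print(t, sunset_record[t])
```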

It is incontestable that we possess a very strong psychological impression that our awareness is being swept along on an unstoppable current of time, and it is perfectly legitimate to seek a scientific explanation for the feeling that time passes. The explanation of this familiar psychological flux is, in my view, to be found in neuroscience, not in physics. A rough analogy is with dizziness. Twirl around a few times and suddenly stop: you will be left with a strong impression that the world is rotating about you, even though it obviously isn’t. The phenomenon can be traced to processes in the inner ear and brain: the feeling of continuing rotation is an illusion. In the same way, the sense of the motion of time is an illusion, presumably connected in some way to the manner in which memories are laid down in the brain.

To conclude: time doesn’t pass. (I hope the reader is now convinced!)

Well, what does pass, then? I shall argue that it is the conscious awareness of the fleeting self that changes from moment to moment. The misconception that time flows or passes can be traced back to the tacit assumption of a conserved self. It is natural for people to think that ‘they’ endure from moment to moment while the world changes because ‘time flows’. But as Alice remarked in Lewis Carroll’s story, ‘It’s no use going back to yesterday, because I was a different person then.’8 Alice was right: ‘you’ are not the same today as you were yesterday. To be sure, there is a very strong correlation – a lot of mutual information, to get technical about it – between today’s you and yesterday’s you – a thread of information made up of memories and beliefs and desires and attitudes and other things that usually change only slowly, creating an impression of continuity. But continuity is not conservation. There are future yous correlated with (that is, observing) future states of the world, and past yous correlated with (observing) past states of the world. At each moment, the you appropriate to that world-state interprets the correlation with that state as ‘now’. It is indeed ‘now’ for ‘that you’ at ‘that time’. That’s all!

The flow-of-time phenomenon reveals ‘the self’ as a slowly evolving complex pattern of stored information that can be accessed at later times and provide an informational template against which fresh perceptions can be matched. The illusion of temporal flow stems from the inevitable slight mismatches.

DEMONS IN THE WIRING

So much for the elusive self. What about the brain? Here we are on firmer ground. Even on rudimentary inspection, it is clear that the brain is a ferment of electrochemical activity. First, some mind-blowing statistics. Recall that the human brain has about 100 billion neurons. These brain cells are powerhouses of information processing.9 Each has a fibre called an axon sprouting from its body; it can be a metre or more in length. Axons serve as wires that link neurons together to form a network. And what a dense network it is. Each neuron can be connected to up to 10,000 others; axons can branch hundreds of times. An axon doesn’t just patch straight into another cell. Instead, neurons are decorated with a dense thicket of hair-like projections, or dendrites, and an incoming axon clamps on to one of them. Other axons can attach to other dendrites of the same neuron, offering the opportunity to combine the incoming signals from many axons at once. It has been estimated that there could be as many as 1,000 trillion connections in the human brain as a whole, amounting to an astonishing level of complexity. Neurons can ‘fire’ (send pulses down axons) at a frenetic rate – maybe fifty times a second. All that adds up to the brain executing about 10¹⁵ logical operations per second, faster than the world’s fastest supercomputer. Most arresting of all is that a supercomputer generates many megawatts of heat, whereas the brain does all that work with the same thermal output as a single low-wattage light bulb! (Impressive though that may be, brains still operate many orders of magnitude above the Landauer limit – see here.)
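
The energy claim is easy to check with rough numbers. A back-of-the-envelope sketch (assuming the text's figure of ~10¹⁵ operations per second and the conventional ~20-watt estimate for the brain's power consumption) compares the brain's energy cost per operation with the Landauer limit at body temperature:

```python
import math

# Rounded figures: ops/sec from the text; the 20 W power figure is the
# usual 'light bulb' estimate and is an assumption of this sketch.
k_B = 1.380649e-23        # Boltzmann constant, J/K
T_body = 310.0            # body temperature, kelvin
ops_per_sec = 1e15        # ~10^15 logical operations per second
brain_power = 20.0        # watts

landauer = k_B * T_body * math.log(2)        # minimum J to erase one bit
energy_per_op = brain_power / ops_per_sec    # brain's actual J per operation

print(f"Landauer limit: {landauer:.2e} J per bit")
print(f"Brain, per op:  {energy_per_op:.2e} J")
print(f"Ratio:          {energy_per_op / landauer:.0e}")
# Roughly seven orders of magnitude above the Landauer limit --
# astonishingly frugal next to a supercomputer, yet far from the
# thermodynamic floor.
```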

The brain is often compared to an electrical circuit, and it’s correct that the flow of electricity underlies its operation. But whereas the electrical signals in a computer (or the power grid) consist of electrons flowing down wires, the analogue of the wires in the brain – the axons – operate very differently. All along the axon are tiny holes in its outer membrane that can be opened and closed to let through one particle at a time, very much like Maxwell’s original conception of a demon operating a shutter.10 In this case, specialized proteins select different ions – charged atoms – rather than molecules. The holes are in fact narrow tubes, called ‘voltage gated ion channels’; they can open and close a gate to let the right ions through and shut out the wrong ones. The way in which this set-up creates an electrical signal propagating down the axon is as follows. When the neuron is inert, the axon has a negative charge inside and a positive charge outside, creating a small voltage, or polarity, across the membrane. The membrane itself is an insulator. In response to the arrival of a signal from the body of the neuron, the gates open and allow sodium ions to flow from the outside to the inside, thereby reversing the voltage. Next, a different set of ion channels open to allow potassium ions to flow the other way – from the inside to the outside – restoring the original voltage. The polarity reversal typically lasts for only a few thousandths of a second. This transient disturbance triggers the same process in an adjacent section of the axon’s membrane, and that in turn sets off the next section, and so on. The signal thus ripples down the axon towards another neuron. So although neurons signal each other electrically, it takes place via a travelling wave of polarity and not via a flow of electrical current as such.
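
The relay mechanism just described (sodium gates open and reverse the polarity, the disturbance triggers the neighbouring patch, potassium gates restore the voltage) can be caricatured as a one-dimensional update rule. The toy model below is my own illustration, not physiology: each membrane segment is resting, depolarized or refractory, and a depolarized segment triggers its resting neighbour at the next step, so the disturbance travels even though nothing flows along the axon:

```python
REST, DEPOL, REFRACT = ".", "^", "-"   # states of one membrane segment

def step(axon):
    """Advance the toy axon one time step.

    A depolarized segment (sodium in) triggers its resting right-hand
    neighbour, then repolarizes (potassium out) into a brief refractory
    state, which stops the wave from running backwards.
    """
    new = list(axon)
    for i, s in enumerate(axon):
        if s == DEPOL:
            new[i] = REFRACT                 # potassium gates restore voltage
            if i + 1 < len(axon) and axon[i + 1] == REST:
                new[i + 1] = DEPOL           # sodium gates open next door
        elif s == REFRACT:
            new[i] = REST                    # segment recovers
    return "".join(new)

axon = DEPOL + REST * 9                      # spike initiated at the left end
while DEPOL in axon:
    print(axon)                              # the '^' marches rightwards
    axon = step(axon)
```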

To achieve this feat, the proteins need an astounding level of discrimination. In particular, they need to tell the difference between sodium ions and potassium ions (potassium ions are very slightly bigger) so as to let only the correct one through in the respective direction. The proteins stick through the membrane and provide a passage from the inside to the outside of the axon via an interior channel. The channel has a narrow bottleneck that allows one ion at a time to traverse it. Electric fields crafted by electrically polarized proteins maximize the efficiency; very little work is needed to push the ions through, and typical currents are millions of ions per second when the channel is open. The sorting precision is very high: less than one in 1,000 ions of the wrong species gets through. To decide when to open and close their gates, the protein clusters have sensors that can detect changes in the membrane potential nearby as the polarity wave approaches.

The upshot of all this demonic activity is that pulses, or spikes, of electric current travel down axons in groups, or trains, until they reach another neuron (sometimes another axon), where they can cause either excitation or inhibition of its activity. Neurons are not just passive relays that hand the signal on to the next neuron in line. They possess an internal structure that plays a critical role in processing the signal. Specifically, the axons are separated from the dendrites to which they attach by junctions called synapses, across which the signal may jump if the circumstances are right. Within each synapse is a gap about 20 nanometres wide, known as the ‘synaptic cleft’, which is mostly bridged not by an electric current as such but by a large variety of molecules called neurotransmitters. Some, like serotonin and dopamine, are familiar; others less so. These molecules are released from tiny vesicles (like mini-cells enclosed by a membrane) and diffuse across the cleft, where they bind to receptors on the far side. As a result of this binding, electrical changes are initiated in the body of the target neuron. For example, in its resting state the interior of the neuron sits at about 70 millivolts below the potential outside, a difference maintained by pumping ions through the cell membrane. The binding of neurotransmitters can cause the membrane to let through ions (for example, sodium, potassium, chloride) that alter this voltage. If the voltage rises past a certain threshold (that is, the inside of the cell becomes less negative), the neuron will fire, sending a pulse down its axon to other neurons, and so on. Other neurotransmitters drive the voltage the opposite way, making the interior of the cell more negative, which inhibits firing. Because converging incoming signals from many neurons can be amalgamated, the system acts rather like a logic circuit, with the neuron being either on (firing) or off (quiescent), according to the combined state of the incoming signals.
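For readers who think in code, the threshold behaviour just described can be caricatured in a few lines. This is a minimal sketch in the spirit of McCulloch and Pitts, not a physiological model: the resting potential and threshold are illustrative round numbers, and the function names are my own.

```python
# A minimal sketch of the thresholded summation described above.
# Voltages are illustrative, not physiologically calibrated.

RESTING_MV = -70.0    # resting potential: interior ~70 mV below outside
THRESHOLD_MV = -55.0  # firing threshold (illustrative round number)

def neuron_fires(resting_mv, threshold_mv, synaptic_inputs_mv):
    """Sum the voltage shifts from incoming synapses onto the resting
    potential; fire if the membrane depolarizes past threshold.

    Excitatory inputs are positive shifts (interior less negative),
    inhibitory inputs are negative shifts (interior more negative).
    """
    membrane_mv = resting_mv + sum(synaptic_inputs_mv)
    return membrane_mv >= threshold_mv

# Three excitatory inputs together cross the threshold...
print(neuron_fires(RESTING_MV, THRESHOLD_MV, [6.0, 5.0, 5.0]))        # True
# ...but a single inhibitory input can veto the firing.
print(neuron_fires(RESTING_MV, THRESHOLD_MV, [6.0, 5.0, 5.0, -8.0]))  # False
```

The excitatory inputs push the membrane past threshold only in combination, while one inhibitory input can veto the result, which is exactly the logic-gate flavour of the amalgamation described above.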

How about the wiring architecture itself? Many of the details remain unknown, but the neural circuitry isn’t static; it changes according to the individual’s experiences. New memories, for example, are embedded by actively reorganizing the wiring. Thus, a baby is not born with a fixed ‘circuit diagram’ hard-wired in place but with a dense thicket of interconnections that can be pruned as well as rearranged as part of the growing and learning process.

HOW TO BUILD A MIND METER

If consciousness is an emergent, collective product of an organized whole, how can it be viewed in terms of information? It makes perfect sense to say that each neuron processes a few bits of information and a bundle of many neurons processes much more, but treating information arithmetically – just a head-count of bits bundled together – is merely another form of panpsychism. It fails to address the all-important property that information from across an extended region of the brain becomes integrated into a whole. An attempt to define a type of ‘integrated information’ as a measure of consciousness has been made by Giulio Tononi and his co-workers at the University of Wisconsin in Madison. The central idea is to capture in precise mathematical terms the intuitive notion that, when it comes to the brain, the whole is greater than the sum of its parts.

The concept of integrated information is clearest when applied to networks. Imagine a black box with input and output terminals. Inside are some electronics, such as a network with logic elements (AND, OR, and so on) wired together. Viewed from the outside, it will usually not be possible to deduce the circuit layout simply by examining the cause–effect relationship between inputs and outputs, because functionally equivalent black boxes can be built from very different circuits. But if the box is opened, it’s a different story. Suppose you use a pair of cutters to sever some wires in the network. Now rerun the system with all manner of inputs. If a few snips dramatically alter the outputs, the circuit can be described as highly integrated, whereas in a circuit with low integration some snips may make no difference at all. To take a trivial example, suppose the box contains two separate self-contained circuits, each with its own input and output terminals. There could be wires cross-linking the two circuits that are totally redundant because no signals are ever directed down them. These wires can be severed with impunity.fn6
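The wire-cutting test can be mimicked in a few lines of code. The toy circuit below is my own invented example, not one from Tononi’s papers; a ‘snip’ is modelled by replacing the signal a wire carries with a constant zero, after which we rerun the system over every possible input and compare behaviours.

```python
from itertools import product

# A two-input, two-output toy circuit with one useful wire (input a
# feeding the right-hand sub-circuit) and one redundant cross-link
# whose signal is ANDed with a constant 0, so it never carries
# information anywhere.

def circuit(a, b, snip_cross=False, snip_ab=False):
    a_to_right = 0 if snip_ab else a     # wire the circuit really uses
    cross = 0 if snip_cross else b       # redundant cross-link
    dead_end = cross and 0               # always 0, whatever cross is
    left = a and b                       # left sub-circuit: AND gate
    right = (a_to_right or b) or dead_end  # right sub-circuit: OR gate
    return (left, right)

def behaviour(**snips):
    """Input-output table over all four possible inputs."""
    return [circuit(a, b, **snips) for a, b in product((0, 1), repeat=2)]

# Severing the redundant cross-link changes nothing:
print(behaviour(snip_cross=True) == behaviour())   # True
# Severing a wire the circuit genuinely uses alters the outputs:
print(behaviour(snip_ab=True) == behaviour())      # False
```

The redundant link can be severed with impunity, while cutting a wire the circuit actually depends on changes the input–output behaviour: that is the operational difference between low and high integration.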

Tononi and his colleagues specify a way to calculate the irreducible interconnectedness of a general circuit by examining all possible decompositions of the circuit into fragments and working out how much information would be lost as a result. Highly integrated circuits lose a lot of information from the surgery. The precise degree of integration calculated this way is denoted by the Greek letter Φ. According to Tononi, systems with a big value of Φ, like the brain, are (in some sense) ‘more conscious’ than systems with small Φ, such as a thermostat. I should say that the precise definition of Φ is very technical; I won’t get into it here.11 Generally speaking, if the elements in the box constrain each other’s activity a great deal, then Φ is large; this will be the case if there are a lot of feedback loops and substantial ‘cross-talk’ – information transfer via cross-links. But if the system involves an orderly one-way flow of information from input to output (a feed-forward system), then Φ = 0: what from outside the black box may appear as a unitary system is in fact just a conjunction of independent processes. Biology favours integrated systems – the brain being the supreme example – because they are more economical in terms of elements and connections, and more flexible than functionally equivalent systems with a purely feed-forward architecture. Larissa Albantakis, a member of Tononi’s group, points out that the appearance of autonomy in a living organism (or a robot) goes hand in hand with high Φ: ‘Being a causally autonomous entity from the intrinsic perspective requires an integrated cause–effect structure; merely “processing” information does not suffice.’12 And there are surprises in store. The researchers find that, using their definition of Φ as a measure of consciousness, ‘some simple systems can be minimally conscious, some complicated systems can be unconscious, and two different systems can be functionally equivalent, yet one is conscious and the other one is not’.13
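To give the flavour of the calculation in miniature, here is a deliberately crude caricature, emphatically not Tononi’s actual Φ, whose definition is far more involved. For a two-node system we sever the cross-connections, replacing each node’s view of the other with a constant, and count how many global states then have their successors mispredicted.

```python
from itertools import product

# A crude, Φ-flavoured integration measure for a two-node binary
# system (my own illustration, not Tononi's definition). We "cut"
# the system in two by feeding each half a constant 0 in place of
# the signal from the other half, then count how many of the four
# global states change their successor state as a result.

def phi_like(update):
    """update(x, y) -> (x_next, y_next); mismatch count after the cut."""
    def cut_update(x, y):
        x_next, _ = update(x, 0)   # x's view of y severed
        _, y_next = update(0, y)   # y's view of x severed
        return (x_next, y_next)
    return sum(update(x, y) != cut_update(x, y)
               for x, y in product((0, 1), repeat=2))

swap = lambda x, y: (y, x)          # each node copies the other: feedback
independent = lambda x, y: (x, y)   # no cross-talk: reducible

print(phi_like(swap))         # 3: the cut mispredicts 3 of 4 states
print(phi_like(independent))  # 0: the cut changes nothing
```

The feedback system loses predictive information under every cut, while the reducible system loses none, which is the sense in which Φ = 0 for what is really just a conjunction of independent processes.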

In case the reader is lost in the technicalities here, let me offer an analogy. Imagine a twenty-member committee charged with the confidential task of awarding the annual Smith Prize for Scientific Excellence. The input data to the committee is the list of nominees and supporting documents; the output is the name of the winner. To the public, the committee seems like a ‘black box’: nominations go in, a recommendation comes out (‘the committee has decided’). But now look at it from the internal perspective. If the members are independent and vote without consultation, the committee is not integrated: it has Φ = 0. But suppose there are factions – one group favours positive discrimination, another thinks the prize has gone to too many chemists, and so on. Because of their group affiliations, these members constrain each other’s decisions; there is a measure of integration represented by the ‘cross-links’ within each faction. If, further, there is extensive discussion within the committee (lots of feedback and cross-talk) following which a unanimous decision is made, then Φ is maximized. In the case that one member of the committee is a designated stenographer who records the proceedings but is not involved in the discussions, the committee has a lower value of Φ because it is not fully integrated.

There are inevitably many unanswered questions arising from identifying integrated information with consciousness, not least the extent to which actual neural function resembles the activities of logic circuits. Although computer comparisons are commonplace, most neuroscientists do not regard the brain as a souped-up digital computer. To be sure, the brain processes information, but using very different principles from the PC on which I am typing this. It is not even clear that digital is the way to go. Many neural functions may operate more like an analog computer. Nevertheless, integrated information is a laudable attempt to get to grips with consciousness in a quantitative way and to provide a theoretical underpinning based on causality and information flow.

FREE WILL AND AGENCY

‘And so I say,

The atoms must a little swerve at times.’

– Titus Lucretius Carus14

A familiar property of human consciousness is a sense of freedom – a feeling we have that the future is somehow open, enabling humans to determine their own destiny, bending the arc of history according to their wills. Freedom means you may stop reading this chapter if you want (I hope you don’t). In short, humans behave as agents.

A century ago free will seemed to be on a collision course with science. Brains are made of atoms, and atoms gotta do what atoms gotta do, that is, obey the laws of physics. For minds to influence the future by changing the activity in our brains (and thereby dictating our actions), they would have to exert physical forces in such a way that, crudely speaking, a brain atom happily moving to the left suddenly swerves to the right. This conundrum has been known since antiquity and was dubbed ‘the atomic swerve’ by Lucretius. A fully deterministic, mechanistic universe has no room for free will; the future is completely determined by the state of the universe today, right down to brains, neurons and the brain’s atoms themselves. If the world is a closed mechanical system, then invoking a physical role for mind looks like a lost cause because it would imply over-determinism.

That was the situation in the late nineteenth century. Then along came quantum mechanics, with its inherent uncertainty. An atom that is moving to the left can indeed swerve to the right, all on its own, in the right circumstances, due to quantum weirdness. In the 1930s it seemed as if quantum indeterminism might rescue human free will. However, it’s not that simple. To get free will we don’t really want indeterminism: we want our wills to determine our actions. So a more subtle idea was floated. Maybe consciousness can indirectly affect atoms by ‘loading the quantum dice’, so that, although atoms may have an inherent propensity to behave capriciously, a type of bias or nudge here and there might creep in. This would give the mind a portal into the physical world, allowing it to inveigle its way by stealth into the quantum interstices of the causal chain. Unfortunately, even the inveigling would still amount to a violation of the laws of quantum mechanics in a statistical sense. Quantum physics may accommodate uncertainty, but it doesn’t imply anarchy. Quantum mechanics involves very precise probabilistic rules, amounting to the equivalent of ‘fair dice’. Mind-loaded dice would violate the quantum rules.

So what else is on offer? Scientists and philosophers have long wrestled with the problem of trying to reconcile the existence of agency with the underlying behaviour of the atoms and molecules that make up an agent. The agent doesn’t have to be anything as complicated as a human being with considered motives. It may be a bacterium homing in on food. There is still a disconnect between the purposive behaviour of the agent and the blind, purposeless activities of the agent’s components. How does purpose, or, if that word scares you, goal-oriented behaviour, emerge from atoms and molecules that care nothing about goals?

Information theory may have the answer. The first thing to notice is that agents are not closed systems. The very phenomenon of agency involves responding to changes in the system’s environment. Living organisms are of course coupled to their environment in many ways, as I have been at pains to point out in earlier chapters. But even non-living agents, such as robots, are programmed to gather information from their surroundings, process it and effect an appropriate physical response. A truly closed system could not act (in a unitary manner) as an agent. So this provides a loophole in the problem of over-determinism. There is room for parallel narratives, one at the atomic level and another at the agent level, without contradiction, so long as the system is open.

Consider how the human brain is compartmentalized into many regions (left and right hemispheres, the thalamus, the cortex, the amygdala, and so on). Within this overall structure not all neurons are the same. Instead they are organized into various modules and clusters according to their different functions. A loosely defined unit is a ‘cortical column’, a module consisting of several thousand neurons with similar properties which can be treated as a single population. For example, neuroscientists treat cortical columns as individual units when considering stimulus–response relationships. There is a well-mapped region of the brain corresponding to the sense of touch on the surface of the skin. Neurons hooked up to the thumb lie close to those for the index finger, for example. If someone pricks your thumb, a module of neurons ‘lights up’ in that specific region of your brain and may initiate a motor response (you might say, ‘Ouch!’). A neuroscientist can give an account of this scenario in terms of cause and effect, involving the ‘thumb module’ as a kind of simplified unitary agent.

Everyone agrees that, as a practical matter, it is sensible to refer to higher-level modules in explanations of brain activity rather than resort to an inconceivably complicated description of every neuron. However, Tononi’s integrated information theory shows that not only is a higher-level description simpler, but higher-level systems can actually process more information than their components. This counter-intuitive claim has been investigated by Erik Hoel, a former member of Tononi’s research group now working at Columbia University. Hoel carried out a quite general mathematical analysis to investigate the effects of aggregating microscopic variables in some way (such as by black boxing – see here), using something called ‘effective information theory’.15 He set out to find how agents, with their associated intentions and goal-oriented behaviour, can emerge from the underlying microscopic physics, which lacks those properties. His conclusion is that there can be causal relationships that exist solely at the level of agents. Counter to most reductionist thinking, the macroscopic states of a physical system (such as the psychological state of an agent) that ignore the small-scale internal specifics can actually have greater causal power than a more detailed, fine-grained description of the system, a result summed up by the dictum: ‘macro can beat micro’.
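Hoel’s measure has a simple core that can be computed for toy systems: the ‘effective information’ of a system is the mutual information between a uniform distribution of interventions on its states and the resulting distribution of effects. The four-state Markov chain below is my own illustration in the spirit of Hoel’s examples, not one taken from his paper.

```python
from math import log2

# Effective information (EI) of a transition probability matrix,
# with interventions distributed uniformly over the states:
# EI = H(effects) - average row entropy.

def effective_information(tpm):
    n = len(tpm)
    # Effect distribution under uniform interventions on the states.
    effect = [sum(row[j] for row in tpm) / n for j in range(n)]
    h_effect = -sum(p * log2(p) for p in effect if p > 0)
    h_rows = sum(-sum(p * log2(p) for p in row if p > 0)
                 for row in tpm) / n
    return h_effect - h_rows

# Micro level: states 0-2 hop among themselves at random (noise);
# state 3 maps deterministically to itself.
micro = [
    [1/3, 1/3, 1/3, 0],
    [1/3, 1/3, 1/3, 0],
    [1/3, 1/3, 1/3, 0],
    [0,   0,   0,   1],
]
# Macro level: lump {0, 1, 2} into state A and {3} into state B.
# The noisy hopping washes out; the dynamics become deterministic.
macro = [
    [1, 0],   # A -> A
    [0, 1],   # B -> B
]

print(round(effective_information(micro), 3))  # 0.811 bits
print(effective_information(macro))            # 1.0 bit
```

Coarse-graining the three noisy micro-states into one macro-state raises the effective information from about 0.81 bits to a full bit: in this precise sense, macro beats micro.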

In spite of these careful analyses, a hard-nosed reductionist may point out that in principle a complete description of the stimulus–response story will nevertheless be present at the atomic level of the system. But there is an obvious flaw in this tired old argument, because it fails to take into account the openness of ‘the system’. Let me explain. Response times (to pricked thumbs, say) are typically of the order of one-tenth of a second. Now consider that the stimulus–response system may consist of thousands of neurons networked by millions of axons, with neurons firing at fifty times a second. Recall the discussion about the demonic regulation of sodium and potassium ions that enter and leave the axon to drive the propagation of the signal. A neuron firing at fifty times a second will send a signal down an axon that entails the exchange of millions of ions. So, during the tenth of a second that the thumb drama plays out, ‘the system’ will exchange trillions of atomic particles with the extra-neuronal environment. The exiting particles quit the organized causal chain of the system to be lost amid the random thermal noise of the milieu and replaced by others that swarm in. It thus makes no sense to try to locate the bottom level of information about the thumb-pricking episode at the atomic scale. Even in principle, the cause–effect chain we are trying to explain simply does not exist at that level.
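The arithmetic behind that estimate is easy to check. Every number below is an illustrative assumption based on the round figures in the text, not a measurement.

```python
# Back-of-envelope check of the ion-traffic estimate above.
axons_in_circuit = 2_000_000   # 'millions of axons' (assumed)
firing_rate_hz = 50            # spikes per second per neuron
episode_s = 0.1                # duration of the thumb-prick response
ions_per_spike = 1_000_000     # 'millions of ions' per spike (assumed)

ions_exchanged = axons_in_circuit * firing_rate_hz * episode_s * ions_per_spike
print(f"{ions_exchanged:.0e} ions")  # 1e+13 ions
```

Even with these modest choices the total comes out in the trillions, bearing out the claim that on this timescale ‘the system’ sheds and absorbs atomic particles far too promiscuously for an atomic-level account of the episode to exist.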

Well, counters the die-hard reductionist: what if one takes into account the environment of the system too, and the environment of that system, and so on, until our purview encompasses the entire cosmos? In principle (the argument goes), everything that happens, including the activity of brain modules, could then be accounted for at the atomic or subatomic level. Thus (says the reductionist), invoking the openness of an agent in order to rescue free will is to appeal to a pseudo-loophole. In my opinion, however, the reductionist’s argument (which is often made by distinguished scientists) is absurd. There is no evidence that the universe is a closed deterministic system; it could be infinite. And even if it isn’t, it’s an indeterministic quantum system anyway.

QUANTUM BRAINS

Although quantum indeterminism can’t explain deterministic wills, the perceived link between quantum mechanics and the mind is nevertheless deep and enduring. The nexus between the shadowy quantum domain and the world of concrete daily experience is an arena where one might expect mind and matter to meet. In quantum physics this is referred to as the ‘measurement problem’. Here is why it is a problem. I explained in Chapter 5 how, at the atomic level, things get weird and fuzzy. When a quantum measurement is made, however, the results are sharp and well defined. For example, if the position of a particle is measured, a definite result is obtained. So what was previously fuzzy is suddenly focused, uncertainty is replaced by certainty, many contending realities are replaced by a single specific world. The difficulty now arises that the measuring system, which may consist of a piece of apparatus, a laboratory, a physicist, some students, and so on, is itself made of atoms subject to quantum rules. And there is nothing in the rules of quantum mechanics as formulated by Schrödinger and others to project out a particular, single, concrete reality from the legion of ghostly overlapping pseudo-realities characteristic of the quantum micro-world. So vexatious is this problem that a handful of physicists, including John von Neumann of universal constructor fame, suggested that the ‘concretizing factor’ (often called ‘the collapse of the wave function’) might be the mind of the experimenter. In other words, when the result of the measurement enters the consciousness of the measurer – wham! – the nebulous quantum world out there abruptly gels into commonsense reality. And if mind can do that, surely it does have a kind of leverage over matter, albeit in a subtle manner? It has to be admitted that today there are only a few adherents of this mentalist interpretation of quantum measurement, although there is still no consensus on a better explanation of just what happens when a quantum measurement takes place.

A new twist in the relationship between quantum fuzziness and human consciousness was introduced about thirty years ago by the Oxford mathematician Roger Penrose.16 If consciousness somehow influences the quantum world, then, by symmetry, one might expect quantum effects to play a role in generating consciousness, and it’s hard to see how that could happen unless there are quantum effects in the brain. In Chapter 5 I described how the field of quantum biology might explain photosynthesis and bird navigation, so a priori it seems not unreasonable that the behaviour of neurons might be influenced by quantum processes too. And that’s what Penrose suggests is the case. More precisely, he claims that some microtubules threading through the interior of neurons might process information quantum mechanically, thus greatly boosting the processing power of the neural system and, somehow, generating consciousness on the way.17 In arriving at this conclusion, Penrose and his colleague, the anaesthesiologist Stuart Hameroff, took into account the effects of anaesthesia, which occurs when a variety of molecules seeping into the neuronal synapses eliminate consciousness while leaving much of the routine functions of the brain unaffected – a process still not fully understood.

It has to be said that the Penrose–Hameroff theory has attracted a great deal of scepticism. Objections hinge on the problem of decoherence, which I explained in Box 11. Simple considerations imply that, in the warm and noisy environment of the brain, quantum effects would decohere very much faster than the speed of thought. Nevertheless, precise conclusions are hard to come by, and quantum mechanics has sprung surprises before.

I earlier described how Giulio Tononi and his colleagues have defined a quantity called integrated information, denoted Φ, which they offer as a mathematical measure of the degree of consciousness. Their ideas provide another way to link quantum mechanics to consciousness. Recall that integrated information quantifies the extent to which the whole may be greater than the sum of its parts when a system is complex. It thus depends on the state of the system as a whole – not just on its size or complexity but on the organization of its components and their relationship to the totality. A simple quantum system like an atom has a very low Φ but, if the atom is coupled to a measuring device, then the Φ of the whole system might be large, depending on the nature of the device. It would certainly be very large if a conscious human being were included in the system as part of the ‘device’, but the human element is not necessary. What if the way the quantum system changes in time depends on the value of Φ? Then, left alone, the atom would simply obey the normal rules of quantum physics applied to atoms that were presented by Schrödinger in the 1920s. But for a sufficiently complex system with significant integrated information (for example, a human observer), Φ would become important, eventually bringing about the wave function’s collapse – that is, projection into a single concrete reality. What I am proposing is another example of top-down causation,18 where the system as a whole (in this case precisely defined in terms of integrated information) exercises causal purchase over a lower-level component (the atom). In my example it is top-down causation defined in terms of information and so provides a clear example of an informational law entering into fundamental physics.19

Whatever the merits of these speculative ideas, I think it fair to say that if consciousness is ever to be fitted into the framework of physical theory, then it needs to be incorporated in some fashion into quantum mechanics, because quantum mechanics is our most powerful description of nature. Either consciousness violates quantum mechanics or it is explained by it.

Consciousness is the number-one problem of science, of existence even. Most scientists just steer clear of it, thinking it too much of a quagmire. Those scientists and philosophers who have dived in have usually become stuck. Information theory offers one way forward. The brain is an information-processing organ of stupendous complexity and intricate organization. Looking back at the history of life, each major transition has involved a reorganization of the informational architecture of organisms; the brain is the most recent step, creating information patterns that think.

Not everyone agrees, however, that cracking the information architecture problem will ‘explain’ consciousness, even if one buys into the thesis that conscious experiences are all about information patterns in the brain. David Chalmers, an Australian philosopher at New York University, divides the topic into ‘the easy problem’ and ‘the hard problem’.20 The easy part – very far from easy in practice – is to map the neural correlates of this or that experience, that is, determine which bit of the brain ‘lights up’ when the subject sees this or hears that. It’s a doable programme. But knowing all the correlates still wouldn’t tell us ‘what it is like’ to have this or that experience. I’m referring to the inner subjective aspect – the redness of red, for example – what philosophers call ‘qualia’. Some people think the hard problem of qualia can never be settled, partly for the same reason that I can’t be sure that you exist just because you behave more or less like I do. If so, the question ‘What is mind?’ will lie forever beyond our ken.