‘One can best feel in dealing with living things how primitive physics still is.’
– Albert Einstein1
When Schrödinger delivered his Dublin lectures in 1943 he threw down a challenge that still resonates today. Can life be explained in terms of physics or will it always be a mystery? And if physics can explain life, is existing physics up to the job, or might it require something fundamentally new – new concepts, new laws even?
In the past few years it has become increasingly clear that information forms a powerful bridge between physics and biology. Only very recently has the interplay of information, energy and entropy been clarified, a century and a half after Maxwell introduced his notorious demon. Advances in nanotechnology have enabled incredibly delicate experiments to be performed to test foundational issues at the intersection of physics, chemistry, biology and computing. Though these developments have provided useful clues, so far the application of the physics of information to living systems has been piecemeal and ad hoc. Still lacking is a comprehensive set of principles that will explain all the puzzles in the magic box of life within a unitary theory.
While it is the case that biological information is instantiated in matter, it is not inherent in matter. Bits of information chart their own course inside living things. In so doing, they don’t violate the laws of physics, but nor are they encapsulated by those laws: it is impossible to derive the laws of information from the known laws of physics. To properly incorporate living matter into physics requires new physics. Given that the conceptual gulf between physics and biology is so deep, and that existing laws of physics already provide a perfectly satisfactory explanation of the individual atoms and molecules that make up living organisms, it is clear that a full explanation of living matter entails something altogether more profound: nothing less than a revision of the nature of physical law itself.
Physicists have traditionally clung to a very restrictive notion of laws, dating from the time of Newton. Physics as we know it developed in seventeenth-century Europe, which was in thrall to Catholic Church doctrine. Although Galileo, Newton and their contemporaries were influenced by Greek thought, their notion of physical laws owed much to monotheism, according to which an omnipotent deity ordered the universe in a rational and intelligible manner. Early scientists regarded the laws of physics as thoughts in the mind of God. Classical Christian theology held that God is a perfect, eternal, unchanging being, transcending space and time. God made a physical world that changes with time, but God remains immutable. Creator and creature are thus not in a symmetrical relationship: the world depends utterly on God for its continued existence, but God does not depend on the world. Since it was held that the laws of the universe reflect the divine nature, it followed that the laws must also be unchanging. In 1630 Descartes expressed this very point explicitly:
It is God who has established the laws of nature, as a King establishes laws in his kingdom … You will be told that if God has established these truths, he could also change them as a King changes his laws. To which it must be replied: yes, if his will can change. But I understand them as eternal and immutable. And I judge the same of God.2
For these essentially theological reasons, physics was founded three centuries ago with a corresponding asymmetry between fixed laws and a changing world. That idea has been around so long we scarcely notice what a huge assumption it is. But there is no logical requirement it must be so, no compelling argument why the laws themselves have to be fixed absolutely. Indeed, I have already discussed one well-known example from fundamental physics in which the laws do change according to circumstance: the act of measurement in quantum mechanics. Measuring or observing a quantum system brings about a dramatic change in its behaviour, often called ‘the collapse of the wave function’. To recap, it goes like this. Left alone, a quantum system (for example, an atom) evolvesfn1 according to a precise mathematical law provided by Schrödinger. But when the system is coupled to a measuring device and a measurement of a quantity is performed – for example, the energy of an atom – the state of the atom suddenly jumps (‘collapses’). Significantly, the former evolution is reversible, but the latter is irreversible. So there are two completely different types of law for quantum systems: one when they are left alone and another when they are probed. Note a clue here linking to information. By performing a measurement of a quantum system the experimenter gains information about it (for example, which energy level an atom is in), but the entropy of the measured system jumps: we know less about its prior state after the measurement than we did before because of the irreversible ‘collapse’.fn2 So something has been gained and something lost.
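For readers who like to see the two kinds of quantum law side by side, here is a minimal numerical sketch (mine, not the author's, and assuming only NumPy): a single qubit first evolves under a unitary rotation, which can be exactly undone, and is then measured, which cannot.

```python
import numpy as np

# A qubit state |psi> = a|0> + b|1>, normalized.
psi = np.array([1, 1j]) / np.sqrt(2)

# -- Reversible Schrodinger-style evolution: a unitary U (here a rotation).
theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
evolved = U @ psi
recovered = U.conj().T @ evolved      # applying U-dagger undoes U exactly
assert np.allclose(recovered, psi)    # reversible: nothing is lost

# -- Irreversible measurement: project onto |0> or |1> with Born-rule odds.
rng = np.random.default_rng(0)
probs = np.abs(evolved) ** 2          # probabilities of the two outcomes
outcome = rng.choice(2, p=probs)
collapsed = np.zeros(2, dtype=complex)
collapsed[outcome] = 1.0              # the state 'jumps' to a basis state

# The relative phase between the amplitudes is now gone: no operation on
# `collapsed` can recover what `evolved` was. The experimenter has gained
# one bit (the outcome) but erased the prior superposition.
print("outcome:", outcome)
```

The asymmetry is visible in the code itself: the unitary step has an inverse, the projection step does not.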
Turning to biology, it is obvious that the notion of immutable laws is not a good fit. Darwin himself stressed the difference long ago in the closing passage of On the Origin of Species: ‘… whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved’.3 Biological evolution, with its open-ended variety and novelty and its lack of predictability, stands in stark contrast to the way that non-living systems evolve. Yet biology is not chaos: there are many examples of ‘rules’ at work, but these rules mostly refer to the informational architecture of organisms. Take the genetic code: the triplet of nucleotides CGT, for example, codes for the amino acid arginine (see Table 1). Although there are no known exceptions to that rule, it would be wrong to think of it as a law of nature, like the fixed law of gravity. Almost certainly the CGT → arginine assignment emerged a long time ago, probably from some earlier and simpler rule. Biology is full of cases like this; some rules are widespread, like Mendel’s laws of genetics, others more restrictive. When we consider the great drama of evolutionary history, the game of life must be seen as a game of quasi-rules that change over time.
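The point that the genetic code is a frozen convention rather than a law can be made concrete by writing a fragment of it as a lookup table (the excerpt below is mine, for illustration; the assignments themselves are standard):

```python
# A tiny excerpt of the genetic code as a plain lookup table, using
# DNA sense-strand triplets as in the text. Nothing in the data
# structure forces these particular assignments -- swap any two values
# and the table is just as 'lawful', which is exactly why the
# CGT -> arginine rule reads as a frozen accident, not a law of nature.
CODON_TABLE = {
    'CGT': 'Arg', 'CGC': 'Arg', 'CGA': 'Arg', 'CGG': 'Arg',  # arginine
    'GCT': 'Ala', 'GCC': 'Ala', 'GCA': 'Ala', 'GCG': 'Ala',  # alanine
}
print(CODON_TABLE['CGT'])
```

A rule encoded as data, unlike the law of gravity, could have been otherwise, and could in principle change.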
More relevant is that the rules often depend on the state of the system concerned. To make this crucial point clear, let me give an analogy. Chess is a game with fixed rules. The rules don’t determine the outcome of the game, the players do. There are a vast number of possible games, but a close inspection of all games would reveal that the pieces move across the board in accordance with the same rules. Now imagine a different type of chess game – call it chess-plus – in which the rules can change as the game progresses. In particular – to pursue the analogy with living systems – the rules could change depending on the state of play. One example might be this: ‘if white is winning, then black is henceforth permitted to move the king up to two squares instead of one’. Here’s another: ‘if black has two more pawns than white, then white can move pawns backwards as well as forwards’. (These are silly suggestions, but some less drastic examples might pass muster as a popular game. Playing chess-plus, a novice might even beat a chess Grand Master.) The two examples I just gave involve ‘rules of rule-change’, or meta-rules, which are themselves fixed. But that’s just for ease of exposition. The meta-rules don’t have to be fixed: they could obey a meta-meta-rule or, to avoid an infinite regress, they could change randomly, perhaps decided by a coin toss. In the latter case chess-plus would become partly a game of skill and partly a game of chance. Either way, it is clear that chess-plus would be more complex and less predictable than conventional chess and would lead to states of play – that is, patterns of pieces on the board – that would be impossible to attain by following the conventional fixed rules of chess. We see here an echo of biology: life opens up regions of ‘possibility space’ that are inaccessible to non-living systems (see here).
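The flavour of chess-plus can be captured in a few lines of Python. The toy game below is my own invention, far simpler than chess: two players race tokens along a line, and (echoing the first examples above) a fixed meta-rule grants the trailing player an extra move option. The legal moves are a function of the state of play, which is precisely what fixed-rule games forbid.

```python
import random

def legal_moves(state, player):
    """Base rule: step 1 square. Meta-rule (itself fixed, as in the
    text's first examples): the player who is behind may also step 2."""
    moves = [1]
    opponent = 1 - player
    if state[player] < state[opponent]:   # 'if you are losing...'
        moves.append(2)                   # '...you gain an extra option'
    return moves

state = [0, 0]                            # positions of the two players
rng = random.Random(42)                   # seeded for repeatability
for turn in range(20):                    # 10 moves per player
    player = turn % 2
    state[player] += rng.choice(legal_moves(state, player))
print("final state:", state)
```

Replacing the fixed `legal_moves` with a function that is itself rewritten during play, by a meta-meta-rule or a coin toss, gives the further variants described above.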
Laws that change as a function of the state are a generalization of the concept of self-reference: what a system does depends on how a system is. Recall from Chapter 3 that the notion of self-reference, following the work of Turing and von Neumann, lies at the core of both universal computation and replication. Relaxing the stringent requirement that laws have to be fixed and taking into account self-reference demands a whole new branch of science and mathematics, still largely unexplored. The physicist Nigel Goldenfeld of the University of Illinois is one of a handful of theorists who recognizes the promise of this approach: ‘Self-reference should be an integral part of a proper understanding of evolution, but it is rarely considered explicitly,’ he writes.4 Goldenfeld contrasts biology with standard topics in physics like condensed matter theory, where ‘there is a clear separation between the rules that govern the time evolution of the system and the state of the system itself … the governing equation does not depend on the solution of the equation. In biology, however, the situation is different. The rules that govern the time evolution of the system are encoded in abstractions, the most obvious of which is the genome itself. As the system evolves in time, the genome itself can be altered, and so the governing rules are themselves changed. From a computer science perspective, one might say that the physical world can be thought of as being modelled by two distinct components: the program and the data. But in the biological world, the program is the data, and vice versa.’5
In Chapter 3 I described a simple attempt by my colleagues Alyssa Adams and Sara Walker to incorporate self-referential state-dependent rules in a cellular automaton (see here). Sure enough, their computer model displayed the key property of open-ended variety that we associate with life. However, it was just a cartoon. To make the analysis realistic it would be necessary to apply self-referential state-dependent rules to information patterns in real complex physical systems. This hasn’t been done – I’m throwing it out here as a challenge.6 The resulting rules will differ from conventional laws of physics by applying at the systems level as opposed to individual components, such as particles, an example of top-down causation.7 To be compatible with the laws of physics that we already know and love, any effects at the particle level would need to be small, or we would have noticed them already. But that is no obstacle. Because most molecular systems are inherently chaotic, inconspicuous, minute changes are able to accumulate and result in very profound effects. There is plenty of room at the bottom for novel physics to operate in a manner hitherto undetected and, indeed, that would be very hard to detect at the level of individual molecules anyway. But the cumulative impact on the information flow within an entire system, deriving from the combined effect of many tiny, disseminated influences, might come to dominate and yet appear inexplicable because the underlying causal mechanism has been overlooked.
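To give a feel for what 'the program is the data' looks like in a cellular automaton, here is a sketch loosely in the spirit of the Adams–Walker idea, though their actual update scheme differs and the rule-switching condition below is my own arbitrary choice: the rule table applied at each time step is selected as a function of the automaton's current global state.

```python
import numpy as np

def step(cells, rule):
    """One update of an elementary CA under the given Wolfram rule number."""
    left, right = np.roll(cells, 1), np.roll(cells, -1)
    idx = (left << 2) | (cells << 1) | right    # neighbourhood code 0..7
    table = (rule >> np.arange(8)) & 1          # rule number -> lookup table
    return table[idx]

cells = np.zeros(64, dtype=int)
cells[32] = 1                                   # a single 'on' cell
for _ in range(32):
    # State-dependent rule choice: the state being updated determines
    # which law updates it -- dense states use rule 90, sparse rule 110.
    rule = 90 if cells.sum() > len(cells) // 4 else 110
    cells = step(cells, rule)
print(''.join('#' if c else '.' for c in cells))
```

In a fixed-rule automaton the second argument of `step` never changes; here the trajectory and the governing rule are entangled, a toy version of a genome that rewrites the rules by which it is itself updated.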
The possibility that there may be new laws, or at least systematic regularities, hidden in the behaviour of complex systems, is by no means revolutionary. Several decades ago it was discovered that subtle mathematical patterns were buried in a wide range of chaotic systems (‘chaotic’ here means such systems are unpredictable even with a very precise knowledge of the forces and starting conditions, the weather being a classic example). Physicists began to talk about ‘universality in chaos’. What I am proposing here is universality in informational organization, in the expectation that common information patterns will be found in a large class of certain complex systems – patterns that capture, at least in part, something of the features of living organisms.
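The unpredictability referred to here is easy to demonstrate. The logistic map is the textbook example of a chaotic system (the parameter values below are the standard fully chaotic choice): two trajectories that begin one part in ten billion apart soon differ completely.

```python
# Sensitive dependence in the logistic map x -> r*x*(1-x) at r = 4,
# the fully chaotic regime. Two copies start 1e-10 apart.
r = 4.0
x, y = 0.3, 0.3 + 1e-10
max_gap = 0.0
for _ in range(100):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    max_gap = max(max_gap, abs(x - y))
print("largest gap reached:", max_gap)   # grows to order 1
```

The initial difference is amplified roughly exponentially at each step, so even 'a very precise knowledge of the forces and starting conditions' fails after a few dozen iterations. It was in maps like this one that the universal mathematical patterns of chaos were first found.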
So much for theory, which has barely scratched the surface of these new ideas. What are the prospects for experiment? Here we run up against the overwhelming complexity of biology. If the new informational state-dependent laws I am proposing operated only in living matter, it would be just another version of vitalism. The whole purpose of a theory that unifies physics and biology is to remove any barrier separating them, in which case the new informational laws might be expected to bleed from the living world into the non-living world. Several decades ago a claim to have discovered just such an effect was made by Sidney Fox, a biochemist based in Alabama who devoted his career to studying the origin of life. Fox published experimental evidence to suggest that when amino acids assemble into chains (called peptides), they show a preference for just those combinations that lead to biologically useful molecules, that is, proteins. ‘Amino acids determine their own order in condensation,’ he wrote.8 If true, the claim would be evidence that the laws of chemistry somehow favoured life, as if they knew about it in advance. Even more dramatic were the claims of Gary Steinman and Marian Cole of Pennsylvania State University, who also reported non-random peptide formation: ‘These results prompt the speculation that unique, biologically pertinent peptide sequences may have been produced prebiotically,’ they wrote.9
The suggestion that chemistry is cunningly rigged in favour of life was widely dismissed, and indeed was scarcely credible in the form presented by Fox and others, involving as it did preferential bonding between pairs of molecules – a process well understood within the framework of quantum mechanics. But if one took an informational approach to molecular organization, it might be a different story.10
If we had properly worked-out candidates for informational state-dependent laws, they might suggest that systems self-organize in ways to amplify their information-processing abilities or lead to ‘unreasonable’ accumulation of integrated information. The recent discovery that in some circumstances ‘macro beats micro’ in terms of causal power (see here) opens the possibility that the spontaneous organization of higher-order information-processing modules might be favoured as a general trend in complex systems. The pathway from non-life to life might be far shorter when viewed in terms of the organization of information rather than chemical complexity. If so, it would greatly boost the search for a second genesis of life.fn3
In this book I have charted a burgeoning new area of science. As I write, scarcely a day passes without the publication of another paper or the announcement of a new experimental result having a direct impact on the physics of information and its role in the story of life. This is a field in its infancy and many questions remain unanswered. If there are new physical laws at work – informational laws, perhaps involving state dependence and top-down causation – how do we mesh them with the known laws of physics? And would these new laws be deterministic in form or contain an element of chance, like quantum mechanics? Indeed, does quantum mechanics come into them? Does it in fact play an integral role in life? In addition to these imponderables lies the question of origins. How do life’s informational patterns come into existence in the first place? The appearance of anything new in the universe is always an amalgam of laws and initial conditions. We simply don’t know the conditions necessary for biological information to emerge initially, or, once life gets going, how strong a role natural selection plays versus the operation of informational laws or other organizational principles that may be at work in complex systems. All this has to be worked out.
There will be those who object to dignifying the informational principles I have been elucidating with the word ‘law’ in any deep sense. While most scientists are happy to treat information patterns as things in their own right for practical purposes, reductionists insist that this is merely a methodological convenience and that, in principle, all such ‘things’ can be reduced to fundamental particles and the laws of physics – and hence defined out of existence. They don’t ‘really exist’, we are warned, except in our own imaginings. While reductionists may concede that certain rules ‘emerge’ in complex systems, they assert that these rules do not enjoy the fundamental status of the laws of physics that underlie all systems. The reductionist argument is undeniably powerful, but it rests on a major assumption about the nature of physical law. The way the laws of physics are currently conceived leads to a stratification of physical systems with the laws of physics at the bottom conceptual level and emergent laws stacked above them. There is no coupling between levels. When it comes to living systems, this stratification is a poor fit because, in biology, there often is coupling between levels, between processes on many scales of size and complexity: causation can be both bottom-up (from genes to organisms) and top-down (from organisms to genes). To bring life within the scope of physical law – and to provide a sound basis for the reality of information as a fundamental entity in its own right – requires a radical reappraisal of the nature of physical law, as I am arguing.11
It would be wrong to think that these arcane deliberations are important only to a handful of scientists, philosophers and mathematicians. They have sweeping implications not just for explaining life but for the nature of human existence and our place in the universe. Before Darwin, it was widely believed that God created life. Today, most people accept it had a naturalistic origin. While it is true that scientists lack a full explanation for how life emerged from non-life, invoking a one-off miracle is to fall into the god-of-the-gaps trap. It would imply a type of cosmic magician who sporadically intervenes, moving molecules around from time to time but mostly leaving them to obey fixed laws. Yet within the broad scope of the term ‘naturalistic’ lie very different philosophical (even theological) implications. Two contrasting views of life’s origin are the statistical fluke hypothesis championed by Jacques Monod and the cosmic imperative of Christian de Duve. Monod appealed to the flukiness of life to bolster his nihilistic philosophy: ‘The ancient covenant is in pieces,’ he wrote gloomily. ‘[Man’s] destiny is nowhere spelled out, nor is his duty. The kingdom above or the darkness below: it is for him to choose … The universe was not pregnant with life, nor the biosphere with man.’12 In responding to Monod’s negative reflections, de Duve wrote, ‘You are wrong. They were,’13 and proceeded to develop his view of what he called ‘a meaningful universe’. Boiled down to basics, the issue is this. Is life built into the laws of physics? Do those laws magically embed the designs of organisms-to-be? There is no evidence whatever that the known laws of physics are rigged in favour of life; they are ‘life-blind’. But what about new state-dependent informational laws of the sort I am conjecturing here? 
My hunch is that they would not be so specific as to foreshadow biology as such, but they might favour a broader class of complex information-managing systems of which life as we know it would be a striking representative. It’s an uplifting thought that the laws of the universe might be intrinsically bio-friendly in this general manner.
These speculative notions are very far from a miracle-working deity who conjures life into being from dust. But if the emergence of life, and perhaps mind, are etched into the underlying lawfulness of nature, it would bestow upon our existence as living, thinking beings a type of cosmic-level meaning.
It would be a universe in which we can truly feel at home.