16 The imbalance of nature

Chaos Theory

x_{t+1} = kx_t(1 − x_t)

What does it say?

It models how a population of living creatures changes from one generation to the next, when there are limits to the available resources.

Why is that important?

It is one of the simplest equations that can generate deterministic chaos – apparently random behaviour with no random cause.

What did it lead to?

The realisation that simple nonlinear equations can create very complex dynamics, and that apparent randomness may conceal hidden order. Popularly known as chaos theory, this discovery has innumerable applications throughout the sciences, including the motion of the planets in the Solar System, weather forecasting, population dynamics in ecology, variable stars, earthquake modelling, and efficient trajectories for space probes.

 

The metaphor of the balance of nature trips readily off the tongue as a description of what the world would do if nasty humans didn’t keep interfering. Nature, left to its own devices, would settle down to a state of perfect harmony. Coral reefs would always harbour the same species of colourful fish in similar numbers, rabbits and foxes would learn to share the fields and woodlands so that the foxes would be well fed, most rabbits would survive, and neither population would explode or crash. The world would settle down to a fixed state and stay there. Until the next big meteorite, or a supervolcano, upset the balance.

It’s a common metaphor, perilously close to being a cliché. It’s also highly misleading. Nature’s balance is distinctly wobbly.

We’ve been here before. When Poincaré was working on King Oscar’s prize, the conventional wisdom held that a stable Solar System is one in which the planets follow much the same orbits forever, give or take a harmless bit of jiggling. Technically this is not a steady state, but one in which each planet repeats similar motions over and over again, subject to minor disturbances caused by all the others, but not deviating hugely from what it would have done without them. The dynamics is ‘quasiperiodic’ – combining several separate periodic motions whose periods are not all multiples of the same time interval. In the realm of planets, that’s as close to ‘steady’ as anyone can hope for.

But the dynamics wasn’t like that, as Poincaré belatedly, and to his cost, found out. It could, in the right circumstances, be chaotic. The equations had no explicit random terms, so that in principle the present state completely determined the future state, yet paradoxically the actual motion could appear to be random. In fact, if you asked coarse-grained questions like ‘which side of the Sun will it be on?’, the answer could be a genuinely random series of observations. Only if you could look infinitely closely would you be able to see that the motion really was completely determined.

This was the first intimation of what we now call ‘chaos’, which is short for ‘deterministic chaos’, and quite different from ‘random’ – even though that’s what it can look like. Chaotic dynamics has hidden patterns, but they’re subtle; they differ from what we might naturally think of measuring. Only by understanding the causes of chaos can we extract those patterns from an irregular mishmash of data.

As always in science, there were a few isolated precursors, generally viewed as minor curiosities unworthy of serious attention. Only in the 1960s did mathematicians, physicists, and engineers begin to realise just how natural chaos is in dynamics, and how radically it differs from anything envisaged in classical science. We are still learning to appreciate what that tells us, and what to do about it. But already chaotic dynamics, ‘chaos theory’ in popular parlance, pervades most areas of science. It may even have things to tell us about economics and the social sciences. It’s not the answer to everything: only critics ever claimed it was, and that was to make it easier to shoot it down. Chaos has survived all such attacks, and for a good reason: it is absolutely fundamental to all behaviour governed by differential equations, and those are the basic stuff of physical law.

There is chaos in biology, too. One of the first to appreciate that this might be the case was the Australian ecologist Robert May, now Lord May of Oxford and a former president of the Royal Society. He sought to understand how the populations of various species change over time in natural systems such as coral reefs and woodlands. In 1975 May wrote a short article for the journal Nature, pointing out that the equations typically used to model changes to animal and plant populations could produce chaos. May didn’t claim that the models he was discussing were accurate representations of what real populations did. His point was more general: chaos was natural in models of that kind, and this had to be borne in mind.

The most important consequence of chaos is that irregular behaviour need not have irregular causes. Previously, if ecologists noticed that some population of animals was fluctuating wildly, they would look for some external cause – also presumed to be fluctuating wildly, and generally labelled ‘random’. The weather, perhaps, or a sudden influx of predators from elsewhere. May’s examples showed that the internal workings of the animal populations could generate irregularity without outside help.

His main example was the equation that decorates the opening of this chapter. It is called the logistic equation, and it is a simple model of a population of animals in which the size of each generation is determined by the previous one. The model is discrete: the flow of time is counted in generations, and is thus an integer. So the model is similar to a differential equation, in which time is a continuous variable, but it is conceptually and computationally simpler. The population is measured as a fraction of some overall large value, and can therefore be represented by a real number that lies between 0 (extinction) and 1 (the theoretical maximum that the system can sustain). Letting time t tick in integer steps, corresponding to generations, this number is x_t in generation t. The logistic equation states that

x_{t+1} = kx_t(1 − x_t)

where k is a constant. We can interpret k as the growth rate of the population when diminishing resources do not slow it down.1

We start the model at time 0 with an initial population x_0. Then we use the equation with t = 0 to calculate x_1, then we set t = 1 and compute x_2, and so on. Without even doing the sums we can see straight away that, for any fixed growth rate k, the population size of generation zero completely determines the sizes of all succeeding generations. So the model is deterministic: knowledge of the present determines the future uniquely and exactly.
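As a concrete illustration, here is a minimal sketch of that iteration in Python; the growth rate k = 4 and the starting population x_0 = 0.2 are illustrative choices, not values fixed by the text.

```python
# A minimal sketch of iterating the logistic equation x_{t+1} = k*x_t*(1 - x_t).

def logistic_series(k, x0, generations):
    """Return the population fractions x_0, x_1, ..., up to the given generation."""
    xs = [x0]
    for _ in range(generations):
        xs.append(k * xs[-1] * (1 - xs[-1]))
    return xs

# Illustrative values: the chaotic regime discussed later in the text (k = 4).
for t, x in enumerate(logistic_series(k=4.0, x0=0.2, generations=10)):
    print(f"generation {t}: population fraction {x:.6f}")
```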

So what is the future? The ‘balance of nature’ metaphor suggests that the population should settle to a steady state. We can even calculate what that steady state should be: just set the population at time t + 1 to be the same as that at time t. This leads to two steady states: populations 0 and 1 − 1/k. A population of size 0 is extinct, so the other value should apply to an existing population. Unfortunately, although this is a steady state, it can be unstable. If it is, then in practice you’ll never see it: it’s like trying to balance a pencil vertically on its sharpened point. The slightest disturbance will cause it to topple. The calculations show that the steady state is unstable when k is bigger than 3.
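For readers who want the calculation spelled out, here it is in standard notation. The steady states come from setting x_{t+1} = x_t, as described above; the stability test using the derivative of the map is a standard argument that the text leaves implicit.

```latex
% Steady states: set x_{t+1} = x_t = x* in the logistic equation.
\[
  x^* = k x^* (1 - x^*)
  \quad\Longrightarrow\quad
  x^* = 0 \quad\text{or}\quad x^* = 1 - \frac{1}{k}.
\]
% A fixed point x* of f(x) = kx(1 - x) is stable when |f'(x*)| < 1.
% Since f'(x) = k(1 - 2x), the nonzero steady state gives
\[
  f'\!\Bigl(1 - \frac{1}{k}\Bigr)
  = k\Bigl(1 - 2\Bigl(1 - \frac{1}{k}\Bigr)\Bigr)
  = 2 - k,
\]
% and |2 - k| < 1 exactly when 1 < k < 3: the steady state is stable in
% that range and unstable once k exceeds 3, as the text states.
```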

What, then, do we see in practice? Figure 58 shows a typical ‘time series’ for the population when k = 4. It’s not steady: it’s all over the place. However, if you look closely there are hints that the dynamics is not completely random. Whenever the population gets really big, it immediately crashes to a very low value, and then grows in a regular manner (roughly exponentially) for the next two or three generations: see the short arrows in Figure 58. And something interesting happens whenever the population gets close to 0.75 or thereabouts: it oscillates alternately above and below that value, and the oscillations grow, giving a characteristic zigzag shape that gets wider towards the right: see the longer arrows in the figure.

image

Fig 58 Chaotic oscillations in a model animal population. Short arrows show crashes followed by short-term exponential growth. Longer arrows show unstable oscillations.

Despite these patterns, there is a sense in which the behaviour is truly random – but only when you throw away some of the detail. Suppose we assign the symbol H (heads) whenever the population is bigger than 0.5, and T (tails) when it’s less than 0.5. This particular set of data begins with the sequence THTHTHHTHHTTHH and continues unpredictably, just like a random sequence of coin tosses. This way of coarsening the data, by looking at specific ranges of values and noting only which range the population belongs to, is called symbolic dynamics. In this case, it is possible to prove that, for almost all initial population values x0, the sequence of heads and tails is in all respects like a typical sequence of random tosses of a fair coin. Only when we look at the exact values do we start to see some patterns.
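A short sketch of that coarse-graining in Python; the threshold 0.5 comes from the text, while the starting value x_0 is an arbitrary illustrative choice.

```python
# Symbolic dynamics for the logistic map: record only H (population above 0.5)
# or T (below 0.5) at each generation, discarding the exact values.

def coarse_grain(k, x0, generations):
    """Return the H/T symbol string for one run of the logistic map."""
    symbols = []
    x = x0
    for _ in range(generations):
        symbols.append('H' if x > 0.5 else 'T')
        x = k * x * (1 - x)
    return ''.join(symbols)

print(coarse_grain(k=4.0, x0=0.2, generations=20))
# The underlying values are fully deterministic, yet for almost all x0 the
# H/T string is statistically indistinguishable from fair coin tosses.
```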

It’s a remarkable discovery. A dynamical system can be completely deterministic, with visible patterns in detailed data, yet a coarse-grained view of the same data can be random – in a provable, rigorous sense. Determinism and randomness are not opposites. In some circumstances, they can be entirely compatible.

May didn’t invent the logistic equation, and he didn’t discover its astonishing properties. He didn’t claim to have done either of those things. His aim was to alert workers in the life sciences, especially ecologists, to the remarkable discoveries emerging in the physical sciences and mathematics: discoveries that fundamentally change the way scientists should think about observational data. We humans may have trouble solving equations based on simple rules, but nature doesn’t have to solve the equations the way we do. It just obeys the rules. So it can do things that strike us as being complicated, for simple reasons.

Chaos emerged from a topological approach to dynamics, orchestrated in particular by the American mathematician Stephen Smale and the Russian mathematician Vladimir Arnold in the 1960s. Both were trying to find out what types of behaviour were typical in differential equations. Smale was motivated by Poincaré’s strange results on the three-body problem (Chapter 4), and Arnold was inspired by related discoveries of his former research supervisor Andrei Kolmogorov. Both quickly realised why chaos is common: it is a natural consequence of the geometry of differential equations, as we’ll see in a moment.

As interest in chaos spread, examples were spotted lurking unnoticed in earlier scientific papers. Previously considered to be just isolated weird effects, these examples now slotted into a broader theory. In the 1940s the English mathematicians John Littlewood and Mary Cartwright had seen traces of chaos in electronic oscillators. In 1958 Tsuneji Rikitake of Tokyo’s Association for the Development of Earthquake Prediction had found chaotic behaviour in a dynamo model of the Earth’s magnetic field. And in 1963 the American meteorologist Edward Lorenz had pinned down the nature of chaotic dynamics in considerable detail, in a simple model of atmospheric convection motivated by weather-forecasting. These and other pioneers had pointed the way; now all of their disparate discoveries were starting to fit together.

In particular, the circumstances that led to chaos, rather than something simpler, turned out to be geometric rather than algebraic. In the logistic model with k = 4, both extremes of the population, 0 and 1, move to 0 in the next generation, while the midpoint, 1/2, moves to 1. So at each time-step the interval from 0 to 1 is stretched to twice its length, folded in half, and slapped down in its original location. This is what a cook does to dough when making bread, and by thinking about dough being kneaded, we gain a handle on chaos.

Imagine a tiny speck in the logistic dough – a raisin, say. Suppose that it happens to lie on a periodic cycle, so that after a certain number of stretch-and-fold operations it returns to where it started. Now we can see why this point is unstable. Imagine another raisin, initially very close to the first one. Each stretch moves it further away. For a time, though, it doesn’t move far enough away to stop tracking the first raisin. When the dough is folded, both raisins end up in the same layer. So next time, the second raisin has moved even further away from the first. This is why the periodic state is unstable: stretching moves all nearby points away from it, not towards it. Eventually the expansion becomes so great that the two raisins end up in different layers when the dough is folded. After that, their fates are pretty much independent of each other.

Why does a cook knead dough? To mix up the ingredients (including trapped air). If you mix stuff up, the individual particles have to move in a very irregular way. Particles that start close together end up far apart; points far apart may be folded back to be close together. In short, chaos is the natural result of mixing.
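One way to see stretch-and-fold in miniature is the so-called tent map, a standard cartoon of kneading. It is not mentioned in the text, but it behaves much like the logistic map with k = 4: stretch the interval to twice its length, then fold it back. A brief sketch, with illustrative starting points:

```python
# Tent map: one stretch-and-fold step on the interval [0, 1].
def tent(x):
    return 2 * x if x < 0.5 else 2 * (1 - x)

# Two nearby 'raisins' in the dough; their separation doubles with each
# stretch until a fold finally throws them into different layers.
a, b = 0.2000, 0.2001
for step in range(1, 13):
    a, b = tent(a), tent(b)
    print(f"step {step}: separation = {abs(a - b):.4f}")
```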

I said at the start of this chapter that you don’t have anything chaotic in your kitchen, except perhaps that dishwasher. I lied. You probably have several chaotic gadgets: a food processor, an egg-beater. The blade of the food processor follows a very simple rule: go round and round, fast. The food interacts with the blade: it ought to do something simple too. But it doesn’t go round and round: it gets mixed up. As the blade cuts through the food, some bits go one side of it, some go the other side: locally, the food gets pulled apart. But it doesn’t escape from the mixing bowl, so it all gets folded back in on itself.

Smale and Arnold realised that all chaotic dynamics is like this. They didn’t phrase their results in quite that language, mind you: ‘pulled apart’ was ‘positive Liapunov exponent’ and ‘folded back’ was ‘the system has a compact domain’. But in fancy language, they were saying that chaos is like mixing dough.

This also explains something else, noticed especially by Lorenz in 1963. Chaotic dynamics is sensitive to initial conditions. However close the two raisins are to begin with, they eventually get pulled so far apart that their subsequent movements are independent. This phenomenon is often called the butterfly effect: a butterfly flaps its wings, and a month later the weather is completely different from what it would otherwise have been. The phrase is generally credited to Lorenz. He didn’t introduce it, but something similar featured in the title of one of his lectures. However, someone else invented the title for him, and the lecture wasn’t about the famous 1963 article, but a lesser-known one from the same year.

Whatever the phenomenon is called, it has an important practical consequence. Although chaotic dynamics is in principle deterministic, in practice it becomes unpredictable very quickly, because any uncertainty in the exact initial state grows exponentially fast. There is a prediction horizon beyond which the future cannot be foreseen. For weather, a familiar system whose standard computer models are known to be chaotic, this horizon is a few days ahead. For the Solar System, it is tens of millions of years ahead. For simple laboratory toys, such as a double pendulum (a pendulum hung from the bottom of another one), it is a few seconds ahead. The long-held assumption that ‘deterministic’ and ‘predictable’ are the same is wrong. It would be valid if the present state of a system could be measured with perfect accuracy, but that’s not possible.
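To make the prediction horizon concrete, here is a small sketch using the logistic map with k = 4. The initial uncertainty of 10⁻¹⁰ and the disagreement threshold of 0.1 are illustrative assumptions; the doubling of errors is a known property of this map.

```python
# Butterfly effect in the logistic map at k = 4: errors roughly double each
# generation (Liapunov exponent ln 2), so an uncertainty of 1e-10 grows to
# about 0.1 in roughly log2(1e9), i.e. about 30, generations.
k = 4.0
x, y = 0.2, 0.2 + 1e-10   # two states differing by an unmeasurably small amount
for t in range(1, 51):
    x = k * x * (1 - x)
    y = k * y * (1 - y)
    if abs(x - y) > 0.1:  # the two 'forecasts' now disagree badly
        print(f"prediction horizon reached after {t} generations")
        break
```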

The short-term predictability of chaos can be used to distinguish it from pure randomness. Many different techniques have been devised to make this distinction, and to work out the underlying dynamics if the system is behaving deterministically but chaotically.

Chaos now has applications in every branch of science, from astronomy to zoology. In Chapter 4 we saw how it is leading to new, more efficient trajectories for space missions. In broader terms, astronomers Jack Wisdom and Jacques Laskar have shown that the dynamics of the Solar System is chaotic. If you want to know whereabouts in its orbit Pluto will be in 10,000,000 AD – forget it. They have also shown that the Moon’s tides stabilise the Earth against influences that would otherwise lead to chaotic motion, causing rapid shifts of climate from warm periods to ice ages and back again. So chaos theory demonstrates that, without the Moon, the Earth would be a pretty unpleasant place to live. This feature of our planetary neighbourhood is often used to argue that the evolution of life on a planet requires a stabilising Moon, but this is an overstatement. Life in the oceans would scarcely notice if the planet’s axis changed over a period of millions of years. Life on land would have plenty of time to migrate elsewhere, unless it got trapped somewhere that lacked a land route to a place where conditions were more suitable. Climate change is happening much faster right now than anything that a change in axial tilt could cause.

May’s suggestion that irregular population dynamics in an ecosystem might sometimes be caused by internal chaos, rather than extraneous randomness, has been verified in laboratory versions of several real-world ecosystems. In 1995 a team headed by American ecologist James Cushing found chaotic dynamics in populations of the flour beetle (or bran bug) Tribolium castaneum, which can infest stores of flour.2 In 1999, Dutch biologists Jef Huisman and Franz Weissing applied chaos to the ‘paradox of the plankton’, the unexpected diversity of plankton species.3 A standard principle in ecology, the principle of competitive exclusion, states that an ecosystem cannot contain more species than the number of environmental niches – ways to make a living. Plankton appear to violate this principle: the number of niches is small, but the number of species is in the thousands. Huisman and Weissing traced this to a loophole in the derivation of the principle of competitive exclusion: the assumption that populations are steady. If the populations can change over time, then the mathematical derivation from the usual model fails, and, intuitively, different species can occupy the same niche by taking turns – not by conscious cooperation, but by one species temporarily taking over from another and undergoing a population boom, while the displaced species drops to a small population (Figure 59).

image

Fig 59 Six species sharing three resources. The bands are closely spaced chaotic oscillations. Courtesy of Jef Huisman and Franz Weissing.

In 2008, Huisman’s team published the results of a laboratory experiment with a miniature ecology based on one found in the Baltic Sea, involving bacteria and several kinds of plankton. A six-year study revealed chaotic dynamics in which populations fluctuated wildly, often becoming 100 times as large for a time and then crashing. The usual methods for detecting chaos confirmed its presence. There was even a butterfly effect: the system’s prediction horizon was a few weeks.4

There are applications of chaos that impinge on everyday life, but they mostly occur in manufacturing processes and public services, rather than being incorporated into gadgets. The discovery of the butterfly effect has changed the way weather forecasts are carried out. Instead of putting all of the computational effort into refining a single prediction, meteorologists now run many forecasts, making different tiny random changes to the observations provided by weather balloons and satellites before starting each run. If all of these forecasts agree, then the prediction is likely to be accurate; if they differ significantly, the weather is in a less predictable state. The forecasts themselves have been improved by several other advances, notably in calculating the influence of the oceans on the state of the atmosphere, but the main role of chaos has been to warn forecasters not to expect too much and to quantify how likely a forecast is to be correct.
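A toy version of that ensemble idea, with the logistic map standing in for a real weather model; the observation, perturbation size, ensemble size, and forecast length are all illustrative assumptions:

```python
# Ensemble forecasting in miniature: perturb the observed initial state
# slightly, run many forecasts, and read the spread as a confidence measure.
import random

def forecast(x0, steps, k=4.0):
    x = x0
    for _ in range(steps):
        x = k * x * (1 - x)
    return x

observed = 0.31   # hypothetical observation of the current state
ensemble = [forecast(observed + random.uniform(-1e-6, 1e-6), steps=20)
            for _ in range(50)]
spread = max(ensemble) - min(ensemble)
print(f"ensemble spread after 20 steps: {spread:.3f}")
# Small spread: the forecast is trustworthy. Large spread: the system is in
# a less predictable state, and the forecast deserves little confidence.
```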

Industrial applications include a better understanding of mixing processes, which are widely used to make medicinal pills or mix food ingredients. The active medicine in a pill usually occurs in very small quantities, and it has to be mixed with some inert substance. It’s important to get enough of the active ingredient in each pill, but not too much. A mixing machine is like a giant food processor, and like the food processor, its dynamics is deterministic but chaotic. The mathematics of chaos has provided a new understanding of mixing processes and led to some improved designs. The methods used to detect chaos in data have inspired new test equipment for the wire used to make springs, improving efficiency in spring- and wire-making. The humble spring has many vital uses: it can be found in mattresses, cars, DVD players, even ballpoint pens. Chaotic control, a technique that uses the butterfly effect to keep dynamic behaviour stable, is showing promise in the design of more efficient and less intrusive heart pacemakers.

Overall, though, the main impact of chaos has been on scientific thinking. In the forty years or so since its existence started to be widely appreciated, chaos has changed from a minor mathematical curiosity into a basic feature of science. We can now study many of nature’s irregularities without resorting to statistics, by teasing out the hidden patterns that characterise deterministic chaos. This is just one of the ways in which modern dynamical systems theory, with its emphasis on nonlinear behaviour, is causing a quiet revolution in the way scientists think about the world.