Benoit Mandelbrot (b. 1924), a maverick mathematician of Lithuanian extraction, coined the word ‘fractal’ (from the Latin fractus, broken) in 1975, to describe irregular geometrical shapes that repeat themselves endlessly on smaller and smaller scales – a feature known as ‘self-similarity’. Fractals, he found, were everywhere in nature – in coastlines, trees, mountains, forked lightning, the cardio-vascular system, or cauliflowers (where each floret of the edible head resembles the whole). Fractal pictures, produced by repeatedly feeding equations into computers, became popular in T-shirt and poster design in the 1980s. The connection between fractals and chaos theory is explained here by Caroline Series, reader in mathematics at Warwick University.
Most people have become familiar in recent years with pictures of fractals, those elusive shapes that, no matter how you magnify them, still look infinitely crinkled. The pictures you saw were probably drawn by computer, but examples abound in nature – the edge of a leaf, the outline of a tree, or the course of a river. Fractal curves differ from those studied in normal geometry. The curve of a circle, for instance, if magnified sufficiently, just about becomes a straight line. A fractal curve, on the other hand, when viewed on many different scales, from macroscopic to microscopic, reveals the same intricate pattern of convolutions.
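Self-similarity can be given a simple arithmetic form. The sketch below is an editorial illustration rather than part of Series's piece; it uses the Koch curve, a textbook fractal built by endlessly replacing each straight segment with four segments a third as long, to show why such a curve never straightens out under magnification and has no finite length.

```python
# Illustrative sketch (not from the excerpt): the Koch curve.
# Each refinement replaces every straight segment with four segments,
# each one third as long, so the total length is multiplied by 4/3.

def koch_length(base_length: float, depth: int) -> float:
    """Length of the Koch curve approximation after `depth` refinements."""
    return base_length * (4.0 / 3.0) ** depth

for depth in range(9):
    print(f"refinement {depth}: length = {koch_length(1.0, depth):.4f}")

# Unlike a circle, which looks straighter the closer you get, the curve
# shows the same crinkled detail at every scale, and in the limit its
# length grows without bound.
```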
In recent years, there has been an explosion of interest in fractals. Previously, only a few people had appreciated the significance and beauty of these strange shapes. Benoit Mandelbrot drew much attention to their potential use in describing the natural world. At the same time, the development of high-speed computing and computer graphics has made them easily accessible, and this has drawn many people to study them more closely.
Another reason for the interest in fractals is that they are connected with chaos. In mathematics, chaos has a specialized meaning. The easiest way to understand chaos is by some examples.
Suppose a particle is moving in a confined region of space according to a definite deterministic law. Following the path traced out by the particle, we are likely to observe that it settles down to one of three possible behaviours – the geometrical description of which is called an attractor. The particle may be attracted to a final resting position (like, for example, the bob on a pendulum as it gradually settles down to rest). In this case, the attractor is just a point – the final resting position of the bob. Or the particle may settle down in a periodic cycle (like the planets in their orbits around the Sun). Here the attractor is an ellipse and the future motion can be predicted with astonishingly high accuracy as far ahead as we want. The last possibility is that the particle may continue to move wildly and erratically while, nevertheless, remaining in some bounded region of space. The motion of some of the asteroids, for example, appears to exhibit exactly this phenomenon. Tiny inaccuracies in measuring the position and speed of the asteroid quickly lead to enormous errors in predicting its future path. This phenomenon is the signal of chaotic motion. The regions of space traced out by such motions are called strange attractors.
Once a particle is attracted to a strange attractor there is no escaping. Almost anywhere you start inside the attractor, the point moves, on the average, in the same way, just as no matter how you start off a pendulum, it always eventually comes to rest at the same point. Although the motion is specified by precise laws, for all practical purposes, the particle behaves as if it were moving randomly. The interesting point here is that strange attractors are very frequently fractals.
The meteorologist Edward Lorenz pioneered chaos theory in a 1963 paper (though he did not call it chaos theory – the name was coined in 1975 by the mathematician James Yorke). Studying weather systems, Lorenz attributed their unpredictability to the fact that a very small initial difference could enormously change the future state of the system. This became known as ‘the butterfly effect’ from the title of the talk Lorenz gave in 1972, ‘Predictability: Does the Flap of a Butterfly’s Wings in Brazil Set Off a Tornado in Texas?’ Lorenz at first used a seagull as his example, but the butterfly was more dramatic.
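Lorenz's model itself can be written down in a few lines. The following sketch is an editorial illustration (it appears in none of the excerpts quoted here): it steps his three 1963 equations forward with a crude Euler integration, using his standard parameter values, and follows two trajectories whose starting points differ by one part in a million.

```python
# Illustrative sketch: Lorenz's 1963 convection equations, the original
# example of a strange attractor. Two runs start almost identically.

def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system by one (crude) Euler step."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

dt = 0.001
a = (1.0, 1.0, 1.0)         # first trajectory
b = (1.000001, 1.0, 1.0)    # second trajectory, shifted by one part in a million

for step in range(1, 40001):
    a = lorenz_step(a, dt)
    b = lorenz_step(b, dt)
    if step % 5000 == 0:
        gap = ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 + (a[2] - b[2]) ** 2) ** 0.5
        print(f"t = {step * dt:5.1f}   separation = {gap:.6f}")

# The separation grows roughly exponentially until it is as large as the
# attractor itself, after which the two computed 'weathers' are unrelated:
# the butterfly effect in miniature.
```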
Fractal pictures, and their link with chaos, have inspired science-writers to make claims about the underlying ‘beauty’ of nature or mathematics. Ian Stewart’s rhapsody is typical.
Chaos is beautiful. This is no accident. It is visible evidence of the beauty of mathematics, a beauty normally confined within the inner eye of the mathematician but which here spills over into the everyday world of human senses. The striking computer graphics of chaos have resonated with the global consciousness; the walls of the planet are papered with the famous Mandelbrot sets.
This seems to assume that beauty is an absolute, rather than a subjective value. But of course it is not. In reality any of us might protest that fractal pictures are not beautiful, but as kitsch as a luminous cauliflower, and no one could prove us wrong. To any arts undergraduate this would be an obvious point, and Stewart’s neglect of it illustrates the gap that still exists between the two cultures. So (in the reverse direction) does the confusion among arts students about chaos theory, which is widely interpreted as meaning both that causation does not exist (‘nature is chaotic’) and, simultaneously, that chains of causation are so rigid that a butterfly’s wing can cause a tornado. Paul Davies’s is one of the most lucid attempts to resolve these difficulties in terms intelligible to the semi-numerate.
All science is founded on the assumption that the physical world is ordered. The most powerful expression of this order is found in the laws of physics. Nobody knows where these laws come from, nor why they apparently operate universally and unfailingly, but we see them at work all around us: in the rhythm of night and day, the pattern of planetary motions, the regular ticking of a clock.
The ordered dependability of nature is not, however, ubiquitous. The vagaries of the weather, the devastation of an earthquake, or the fall of a meteorite seem to be arbitrary and fortuitous. Small wonder that our ancestors attributed these events to the moodiness of the gods. But how are we to reconcile these apparently random ‘acts of God’ with the supposed underlying lawfulness of the Universe?
The ancient Greek philosophers regarded the world as a battleground between the forces of order, producing cosmos, and those of disorder, which led to chaos. They believed that random or disordering processes were negative, evil influences. Today, we don’t regard the role of chance in nature as malicious, merely as blind. A chance event may act constructively, as in biological evolution, or destructively, such as when an aircraft fails from metal fatigue.
Though individual chance events may give the impression of lawlessness, disorderly processes, as a whole, may still display statistical regularities. Indeed, casino managers put as much faith in the laws of chance as engineers put in the laws of physics. But this raises something of a paradox. How can the same physical processes obey both the laws of physics and the laws of chance?
Following the formulation of the laws of mechanics by Isaac Newton in the 17th century, scientists became accustomed to thinking of the Universe as a gigantic mechanism. The most extreme form of this doctrine was strikingly expounded by Pierre Simon de Laplace in the 19th century. He envisaged every particle of matter as unswervingly locked in the embrace of strict mathematical laws of motion. These laws dictated the behaviour of even the smallest atom in the most minute detail. Laplace argued that, given the state of the Universe at any one instant, the entire cosmic future would be uniquely fixed, to infinite precision, by Newton’s laws.
The concept of the Universe as a strictly deterministic machine governed by eternal laws profoundly influenced the scientific world view, standing as it did in stark contrast to the old Aristotelian picture of the cosmos as a living organism. A machine can have no ‘free will’; its future is rigidly determined from the beginning of time. Indeed, time ceases to have much physical significance in this picture, for the future is already contained in the present. Time merely turns the pages of a cosmic history book that is already written.
Implicit in this somewhat bleak mechanistic picture was the belief that there are actually no truly chance processes in nature. Events may appear to us to be random but, it was reasoned, this could be attributed to human ignorance about the details of the processes concerned. Take, for example, Brownian motion. A tiny particle suspended in a fluid can be observed to execute a haphazard zigzag movement as a result of the slightly uneven buffeting it suffers at the hands of the fluid molecules that bombard it. Brownian motion is the archetypical random, unpredictable process. Yet, so the argument ran, if we could follow in detail the activities of all the individual molecules involved, Brownian motion would be every bit as predictable and deterministic as clockwork. The apparently random motion of the Brownian particle is attributed solely to the lack of information about the myriads of participating molecules, arising from the fact that our senses are too coarse to permit detailed observation at the molecular level.
For a while, it was commonly believed that apparently ‘chance’ events were always the result of our ignoring, or effectively averaging over, vast numbers of hidden variables, or degrees of freedom. The toss of a coin or a die, the spin of a roulette wheel – these would no longer appear random if we could observe the world at the molecular level. The slavish conformity of the cosmic machine ensured that lawfulness was folded up in even the most haphazard events, albeit in an awesomely convoluted tangle.
Two major developments of the 20th century have, however, put paid to the idea of a clockwork universe. First there was quantum mechanics. At the heart of quantum physics lies Heisenberg’s uncertainty principle, which states that everything we can measure is subject to truly random fluctuations. Quantum fluctuations are not the result of human limitations or hidden degrees of freedom; they are inherent in the workings of nature on an atomic scale. For example, the exact moment of decay of a particular radioactive nucleus is intrinsically uncertain. An element of genuine unpredictability is thus injected into nature.
Despite the uncertainty principle, there remains a sense in which quantum mechanics is still a deterministic theory. Although the outcome of a particular quantum process might be undetermined, the relative probabilities of different outcomes evolve in a deterministic manner. What this means is that you cannot know in any particular case what will be the outcome of the ‘throw of the quantum dice’, but you can know completely accurately how the betting odds vary from moment to moment. As a statistical theory, quantum mechanics remains deterministic. Quantum physics thus builds chance into the very fabric of reality, but a vestige of the Newtonian-Laplacian world view remains.
Then along came chaos. The essential ideas of chaos were already present in the work of the mathematician Henri Poincaré at the turn of the century, but it is only in recent years, especially with the advent of fast electronic computers, that people have appreciated the full significance of chaos theory.
The key feature of a chaotic process concerns the way that predictive errors evolve with time. Let me first give an example of a non-chaotic system: the motion of a simple pendulum. Imagine two identical pendulums swinging in exact synchronism. Suppose that one pendulum is slightly disturbed so that its motion gets a little out of step with the other pendulum. This discrepancy, or phase shift, remains small as the pendulums go on swinging.
Faced with the task of predicting the motion of a simple pendulum, one could measure the position and velocity of the bob at some instant, and use Newton’s laws to compute the subsequent behaviour. Any error in the initial measurement propagates through the calculation and appears as an error in the prediction. For the simple pendulum, a small input error implies a small output error in the predictive computation. In a typical non-chaotic system, errors accumulate with time. Crucially, though, the errors grow only in proportion to the time (or perhaps a small power thereof), so they remain relatively manageable.
Now let me contrast this property with that of a chaotic system. Here a small starting difference between two identical systems will rapidly grow. In fact, the hallmark of chaos is that the motions diverge exponentially fast. Translated into a prediction problem, this means that any input error multiplies itself at an escalating rate as a function of prediction time, so that before long it engulfs the calculation, and all predictive power is lost. Small input errors thus swell to calculation-wrecking size in very short order.
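The difference between the two kinds of error growth can be made concrete with a toy calculation. The sketch below is an editorial illustration, not Davies's own example; it uses the ‘doubling map’, a standard chaotic rule in which every step exactly doubles whatever error was present in the starting value.

```python
# Illustrative sketch (not from Davies's text): exponential error growth.
# The doubling map x -> 2x mod 1 is a standard chaotic rule; any error in
# the starting value is doubled at every step.

def doubling_map(x: float) -> float:
    return (2.0 * x) % 1.0

x_true = 0.1234567   # the 'real' starting value
x_meas = 0.1234568   # our measurement, wrong in the seventh decimal place

for step in range(31):
    if step % 5 == 0:
        print(f"step {step:2d}: true = {x_true:.7f}  predicted = {x_meas:.7f}  "
              f"error = {abs(x_true - x_meas):.7f}")
    x_true = doubling_map(x_true)
    x_meas = doubling_map(x_meas)

# Within about twenty-five steps the error, initially one part in ten
# million, has been doubled up to something of order one and the prediction
# is worthless. In a non-chaotic rule the error would at worst have grown
# in proportion to the number of steps.
```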
The distinction between chaotic and non-chaotic behaviour is well illustrated by the case of the spherical pendulum, this being a pendulum free to swing in two directions. In practice, this could be a ball suspended on the end of a string. If the system is driven in a plane by a periodic motion applied at the pivot, it will start to swing about. After a while, it may settle into a stable and entirely predictable pattern of motion, in which the bob traces out an elliptical path with the driving frequency. However, if you alter the driving frequency slightly, this regular motion may give way to chaos, with the bob swinging first this way and then that, doing a few clockwise turns, then a few anticlockwise turns in an apparently random manner.
The randomness of this system does not arise from the effect of myriads of hidden degrees of freedom. Indeed, by modelling mathematically only the three observed degrees of freedom (the three possible directions of motion), one may show that the behaviour of the pendulum is nonetheless random. And this is in spite of the fact that the mathematical model concerned is strictly deterministic …
Chaos evidently provides us with a bridge between the laws of physics and the laws of chance. In a sense, chance or random events can indeed always be traced to ignorance about details, but whereas Brownian motion appears random because of the enormous number of degrees of freedom we are voluntarily overlooking, deterministic chaos appears random because we are necessarily ignorant of the ultra-fine detail of just a few degrees of freedom. And whereas Brownian chaos is complicated because the molecular bombardment is itself a complicated process, the motion of, say, the spherical pendulum is complicated even though the system itself is very simple. Thus, complicated behaviour does not necessarily imply complicated forces or laws. So the study of chaos has revealed how it is possible to reconcile the complexity of a physical world displaying haphazard and capricious behaviour with the order and simplicity of underlying laws of nature.
Though the existence of deterministic chaos comes as a surprise, we should not forget that nature is not, in fact, deterministic anyway. The indeterminism associated with quantum effects will intrude into the dynamics of all systems, chaotic or otherwise, at the atomic level. It might be supposed that quantum uncertainty would combine with chaos to amplify the unpredictability of the Universe. Curiously, however, quantum mechanics seems to have a subduing effect on chaos. A number of model systems that are chaotic at the classical level are found to be non-chaotic when quantized. At this stage, the experts are divided about whether quantum chaos is possible, or how it would show itself if it did exist. Though the topic will undoubtedly prove important for atomic and molecular physics, it is of little relevance to the behaviour of macroscopic objects, or to the Universe as a whole.
What can we conclude about Laplace’s image of a clockwork universe? The physical world contains a wide range of both chaotic and non-chaotic systems. Those that are chaotic have severely limited predictability, and even one such system would rapidly exhaust the entire Universe’s capacity to compute its behaviour. It seems, then, that the Universe is incapable of digitally computing the future behaviour of even a small part of itself, let alone all of itself. Expressed more dramatically, the Universe is its own fastest simulator.
This conclusion is surely profound. It means that, even accepting a strictly deterministic account of nature, the future states of the Universe are in some sense ‘open’. Some people have seized on this openness to argue for the reality of human free will. Others claim that it bestows upon nature an element of creativity, an ability to bring forth that which is genuinely new, something not already implicit in earlier states of the Universe, save in the idealized fiction of the real numbers. Whatever the merits of such sweeping claims, it seems safe to conclude from the study of chaos that the future of the Universe is not irredeemably fixed. The final chapter of the great cosmic book has yet to be written.
Chaos theory reached the West End stage with Tom Stoppard’s play Arcadia, where Valentine explains it to Chloë:
CHLOË: The future is all programmed like a computer – that’s a proper theory, isn’t it?
VALENTINE: The deterministic universe, yes.
CHLOË: Right. Because everything including us is just a lot of atoms bouncing off each other like billiard balls.
VALENTINE: Yes. There was someone, forget his name, nineteenth century, who pointed out that from Newton’s laws you could predict everything to come – I mean, you’d need a computer as big as the universe but the formula would exist.
CHLOË: But it doesn’t work, does it?
VALENTINE: No. It turns out the maths is different.
Stoppard’s Valentine rejoices in unpredictability, because he sees it as restoring mystery to ordinary life:
The unpredictable and the predetermined unfold together to make everything the way it is. It’s how nature creates itself, on every scale, the snowflake and the snowstorm. It makes me so happy. To be at the beginning again, knowing almost nothing. People were talking about the end of physics. Relativity and quantum looked as if they were going to clean out the whole problem between them. A theory of everything. But they only explained the very big and the very small. The universe, the elementary particles. The ordinary-sized stuff which is our lives, the things people write poetry about – clouds – daffodils – waterfalls – and what happens in a cup of coffee when the cream goes in – these things are full of mystery, as mysterious to us as the heavens were to the Greeks. We’re better at predicting events at the edge of the galaxy or inside the nucleus of an atom than whether it’ll rain on auntie’s garden party three Sundays from now. Because the problem turns out to be different. We can’t even predict the next drip from a dripping tap when it gets irregular. Each drip sets up the conditions for the next, the smallest variation blows prediction apart, and the weather is unpredictable the same way, will always be unpredictable. When you push the numbers through the computer you can see it on the screen. The future is disorder. A door like this has cracked open five or six times since we got up on our hind legs. It’s the best possible time to be alive, when almost everything you thought you knew is wrong.
Arcadia’s scientific adviser was Robert May, a Royal Society Research Professor at Oxford and Imperial College London, and one of the pioneers of chaos theory. His programme note for the play is the best succinct summary:
The vision given to us by Newton and by those who followed in the Age of Enlightenment is of an orderly and predictable world, governed by laws and rules – laws and rules which can best be expressed in mathematical form. If the circumstances are simple enough (for instance, a planet moving around a sun, bound by the inverse square law of gravitational attraction), then the system behaves in a simple and predictable way. Effectively unpredictable situations (for instance, a roulette ball whose fate – the winning number – is governed by a complex concatenation of croupier’s hand, spinning wheel, and so on) were thought to arise only because the rules were many and complicated.
Over the past 20 years or so, this Newtonian vision has splintered and blurred. It is now widely recognised that the simplest rules or algorithms or mathematical equations, containing no random elements whatsoever, can generate behaviour which is as complicated as anything we can imagine.
This mathematics which ‘is different’ is the mathematics of ‘deterministic chaos’. What it says is that a situation can be both deterministic and unpredictable; that is, unpredictable without being random (on the one hand) or (on the other hand) attributable to very complicated causes.
‘Simple’ as they may be in themselves, these chaos-generating equations have the property of being ‘nonlinear’. In a linear equation you can ‘guess ahead’. Imagine a road lined with telegraph poles in a perspective drawing. Given two or three poles, you can easily draw in the rest for yourself. But nature often draws itself differently, using nonlinear equations. Imagine a river running alongside the road. The water has flat bits and bumpy bits. But however many I draw in for you, there is no way for you to tell (with a real river) where the next flat bit or bumpy bit is going to be. This holds true on every scale. Look down from a balloon and you’ll see that parts of the bumpy bits look relatively flat. Put your face close to the water and you’ll see that the flat bits contain relatively bumpy bits. The maths is the same for each case, and equally unpredictable.
In this sense, ‘nonlinear’ means two and two do not necessarily make four. Much of physics, and of the other areas of science where so much progress has come, is linear (or at least decomposable into essentially linear bits). And so mathematical texts and courses have focused on linear problems. But increasingly it seems that most of what is interesting in the natural world, and especially in the biological world of living things, involves nonlinear mathematics. It was biologists – working on the ups and downs of animal populations – who were among the first to see that not only can simple rules give rise to behaviour which looks very complicated, but the behaviour can be so sensitive to the starting conditions as to make long-term prediction impossible (even when you know the rule).
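May's point about animal populations can be illustrated with the simplest such rule. The sketch below is an editorial illustration; the ‘logistic map’ it uses is the standard textbook example of a chaotic population model (with a conventional choice of breeding rate), not a rule quoted in the programme note itself.

```python
# Illustrative sketch: the logistic map, a one-line population rule.
# x is the population as a fraction of the largest the habitat can support;
# r (here the conventional chaotic value 3.9) sets how fast it breeds.

def next_year(x: float, r: float = 3.9) -> float:
    return r * x * (1.0 - x)

pop_a = 0.200000   # one starting population
pop_b = 0.200001   # another, differing by one part in two hundred thousand

for year in range(1, 41):
    pop_a = next_year(pop_a)
    pop_b = next_year(pop_b)
    if year % 5 == 0:
        print(f"year {year:2d}: a = {pop_a:.4f}  b = {pop_b:.4f}  "
              f"difference = {abs(pop_a - pop_b):.4f}")

# The rule contains no random element, yet the numbers never settle down,
# and within thirty years or so the two runs have nothing to do with one
# another: simple rules, complicated and unpredictable behaviour.
```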
There is a flip side to the chaos coin. Previously, if we saw complicated, irregular or fluctuating behaviour – weather patterns, marginal rates of Treasury Bonds, colour patterns of animals or shapes of leaves – we assumed the underlying causes were complicated. Now we realize that extraordinarily complex behaviour can be generated by the simplest of rules. It seems likely to me that much complexity and apparent irregularity seen in nature, from the development and behaviour of individual creatures to the structure of ecosystems, derives from simple – but chaotic – rules. (But, of course, a lot of what we see around us is very complicated because it is intrinsically so!)
I believe all this adds up to one of the real revolutions in the way we think about the world. Knowing the simple rule or equation that governs a system is not always sufficient to predict its behaviour. And, conversely, exceedingly complicated patterns or behaviour may derive not from exceedingly complex causes, but from the chaotic workings of some very simple algorithm. Ultimately, the mathematics of chaos offers new and deep insights into the structure of the world around us, and at the same time raises old questions about why abstract mathematics should be so unreasonably effective in describing this world.
Sources: (for Caroline Series’s and Paul Davies’s pieces) The New Scientist Guide to Chaos, ed. Nina Hall, London, Penguin, 1991; Tom Stoppard, Arcadia, Faber and Faber, 1993; Robert May, Programme note to Arcadia, 1993.