Chapter 4

Patterns at the Brink of Chaos

Mathematics has beauty and romance. It’s not a boring place to be, the mathematical world. It’s an extraordinary place; it’s worth spending time there.

– Marcus du Sautoy

Look up ‘chaos’ in a thesaurus and you’ll find synonyms like ‘turmoil’, ‘lawlessness’, and ‘anarchy’. But the kind of chaos with which mathematicians and scientists deal, in a relatively new field known as chaos theory, is very different. Far from being a disorderly free-for-all, it follows rules, its onset can be foretold, and its behaviour is revealed in patterns of exquisite beauty. Digital communications, modelling of the electrochemistry of nerve cells, and fluid dynamics are among the practical applications of chaos theory. But we’ll take a more scenic approach to the subject by way of a disarmingly simple question.

‘How Long Is the Coast of Britain?’ That’s part of the title of a paper by the Polish-born French-American mathematician Benoît Mandelbrot, a theorist at the IBM Thomas J. Watson Research Center, published in the journal Science in 1967. It seems a straightforward enough problem to solve. Surely, all you’d need to do is measure accurately all the way around the coast and that would be it. In fact, though, the length you measure depends on the scale you use, but in such a way that the length can increase without bound (as opposed to converging to a fixed value) – at least, down to the atomic scale. This puzzling result, that the coastline of an island, or a country or a continent, doesn’t have a well-defined length, was first talked about by the English mathematician and physicist Lewis Fry Richardson several years before Mandelbrot expanded on the idea.

Richardson, a pacifist interested in the theoretical roots of international conflict, wanted to know whether there was a connection between the chances of two countries going to war and the length of their common border. While researching this problem, he found marked discrepancies between the values quoted by different sources. The length of the Spanish–Portuguese border, for instance, was once given by the Spanish authorities as just 987 kilometres, while the Portuguese put it at 1,214 kilometres. Richardson realised that this kind of spread of measurements came about not because anyone was necessarily wrong but because they were using different ‘yardsticks’, or minimum length units, in their calculations. Measure the distance between two points on a wiggly coastline or border with an imaginary giant ruler that’s 100 kilometres long and you’ll get a smaller value than using a ruler that’s half the length. The shorter the ruler, the smaller the wiggles it can take into account and include in the final answer. Richardson showed that the measured length of a wiggly coast or border increased without limit as the yardstick, or unit of measurement, shrank and shrank. Evidently, in the case of the Spanish–Portuguese border, the Portuguese had done their measuring with a shorter length unit.
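Richardson’s yardstick effect is easy to recreate on a computer. The Python sketch below (my own illustration, not Richardson’s method) builds a deterministic wiggly ‘coastline’ by repeatedly adding triangular bumps to a straight line, then walks along it with imaginary dividers of various spans, just as a cartographer would on a map. The shorter the span, the longer the measured length.

```python
import math

def bumpy_curve(depth):
    """Build a wiggly 'coastline' from (0,0) to (1,0) by repeatedly
    replacing the middle third of every segment with a triangular bump."""
    pts = [(0.0, 0.0), (1.0, 0.0)]
    for _ in range(depth):
        new = [pts[0]]
        for (x1, y1), (x2, y2) in zip(pts, pts[1:]):
            dx, dy = (x2 - x1) / 3, (y2 - y1) / 3
            a = (x1 + dx, y1 + dy)            # one third of the way along
            b = (x1 + 2 * dx, y1 + 2 * dy)    # two thirds of the way along
            # apex of an outward-pointing bump on the middle third
            tip = (a[0] + dx / 2 - dy * math.sqrt(3) / 2,
                   a[1] + dy / 2 + dx * math.sqrt(3) / 2)
            new += [a, tip, b, (x2, y2)]
        pts = new
    return pts

def divider_length(pts, span):
    """Walk the curve with fixed-span dividers: jump to the next listed
    point at straight-line distance >= span, and count the steps."""
    steps, anchor = 0, pts[0]
    for p in pts[1:]:
        if math.dist(anchor, p) >= span:
            anchor = p
            steps += 1
    return steps * span

coast = bumpy_curve(7)   # 4**7 tiny segments of detail
for span in (0.3, 0.1, 0.03, 0.01):
    print(f"divider span {span:5.2f} -> measured length {divider_length(coast, span):.2f}")
```

The straight-line distance between the two endpoints is exactly 1, yet every halving of the divider span picks up wiggles the previous span stepped over, and the total keeps climbing.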

Great Britain and Ireland photographed by NASA’s Terra satellite on March 26, 2012.

At the time he published his findings, in 1961, no one paid much attention to this surprising discovery – what’s now called the Richardson effect or coastline paradox. But looking back it’s seen as an important contribution to the development of an extraordinary new branch of mathematics which Mandelbrot, the man who made it famous, eventually described as ‘beautiful, damn hard, increasingly useful’. In 1975, Mandelbrot also coined a name for the weird things at the heart of this novel field of research: fractals. A fractal is something, such as a curve or a space, with a fractional dimension.

To be a fractal all a shape needs is to have a complex structure at all scales, no matter how small. The vast majority of curves or geometrical figures we come across in maths aren’t fractals. A circle, for instance, isn’t a fractal because if we progressively zoom in on part of the circle’s circumference, it looks more and more like a straight line, after which nothing new is revealed at higher magnification. A square isn’t a fractal either. It retains the same structure at its four corners, and everywhere else looks just like a straight line after zooming in. To qualify as a fractal it isn’t even enough to have complex structure at one, or finitely many, points; there has to be that kind of structure at all points. The same is true of shapes in three or more dimensions. Spheres and cubes, for instance, aren’t fractals. But there are many shapes, in many different dimensions, that are.

Going back to the coastline of Great Britain, a small-scale map shows only the largest bays, inlets, and peninsulas. Go to a beach, however, and you’ll see much smaller features – coves, headlands, and so forth. Look even closer, with a magnifying glass or microscope, and you’d be able to discern the minuscule structure of the edges of every rock on the shore, down to smaller and smaller scales. In the real world, there’s a limit to how far it’s possible to zoom in. Below the level of atoms and molecules – and perhaps even well before then – it becomes meaningless to talk about more detail to do with the length of coastlines, and, in any case, the length changes, minute-by-minute, due to erosion and the ebb and flow of the tides. Nevertheless, the coast of Great Britain, and the outline of other islands and countries, are pretty good approximations to fractals and this explains why different sources of data give different values for the lengths of borders. Looking at a map of the whole of Great Britain, you wouldn’t be aware of all the little indentations that you’d see if you actually walked around the coast, and, based on the map, would come up with a much shorter length. Simply strolling along the beach, you’d miss all the fine structures of the rocks and get a much shorter length than if you used a foot-long ruler or something even more precise to measure with greater accuracy all the little ins and outs. The length just keeps on growing, the more precise the measurement gets, rather than approaching, ever more closely, a final ‘true’ figure. In other words, if you use measuring equipment with enough resolution, the length you obtain can be as great as you like (again, within the limits set by the atomic nature of matter).

As well as natural fractals, such as coastlines, there are many purely mathematical fractals. A simple way to make one is to divide a line into three equal sections, draw an equilateral triangle, pointing outwards, that has the middle section as its base, and then remove the section that is the base. This process is then repeated starting from each of the four resulting line sections, and repeated again for each of the new shorter sections, and so on, for as long as you like, or forever. The final result is known as the Koch curve, after the Swedish mathematician Helge von Koch, who wrote about it in a paper in 1904. Three of these Koch curves can be joined to form what’s known as the Koch snowflake. The Koch curve was one of the earliest fractal shapes to be constructed. A couple of other now familiar fractals were described mathematically in the first quarter of the twentieth century by the Polish mathematician Wacław Sierpiński: the Sierpiński sieve (or gasket) and carpet. To make the sieve, Sierpiński started with an equilateral triangle and divided it into four smaller triangles, each half the size of the original. Then he removed the central one to leave three equilateral triangles, repeated the process with each new triangle, and kept on doing this, over and over again. Although such objects have been the subject of serious mathematical study for not much more than a century, artists have known about them since antiquity. The Sierpiński sieve, for instance, appeared in Italian art, such as mosaics in the Cathedral of Anagni, going back to the thirteenth century.
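The snowflake construction can be followed with a few lines of arithmetic. In this sketch (a simple illustration of my own, assuming a starting triangle of side 1), the perimeter grows by a factor of 4/3 at every step and so increases without limit, while the total area closes in on a finite value, 8/5 of the original triangle’s area:

```python
import math

side_count, side_len = 3, 1.0
area = math.sqrt(3) / 4            # area of the starting triangle (side 1)
for k in range(1, 9):
    # every existing side sprouts one new equilateral triangle on its middle third
    new_side = side_len / 3
    area += side_count * math.sqrt(3) / 4 * new_side ** 2
    side_count *= 4                # each side becomes four shorter sides
    side_len = new_side
    print(f"step {k}: perimeter = {side_count * side_len:7.3f}, area = {area:.6f}")

print(f"limiting area (8/5 of the triangle): {8 / 5 * math.sqrt(3) / 4:.6f}")
```

An infinitely long boundary wrapped around a finite patch of the plane: that combination is one of the hallmarks of a fractal curve.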

The first, second, and fourth iterations of the Koch curve.

The Koch snowflake.

Among the most interesting and counterintuitive properties of fractals is their dimension. On hearing the word ‘dimension’ a couple of ideas generally spring to mind, one to do with the overall size of something (as in the ‘dimensions’ of a room) and the other that refers to a unique spatial direction, which is the kind of ‘dimension’ we talked about in Chapter 2. We say that a cube is three-dimensional because it has sides that lie in three different directions at right angles to one another. This second, intuitive understanding of dimension, which relates to the number of perpendicular directions it’s possible to travel in, is roughly equivalent to what in mathematics is called the topological dimension. A sphere has topological dimension 2 because we can travel along it in the directions given by North and South, or by East and West. A ball, on the other hand, has topological dimension 3 because it also has an up and down, where down is towards the centre and up is away from the centre, as on Earth. The topological dimension can even be 4 or greater, as we saw in Chapter 2 (for example, a tesseract has topological dimension 4), but is always a whole number. However, the fractal dimension is different and measures, roughly speaking, how well a curve fills the plane or how well a surface fills space.

There are many different forms of the fractal dimension, but one of the easiest to grasp is the box-counting dimension, also known as the Minkowski–Bouligand dimension. To calculate it in the case of the coastline of Great Britain, we’d cover a map of the coastline with a transparent grid of small squares and count the number of boxes that overlap the coast. Then we’d halve the size of the boxes and count again. If this was done with a straight line, the number of boxes would simply double, or go up by a factor of 2¹, where the exponent (1) is the box-counting dimension. If it was done with a square, the number of boxes would quadruple, or go up by a factor of 2², giving a dimension of 2, and in the case of a cube (using a three-dimensional grid) the number of boxes would increase by a factor of 8 = 2³, because a cube is three-dimensional.
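The box-counting recipe can be tried out directly. The sketch below (a rough illustration of my own; it samples each shape at finitely many points) covers a diagonal line and a filled square with grids of two sizes and recovers the expected dimensions:

```python
import math

def box_count(points, box_size):
    """Count the grid boxes of a given size containing at least one sample point."""
    return len({(int(x // box_size), int(y // box_size)) for x, y in points})

def box_dimension(points, size):
    """Estimate the box-counting dimension by halving the box size."""
    n1, n2 = box_count(points, size), box_count(points, size / 2)
    return math.log(n2 / n1) / math.log(2)

# sample points on a diagonal line segment and in a filled unit square
line   = [(t / 4096, t / 4096) for t in range(4097)]
square = [(i / 256, j / 256) for i in range(256) for j in range(256)]

print(f"line:   d = {box_dimension(line, 1 / 64):.2f}")    # close to 1
print(f"square: d = {box_dimension(square, 1 / 16):.2f}")  # close to 2
```

Applied to sampled points along a fractal boundary instead, the same two-line estimate yields the in-between values this chapter goes on to discuss.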

Most of the shapes we’re used to thinking about have a whole-number dimension – 1, 2, or 3. But fractals are different. In the case of the Koch snowflake, to simplify things, we can use the fact that each element of it, the Koch curve, is made of four smaller Koch curves. If we reduce the size of the boxes in our grid by a factor of 3, we can split the Koch curve into four smaller copies, each one third the size. Each smaller copy has as many small boxes overlapping with it as the larger copy had with the original boxes, so the total number of boxes has been multiplied by 4. This gives us the dimension, d, of the Koch curve (and also the Koch snowflake, which is built of Koch curves) from the relationship 3ᵈ = 4. Solving this equation gives d = log 4/log 3, which is about 1.26, so a Koch snowflake is approximately 1.26-dimensional. Think of this as telling us how much more wiggly a Koch snowflake is than a straight line at any scale we care to choose. Or think of it another way, as the extent to which the Koch snowflake fills the (2D) plane in which it lies; it’s too detailed to be one-dimensional but too simple to be two-dimensional. A line goes no way at all towards filling a plane because not only is it infinitely thin but also it’s simple in form. A fractal, like a Koch curve, is infinitely thin but so convoluted that between any two points, no matter how close they may appear when we zoom out, there’s an infinitely long distance along the curve.
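The same relationship (the scale factor raised to the dimension equals the number of copies) gives the dimension of any exactly self-similar fractal in one line:

```python
import math

def similarity_dimension(copies, scale):
    """A self-similar shape made of `copies` pieces, each `scale` times
    smaller than the whole, has dimension d satisfying scale**d == copies."""
    return math.log(copies) / math.log(scale)

print(f"Koch curve:       {similarity_dimension(4, 3):.2f}")  # 4 copies, 1/3 the size
print(f"Sierpinski sieve: {similarity_dimension(3, 2):.2f}")  # 3 copies, 1/2 the size
```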

Applying the box-counting method to the Sierpiński sieve, we end up with a value for d of about 1.58. This state of affairs, in which objects can have a non-integer dimension, seems very strange. And the strangeness spills over from the realm of the purely mathematical to things in the physical world.

Fractals such as the Koch snowflake and Sierpiński sieve are self-similar, which means that they’re made up of successively smaller copies of themselves. In nature, most fractals aren’t exactly self-similar. However, they’re statistically self-similar and so we can still work out their fractal dimension by applying the box method as before. When this is done, the fractal dimension of the coastline of Great Britain turns out to be about 1.25, remarkably similar to that of the Koch snowflake. In simple language, what this means is that Britain’s coast is one and a quarter times more wiggly, or rough, at all scales than a straight line or any other simple curve. South Africa, by comparison, has a much smoother coastline and a correspondingly lower fractal dimension of 1.05. Norway, with its impressive number of deep and convoluted fjords, scores a fractal dimension of 1.52. The concept can be applied to other natural fractals, one notable example being the human lung. Because the lung itself is obviously three-dimensional, you might expect its surface to be two-dimensional. However, the lung has evolved to have an enormous surface area – between 80 and 100 square metres, or roughly half the area of a tennis court – in order to be able to exchange gases as quickly as possible. So convoluted is the lung’s surface, with all its countless folds and tiny air sacs, or alveoli, that it almost fills the space it contains. Its box dimension works out to be about 2.97 so that, measured in this way, it’s almost three-dimensional.

In the real world, there are only three spatial dimensions but time is also sometimes considered to be the ‘fourth dimension’. It’s no surprise, then, that fractals can exist in time as well as in space. An economic example is the stock market. Over time, there may be large upward and downward fluctuations in the value of stocks, some of which take place over a period of years, and others (such as crashes) that can happen very quickly. As well as this, there are smaller fluctuations, when stocks rise and fall seemingly independently of the large-scale trends, and also tinier fluctuations that happen many times a day, as individual stocks rise and fall by slight amounts. With the computerisation of the stock market, these trends can be followed down to very small slices of time, from minute to minute, and even from one second to the next.

Another example of a time-based fractal is something we’ve already come across – the changing length of the coastline of an island, such as Great Britain. At any given moment, the coastline is a purely spatial fractal, the measured length of which depends on the magnification factor. But over time, as mentioned earlier, there are additional variations because of continual erosion (and deposition), the coming and going of tides and even of individual waves, and the almost imperceptible rise or fall of whole land masses due to tectonic activity.

Of all fractals known to mathematicians, one stands out because of its incredible intricacy. Not only does this fantastic shape have structure at all scales, but at different points at different scales it can look like two completely different fractals! It’s the famous Mandelbrot set, which was described by the American author James Gleick in his book Chaos, perhaps questionably, as ‘the most complex object in mathematics’. Although it carries Benoît Mandelbrot’s name, there has been some dispute over who actually discovered it. Two mathematicians have argued that they’d found it independently at about the same time, while another, John Hubbard of Cornell University, has pointed out that, in early 1979, he went to IBM and showed Mandelbrot how to program a computer to plot out parts of what became known as the Mandelbrot set, after Mandelbrot published a paper on the object the following year. The feeling is that Mandelbrot was a good populariser of the field of fractals and devised clever ways to display fractal images but that he was less than generous in giving credit to other mathematicians where credit was due.

Partial view of the Mandelbrot set.

Fabulously labyrinthine though the Mandelbrot set is, it arises from a very simple rule, which is just applied over and over again. In essence, the rule is this: take a number, square it, and add to it a fixed number. Then feed the result back into the formula, and keep going round and round, or iterating, in this way. The numbers in question are complex numbers – ‘complex’ meaning that each is made up of a real number part and an ‘imaginary’ one (a number times the square root of minus one). The fractal shape emerges when the real and imaginary values of each number are plotted on a graph.

To elaborate on this a little, say we start with a complex number z and a constant c, which is also a complex number. Having chosen a value for z we apply to it the rule, ‘multiply z by itself and add c’, or z² + c. This gives us a new value for z, which we then feed back into the same rule to obtain the next z value. Some values of z will stay the same and others will cycle through a sequence of values before eventually returning to their starting value. Any of these values, that either stays the same or repeats in a cycle, is said to be stable if we can change z very slightly and have the new value follow a path that stays very close to the original path. This is like the situation of a ball in a valley. If the ball is moved slightly, it will just roll back to its original position and is therefore stable. A ball on the peak of a mountain, on the other hand, even if nudged slightly, will roll down the mountain and follow a completely different path, so that this position at the mountain peak is unstable.
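The iteration itself is one line of code, so it’s easy to experiment with. The sketch below (my own example) uses c = −1, a value whose attractor is the two-point cycle 0, −1; a real value of c keeps the printout readable, but complex values work the same way:

```python
def orbit(z, c, steps):
    """Iterate z -> z*z + c and record every value visited."""
    values = [z]
    for _ in range(steps):
        z = z * z + c
        values.append(z)
    return values

# c = -1: starting from z = 0 the orbit falls straight into a two-point cycle
print(orbit(0, -1, 6))            # [0, -1, 0, -1, 0, -1, 0]

# a nearby start is drawn onto that same cycle, so the cycle is stable ...
print(orbit(0.01, -1, 50)[-2:])

# ... while a start far from it runs away towards infinity
print(abs(orbit(2.0, -1, 10)[-1]))
```

In the ball-and-landscape picture, the cycle 0, −1 sits at the bottom of a valley: nudge the starting point slightly and the orbit rolls back onto the same path.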

The stable points, out of those which stay the same or are in a cycle, are known as attractors. There are also other points, not necessarily very close to an attractor to start with, which get closer and closer to it as the iteration process continues. These form the ‘basin of attraction’ for c. Other points may get farther and farther away, diverging to infinity. The boundary of the basin of attraction is known as the Julia set for c. Julia sets are named after the French mathematician Gaston Julia who, along with his compatriot Pierre Fatou, did pioneering work in the early 1900s on the subject of complex dynamics. If you iterate any point on the Julia set, the resulting point will stay on the Julia set, but may move around it without settling into a repeating pattern.

The simplest possible Julia set is when c = 0 because then the rule for getting new values of z becomes simply ‘multiply z by itself’. What happens to a complex number z when it’s iterated in this way? If z starts off inside the unit circle (a circle of radius 1) centred on 0, it will rapidly spiral in towards 0. If z is outside this circle, it will rapidly spiral out to infinity. The Julia set is therefore the boundary of the unit circle, the basin of attraction is everything inside the unit circle, and the attractor is the point 0. Imagine the Julia set with c = 0 to be like a steel ball placed exactly between two magnets; the ball will stay put (on the Julia set, although in practice z can move about unpredictably as long as it stays on the Julia set) but if it were placed even slightly differently it would rapidly be attracted to a magnet. In this case, one of the magnets represents 0 and the other infinity.

This Julia set isn’t terribly interesting, and certainly isn’t a fractal. However, apart from c = 0, Julia sets do indeed form fractals and may come in many different shapes. Sometimes a Julia set is connected and sometimes it isn’t. When it isn’t, it takes the form of Fatou dust, which, as the name suggests, is a cloud of disconnected points. Fatou dust is actually a fractal with dimension less than 1.

The Mandelbrot set is the set of all values of c for which the Julia set is connected. It’s one of the most recognisable, yet counterintuitive, fractals. Although the Mandelbrot set is connected, there are tiny specks that don’t seem to be joined to it at all but in fact are, by means of extremely slender filaments. When magnified, these specks are found to be replicas of the entire Mandelbrot set, which may seem surprising at first but actually fits in with our understanding of the nature of fractals. These offshoots are imperfect replicas, however, no two of which are exactly alike – and for a very good reason that turns out to be one of the most profound facts about the Mandelbrot set. If you zoom in on any point on the boundary of the Mandelbrot set, it begins to look more and more similar to the Julia set at that point. The Mandelbrot set, a single fractal, contains infinitely many completely different fractals in the form of a vast array of Julia sets, all along its boundary. Indeed, the Mandelbrot set has been called a catalogue of Julia sets. Its boundary is so extraordinarily complex that it turns out to be two-dimensional, though it’s conjectured to have zero area.
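A classical result going back to Fatou and Julia makes the set easy to probe by computer: the Julia set for c is connected exactly when the orbit of 0 under z → z² + c stays bounded, and once |z| exceeds 2 the orbit is guaranteed to escape to infinity. That gives the familiar escape-time test, sketched minimally here (the iteration cap is an arbitrary choice of mine):

```python
def in_mandelbrot(c, max_iter=500):
    """Return True if c appears to lie in the Mandelbrot set: the orbit
    of 0 under z -> z*z + c never leaves the disc of radius 2."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

# 0, -1, -2 and 0.25 stay bounded; 1 and 0.5 escape
for c in (0, -1, -2, 0.25, 1, 0.5):
    print(c, in_mandelbrot(c))
```

Colouring each escaping point by how many iterations it took to leave the disc is what produces the familiar psychedelic pictures of the set’s surroundings.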

Fractals often exemplify a straightforward yet counterintuitive principle: it’s possible to generate fantastically complex structures and patterns from extremely simple rules. The Koch snowflake is conjured up by a rule that a child could understand (just add an equilateral triangle onto the middle third of each line) and yet has an elaborate, albeit regular, structure. The Mandelbrot set is vastly more complex but, again, springs from a disarmingly simple recipe of instructions. You start with the function z² + c and, by examining properties and asking questions, arrive at a fractal that has bewildering complexity, looking completely different at different points. Using a computer as a microscope it is possible to zoom in on any part of the Mandelbrot set and discover pattern within pattern, never exactly repeating and never reaching an end.

Fractals have one other interesting property. The fractal dimension of the Koch snowflake, as we’ve seen, is 1.26, which gives an idea of how ‘rough’ it is, or how well it fills the plane. If we take an arbitrary line that intersects the Koch snowflake, the intersection is almost always itself a fractal with dimension 0.26. (There are a few degenerate cases, such as a line of symmetry, which results in two isolated points for a fractal dimension of 0.) This is true for any fractal with dimension between 1 and 2 inclusive. For example, almost all lines that intersect the boundary of the Mandelbrot set form a fractal with dimension 1, though they consist of disconnected points and have length 0.

If we try the same with fractals of dimension less than 1, something else happens. These fractals all consist of a cloud of isolated points. An example is Fatou dust. The surprising result is that almost all lines that intersect Fatou dust do so at only one single point, for a fractal dimension of 0, while almost all lines in general, even those passing through the region occupied by the dust, miss it entirely.

These fractals all exist in two-dimensional space. We can even go down to one-dimensional space and find fractals, consisting of disconnected clouds of points and having a fractal dimension of 1 or less. The most famous example is the Cantor set. Start by taking a line segment. Remove the middle third, leaving two separate line segments. Do the same over and over again. In the end, all line segments have been reduced to disconnected points that comprise a fractal with a fractal dimension of approximately 0.63.
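The middle-thirds construction is simple enough to run by hand, but a short sketch (my own illustration, stopping after six steps) shows the characteristic bookkeeping: the number of pieces doubles while the total length shrinks by a third each time, heading for zero.

```python
import math

# start with the unit interval and repeatedly remove open middle thirds
intervals = [(0.0, 1.0)]
for step in range(1, 7):
    intervals = [piece
                 for (a, b) in intervals
                 for piece in ((a, a + (b - a) / 3), (b - (b - a) / 3, b))]
    total = sum(b - a for a, b in intervals)
    print(f"step {step}: {len(intervals)} pieces, total length {total:.4f}")

# the pieces vanish in the limit, but the dimension log 2 / log 3 remains
print(f"fractal dimension: {math.log(2) / math.log(3):.2f}")
```

What survives in the limit is a dust of uncountably many points with zero total length, yet a definite fractal dimension of about 0.63.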

Fractals are related to another phenomenon in mathematics, known as chaos. Both arise from iterated functions, or rules that are cycled through over and over. Each iteration takes the state of the previous iteration as an input to produce the next state. In the case of fractals, the iterations generate a repeating or somewhat repeating pattern to which there’s no end no matter how much we zoom in. The distinguishing features of chaos are complexity that lacks any repeating pattern and an extreme sensitivity to initial conditions, or the starting state of the system.

The word ‘chaos’ itself is Greek and in its original form means ‘void’ or ‘emptiness’. In classical and mythological notions of creation it was the formless state out of which the universe emerged. In maths and physics, chaos or a chaotic state is equivalent to randomness or lack of pattern. But chaos theory is different from all these and refers to the behaviour of nonlinear dynamic systems under certain conditions. The behaviour of the weather gives a familiar example. Nowadays we can easily forecast the weather in the short term, over a few days or a week, and get it right much of the time. But there are no reliable forecasts for longer timescales, such as a month. That is because of chaos.

Suppose we take the weather starting from a particular initial condition. We can compute the forecast into the future from those conditions. However, if we change the conditions at the start by even a minuscule amount, the weather will very soon become unrecognisably different. This fact is what led to the discovery of chaos in the first place, by the American mathematician and meteorologist Edward Lorenz. In the early 1960s, he was working on a mathematically simplified model of the weather. He plugged numbers into his computer and generated a graph but was interrupted in mid-computation and had to restart the program. Instead of going back to the very start (which would have taken too much time) he started at a point in the middle and typed in the results from there. The graph he got at first seemed to agree with his previous one but soon diverged rapidly, as if it were a completely different run. The reason was that the computer stored more digits internally than it printed out. When Lorenz restarted the program from the printed values, those extra digits were lost, so the input was imperceptibly different from the true state at that point. The tiny difference was amplified, iteration by iteration, until the two graphs bore no resemblance to each other. This gave rise to a principle that Lorenz called the butterfly effect: a reference to the idea that the flap of a butterfly’s wings today might lead to a hurricane a month later.

Simpler equations than those used to predict the weather can show this same effect, revealing the point at which pattern and predictability break down and chaos takes over. Let’s say we start off with some value of x, where x can take any value between and including 0 and 1. Then we multiply x by (1 – x) and also by a constant number k, where k is between 1 and 4 inclusive. The new value of x is cycled back into this formula, again and again. In mathematical jargon, the process can be summarised as: x → kx(1 – x) for 0 ≤ x ≤ 1 and 1 ≤ k ≤ 4. What we find is that for values of k that are less than or equal to 3, there’s an attractor consisting of a single point, with every value of x (apart from 0 and 1) converging to it. For values of k between 3 and 3.45, the attractor consists of two points, which alternate. When k lies between 3.45 and 3.54, the attractor consists of four points, then eight, and so on, doubling more and more often. At approximately k = 3.57, a big change takes place and the doubling goes from happening faster and faster to happening an infinite number of times. At this point the system can never settle down to a steady pattern and becomes completely chaotic. Chaos emerges when a predictable system becomes completely unpredictable. For example, in this case, when k is less than 3, it’s simple to predict that after, say, 100 iterations a point will be very close to the single attractor. For k greater than 3.57, there’s no way for us to predict the long-term behaviour of any point.
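All three regimes can be watched in a few lines of Python. This sketch (my own choices of starting values) shows convergence to a single point at k = 2.5, a two-point cycle at k = 3.2, and, at k = 4, the hallmark of chaos: a difference of one part in a billion in the starting value is soon amplified beyond recognition.

```python
def iterate(k, x, steps):
    """Apply the logistic rule x -> k*x*(1 - x) the given number of times."""
    for _ in range(steps):
        x = k * x * (1 - x)
    return x

# k = 2.5: every start (other than 0 or 1) settles onto one attracting point
print(iterate(2.5, 0.2, 1000))   # approaches 1 - 1/k = 0.6
print(iterate(2.5, 0.9, 1000))

# k = 3.2: the attractor is a two-point cycle the orbit hops between
x = iterate(3.2, 0.2, 1000)
print(x, iterate(3.2, x, 1), iterate(3.2, x, 2))   # x, its partner, then x again

# k = 4: chaos, with extreme sensitivity to the starting value
a, b, gap = 0.2, 0.2 + 1e-9, 0.0
for n in range(80):
    a, b = iterate(4, a, 1), iterate(4, b, 1)
    gap = max(gap, abs(a - b))
print(gap)   # the billionth-part head start has grown enormously
```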

The doubling of attractor points, from one point to two to four, and so on, which happened when k exceeded the value 3 in the example we just looked at, is governed by an important mathematical constant known as the Feigenbaum constant. We can see how this important number emerges in the lead-up to chaos. The first phase, with a cycle of one point, has length 2, because it lasts from k = 1 to k = 3. The second, with a cycle of two points, has length approximately 0.45, because it lasts from k = 3 to k = 3.45. The ratio 2:0.45 is approximately 4.45. The third phase has length approximately 0.095. The ratio 0.45:0.095 is approximately 4.74, and so on. These ratios eventually converge to the Feigenbaum constant, which is approximately 4.669. Each phase is shorter than the last by roughly this constant factor, so that by k = 3.57 the doubling has occurred infinitely many times.
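The ratios can be checked with a few lines of arithmetic. The values below for the points at which each doubling happens are commonly tabulated figures for this map (quoted, not computed here); the successive phase-length ratios visibly close in on 4.669.

```python
# commonly quoted parameter values at which the attractor's period doubles
# (1 -> 2 -> 4 -> 8 -> 16 -> 32 points)
doublings = [1.0, 3.0, 3.449490, 3.544090, 3.564407, 3.568759]

phases = [b - a for a, b in zip(doublings, doublings[1:])]
for p1, p2 in zip(phases, phases[1:]):
    print(f"phase length {p1:.6f} -> ratio to next: {p1 / p2:.3f}")
```

With these slightly more precise values the ratios run roughly 4.45, 4.75, 4.66, 4.67, heading towards 4.669.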

The Feigenbaum constant emerges from the process we’ve just considered, but what makes it fundamental to chaos theory is that it can be found in all similar chaotic systems. No matter what the equation, as long as it satisfies some basic conditions, its attractor will undergo the same period-doubling cascade, with the successive phases shrinking according to the Feigenbaum constant.

To see how chaotic processes can generate fractals, we could take the iterative process above and plot the attractors for each k. Most of what appears after k = 3.57 is pure chaos, but there are a few values of k for which there’s a finite attractor. These are known as islands of stability. One such island occurs around k = 3.82, where we find an attractor consisting of just three values. Zoom in on any one of these values and what we see looks similar, though not exactly identical, to the entire graph.
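The period-3 island is easy to visit. In this sketch (the burn-in length and starting value are my own choices), iterating at k = 3.83, well inside the window, leaves the orbit hopping between just three values:

```python
def step(k, x):
    return k * x * (1 - x)

# inside the island of stability near k = 3.82 the attractor has three points
k, x = 3.83, 0.3
for _ in range(10000):        # discard the chaotic-looking transient
    x = step(k, x)

cycle = []
for _ in range(3):            # record one full trip around the attractor
    cycle.append(x)
    x = step(k, x)
print([f"{v:.6f}" for v in cycle])
print("back to the start?", abs(x - cycle[0]) < 1e-9)
```

Nudge k a little outside the window and the same code produces an endless, never-repeating stream of values again.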

In his pioneering studies of chaos, Lorenz also found a new kind of fractal, known as a strange attractor. Ordinary attractors are simple in the sense that points converge to them and then follow a fixed cycle along them. But strange attractors behave differently, as we’ll see. Lorenz used a system of differential equations to form the first example of one. When he zoomed in on any point on it, it gave the appearance of infinitely many parallel lines. Any point on the attractor followed a chaotic path along the attractor, never returning exactly to its original position, and two points that started very close to each other rapidly diverged and ended up following very different paths. For a physical analogy of this, imagine a ping-pong ball and an ocean. If the ping-pong ball is released above the ocean it will rapidly fall until it reaches the water. If it’s released below the surface it will rapidly float up. But once it’s on the ocean’s surface, its motion becomes unpredictable and chaotic. Likewise, if a point is not on a strange attractor it will rapidly move towards it. Once it’s on the strange attractor, though, it moves around in a chaotic manner.
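Lorenz’s equations are simple enough to integrate with a crude method. The sketch below (a rough Euler-method illustration; the step size and starting points are my own choices, with Lorenz’s classic parameter values) follows two copies of the system whose starting points differ in the eighth decimal place: they stay microscopically close at first, then end up far apart on the attractor.

```python
def lorenz_step(x, y, z, dt=0.002, sigma=10.0, rho=28.0, beta=8 / 3):
    """One small Euler step of the Lorenz equations."""
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)          # differs in the eighth decimal place
early_gap, max_gap = 0.0, 0.0
for n in range(1, 15001):           # integrate up to t = 30
    a, b = lorenz_step(*a), lorenz_step(*b)
    gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    if n == 1000:
        early_gap = gap             # at t = 2 the runs are still almost identical
    max_gap = max(max_gap, gap)

print(f"separation at t = 2: {early_gap:.2e}")
print(f"largest separation by t = 30: {max_gap:.2f}")
```

Both trajectories remain on the butterfly-shaped attractor throughout; it’s only their positions along it that become utterly uncorrelated.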

Fractals are fascinating to explore and among the most visually stunning objects in maths. But they’re also profoundly important in the physical world. Anything in nature that appears random and irregular may be a fractal. In fact, it could be argued that everything that exists is a fractal since it will have some structure at every level, at least down to that of an atom. Clouds, the veins in our hands, the branching of our tracheal tubes, the leaves of a tree – all show a fractal structure. In cosmology, the distribution of matter across the universe is like a fractal and its structure may descend below the atomic and nuclear level down as far as the shortest length to which any physical meaning has been ascribed, the so-called Planck length, a mere 1.6×10⁻³⁵ metre, or about one hundred million trillionth the width of a proton.

A strange attractor known as ‘Thomas’ cyclically symmetric attractor’.

Fractals crop up not just in spatial patterns but also in temporal ones. Drumming is a case in point. It’s easy to program a computer to generate a rhythmic drum pattern or have a robot musician play one. But there’s something about the sounds produced by professional drummers that distinguishes them from the perfectly steady, impeccably accurate beats of their synthetic counterparts. That ‘something’ is the slight variations in timing and loudness – the little deviations from perfection – which, research has shown, are fractal in nature.

An international team of scientists analysed the drumming of Jeff Porcaro, who played with the band Toto and was famed for his rapid and intricate one-handed playing of the hi-hat cymbals. In both the rhythm and loudness of Porcaro’s hits on the hi-hat, the researchers found self-similar patterns with structures in longer periods of time that echoed structures present in shorter time intervals. Porcaro’s hits are the sonic equivalent of a fractal coastline, revealing self-similarity at different scale lengths. What’s more, the researchers found that listeners prefer exactly this type of variation, as opposed to precise percussion or that produced more randomly.

The fractal patterns differ from one drummer to another, forming part of what makes their playing distinctive. Similar patterns occur when musicians perform on other instruments and, although subtle, are the minute imperfections that separate human from machine.

Because many things in the world around us are fractals – or good approximations to them – a computer can quickly create a picture of something that closely resembles an object in nature, such as a tree. All it needs is a formula to work with and some starting data and, in the wink of an eye, it can assemble a breathtakingly lifelike representation. Not surprisingly then, this technique of rapidly rendering clouds, moving water, landscapes, rocks, plants, planets, and all manner of other scenery items has become a favourite of those working with CGI-enhanced movies, animated films, flight simulators, and computer games. There’s no need for vast databases to hold all the objects and scenes needed to produce a realistic moving scene when the computer can calculate it all on the fly by just cycling at high speed through a few simple rules. This approach promises to play a major role in future virtual reality and other immersive technologies where the goal is to generate, in real time, 3D imagery indistinguishable from the actual thing.