3
Trinidad

Sanctus, sanctus, sanctus: what I tell you three times is really, very, truly, awesome. Three doesn’t just talk; Three asserts. Three is superlative; Three thinks big. Great, greater, greatest: a rhythm of three gives ten, hundred, thousand. A thousand is ten times ten times ten, and Fibonacci taught Europeans to write their larger numbers in thousands. A million is a thousand thousand, but a thousand thousand thousand is—well, just to be confusing, usages of ‘billion’ have differed from the beginning, with European billions being a thousand times greater than American billions. As in Iraq, the British government has parted company with Europe to side with America in counting 10⁹ as a billion, and 10¹² as a trillion.

Such large numbers have no immediate meaning. Billions, as for instance somehow showered on computer systems, are often confused with mere millions. Douglas Hofstadter has written a superb essay on the difficulty of visualising a billion of anything, quoting an American senator: ‘A billion here, a billion there, soon you’re talking real money.’ (That was long before the US casually dispersed nearly $12 billion in $100 bills as a detail of the ‘reconstruction’ of Iraq.) Perhaps Three is at the limit of what the mind can cope with directly. One, two, three, many … Gold, silver, bronze and also ran … The logic of number triplets, when solving 3 × 3 Sudoku, is near to the limit of direct perception.

Yet billions are real. Billions measure the age of the planet, the diversity of its life, its human population, and the spectrum of human individuals’ wealth and power. A billion heartbeats roughly measure the span of animal life, and a billion words a lifetime of consciousness. I think of this when the word ‘gig’ is casually used for the gigantic gigabyte storage now routinely available. The K, the meg and the gig give alternative words for a thousand, million and billion, and they tell a story relating Two and Three with Ten. The K for a thousand, the third power of ten, is convenient because it is close to 1024, the tenth power of two.

EASY: This is also why 2⁸⁰ ancestral lines going back to the year dot gives about a trillion trillion.

Less obvious is that the approximation 1024 ≈ 1000 is related to the problem of music, and this gives a starting point for both the harmonies and the cussedness of numbers.

The sound of music

The musical relationship between Two and Three is most obvious in rhythm. In the European tradition, three-rhythms for dance music predominated until the twentieth century. Then American marching bands became sexy with ragged-time four-four, and complex African-American rhythms soon changed popular music completely, leaving triple time as a wistful folk-music nostalgia. But there is another less obvious Three-ness in the distinctive European tradition of harmony. This is based on playing a chord of three notes at once, and is still very much alive. Which three notes? The answer depends on powers of two, three and five—with three as the dominant number.

Sound is carried through air by waves with frequencies in a range from about 20 to 20,000 per second. The shorter the wavelength, the higher the frequency and the higher the sound. Each musical sound comes from a length of air or solid medium vibrating with some fundamental frequency, or pitch, but that vibration is bound to contain higher frequencies at (some of) 2, 3, 4, 5 … times that basic pitch. This happens naturally through the way vibrations arise. The timbre of an instrument is largely determined by the mixture of harmonics it produces. The clarinet, for instance, has no even harmonics.

The harmonics determined by the multiples 2, 4, 8, 16 … are very special. They are so closely related to the fundamental vibration that they are interpreted by the brain as being the ‘same’ sound at a higher pitch. This gives to music a basic, natural connection with the powers of two, and so with the exponential growth as met with the proliferation of ancestors. In fact, some instruments display a sort of graph of exponential growth. It is striking in the shape of a harp, and even more explicit in a row of organ pipes, which gives a direct picture of the actual wavelengths.

The next most simple relationship is given by the third harmonic, and this gives the most closely related but different note, called the dominant. Beethoven’s Ninth Symphony begins and ends with a primeval sound, as heard in medieval European music, a chord of only two notes. These two notes are determined by the 3:2 ratio. The ratio is, confusingly, called a fifth, from counting the five notes CDEFG from C to G. Likewise the 5:4 ratio, based on the fifth harmonic, gives a (major) third, the term coming from the three notes counted CDE from C to E. The term ‘octave’ comes from counting the sequence CDEFGABC.

If further notes are to be combined harmoniously, what should their frequencies be? The principle of harmony is that their ratios should be based on simple numbers, so as to make as few clashes as possible between harmonics. But this is not as easy as it sounds. Imagine a string orchestra tuning up (actually changing the speed of sound in their strings by adjusting their tensions). Suppose the cellos have agreed upon and are sounding their low C, and the violins use it to tune their four strings to G, D, A, E in turn. To tune the G-string in perfect harmony with the cellos, the violinists locate the note which has exactly three times the frequency of the low C. Next, the D-string must have a frequency 3/2 times greater, the A-string then 3/2 of that and finally the E-string 3/2 greater again. The five notes C G D A E suffice for all the pentatonic melodies common to world music. But there is already a problem with the harmony, even for this basic scale. For the ratio of the final E note to the original C note is 81/8. This is almost, but not quite, 10. As a result, it will clash with the fifth harmonic in the cello’s vibration. It is slightly higher, or sharper. This clash contradicts the principle of resonance on which we based the allocation of notes.
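
A minimal Python sketch of the tuning arithmetic just described (the fractions follow the text; the code itself is only an illustration):

```python
from fractions import Fraction

# Tune upward by perfect fifths (ratio 3/2) from the cello's low C, taken as 1.
c = Fraction(1)
g = 3 * c               # the violin G: exactly the third harmonic of low C
d = g * Fraction(3, 2)  # 9/2
a = d * Fraction(3, 2)  # 27/4
e = a * Fraction(3, 2)  # 81/8

print(e)                # 81/8 = 10.125, slightly sharper than 10,
                        # which is the cello's fifth harmonic taken up an octave
print(float(e) / 10)    # 1.0125 -- the small clash described in the text
```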

Another problem faces a piano tuner who starts with all the C notes, then tunes each G to be in agreement with the third harmonic of the C. Similarly the tuning can go on to D, A, E, B, F# … ‘which brings us back to Doh’, as the song goes—but doesn’t. The sequence would only return to its starting point if twelve fifths were equal to seven octaves, equivalent to the non-equation

3¹² = 2¹⁹, 531441 = 524288.
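
The non-equation is easily checked; a two-line Python sketch, purely as an illustration:

```python
# Twelve perfect fifths versus seven octaves: (3/2)**12 against 2**7,
# which after clearing denominators is 3**12 against 2**19.
print(3**12, 2**19)        # 531441 524288 -- close, but not equal
print((3/2)**12 / 2**7)    # about 1.01364, the gap known as the Pythagorean comma
```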

MODERATE: This, together with the untruth 80 = 81, implies the untruth that 1024 = 1000. This untruth also shows why three major thirds do not quite equal an octave.

MODERATE: The non-equation 63 = 64 is equivalent to taking the seventh harmonic to be given by two perfect fourths.

Fortunately, the Western ear allows itself to be fooled by a compromise with truth. The standard fix needs mathematics beyond fractions, and we shall do this in Chapter 4. Meanwhile, our tour goes from tonality to tones of colour, where Three plays a completely different role.

The eye of the beholder

Light is an electromagnetic wave, and what we call its colour is a measure of its frequency. Light waves travel nearly a million times faster than sound, and the vibrations we perceive are also of much higher frequency than the vibrations of sound, around 500 trillion a second. The spectrum of the rainbow is the equivalent of about an octave. But light has no effect on the eye analogous to the harmonies of sound: two frequencies related by simple proportions do not have any effect on the eye at all. There is nothing in the way that light is created or absorbed that gives rise to an association between such frequencies. The reason for that, as for so much in life, lies in quantum mechanics.

The integer-based quantum does give rise to integer-based patterns, but they are of a quite different nature. It had been noticed in the 1880s that hydrogen emits light with frequencies proportional to (1/m² − 1/n²), for integers m and n. The explanation of this pattern in terms of quantum leaps was one of the first great successes of quantum mechanics. The leap between n = 3 and m = 2 corresponds to a distinctive red colour. Other vivid frequencies, the analogues of pure musical notes, are seen in neon signs and sodium lamps. But they create no harmonics, and no chords.
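
A small Python sketch of this pattern, using the approximate Rydberg constant for hydrogen (the constant, the rounding and the variable names are illustrative assumptions of mine):

```python
# Hydrogen frequencies proportional to (1/m**2 - 1/n**2).
R = 1.097e7          # Rydberg constant for hydrogen, per metre (approximate)
c = 2.998e8          # speed of light, metres per second

m, n = 2, 3          # the leap discussed in the text
inv_wavelength = R * (1 / m**2 - 1 / n**2)
wavelength = 1 / inv_wavelength
frequency = c * inv_wavelength

print(round(wavelength * 1e9), "nm")                    # about 656 nm: the red line
print(round(frequency / 1e12), "trillion per second")   # roughly 457 trillion
```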

Few people possess ‘perfect pitch’—the ability to recognise and name the pitch of a sound. Yet sighted people expect of each other the equivalent of perfect pitch for colour, giving detailed names to different frequency effects. These roughly agree, although words and their associations differ with culture and language. English is very clear that pink is pink, not light red; whilst light blue has no name of its own, as it does in Russian. Paint colour charts go wild with imaginative names for more subtle distinctions. These names, and the boundary lines between different colours, are bound to be somewhat arbitrary, because colour is continuous. The interesting thing is that it is a continuum in a three-dimensional space, even though the spectrum of the rainbow is determined by the one dimension of frequency.

The reason is that the retina has three kinds of cells reacting to light, based on three different molecules. These peak in three different parts of the spectrum: actually in blue, green and yellow-green areas. Perceived colour measures the intensities with which these different cells react. ‘Colour blindness’ arises for people in whom one of these three molecules is missing or not functioning properly, but it is an odd fact that all human eyes are rather blind to red, even though red is so clearly distinguished in language. It seems surprising that we distinguish red, orange and yellow so clearly, but do not differentiate those blue-green colours to which the receptors are more sensitive. This emphasises that perceived colour is an artefact of eye and brain processing.

The Three-ness of colour is entirely in the eye of the beholder, and this means that the brain accepts completely different combinations of light as being of the ‘same’ colour. Yellow may come from a sodium-yellow lamp which emits virtually a single frequency. It may come from a mixture of red and green dots on a television screen. It may come from pigments, as in van Gogh’s sunflowers, which absorb blues from white light falling on them, and reflect the rest. In fact there are infinitely many ways in which the ‘same’ colour may result. This has no analogue in sound.

The enormous world of photography and paint rests on exploiting and fooling the three-dimensional geometry of the eye’s colour space. It used to be said that the camera cannot lie, but everyone knows that a Photoshopper can tell a whopper. You may frown on manipulating photographs, but the very nature of photography, like painting, is that of deceiving the eye by creating a mixture of light frequencies which make the same impression on the human eye as some quite different mixture. (This may be one reason why animals don’t affect much interest in art, television or computers.) The question of which colours can be so imitated, and which cannot, is very sophisticated. The colour spots used on digital screens—which are not at all the same as the colours which mark the peak response of the eyes’ receptor cells—cannot in fact reproduce all colours. Somehow the brain forgives a great deal.

Websight

If you write HTML to create a webpage, and use the instructions for getting coloured backgrounds and spaces, you experience a direct introduction to colour three-dimensionality. A colour is coded in HTML by three numbers in the range from 0 to 255, specifying the red, green and blue intensities (roughly as they are perceived by the eye). For instance, black is (0, 0, 0), white is (255, 255, 255), grey is (128, 128, 128), yellow is (255, 255, 0) and Hot Pink is (255, 105, 180). The colour-space is thus a cube. The discrete values mean that it is not a true continuous cube, but this does not affect the essential point about three-dimensionality.

The reason for the number 255 is that HTML uses these numbers as written in base 16 (‘hexadecimal’ or ‘hex’) notation. Base 16 needs some extra numerals to represent the numbers from ten to fifteen, and the convention is to use ABCDEF for these, so the numerals are 0123456789ABCDEF. (The same convention is used in 4 × 4 Sudoku puzzles.) The numbers from 0 to 255 are then simply the hex numbers from 00 to FF.
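
A minimal Python sketch of this hex coding (the helper function html_hex is an illustrative name of mine, not an HTML feature):

```python
# Writing an RGB triple as an HTML hex code.
def html_hex(r, g, b):
    # Each intensity from 0..255 becomes two hex digits, 00..FF.
    return "#{:02X}{:02X}{:02X}".format(r, g, b)

print(html_hex(255, 255, 0))    # '#FFFF00' -- yellow
print(html_hex(128, 128, 128))  # '#808080' -- grey
print(int("FF", 16))            # 255: why the range stops at 255
```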

MODERATE: Hot Pink, by any name, would be the same. Show it is represented by FF69B4. How many different colours are there in the colour cube?

Software for editing images may represent this cube of colour in other ways. Instead of describing a colour by values for redness, greenness and blueness, it may be described by brightness, saturation and hue. Brightness measures the sum of the three colour intensities. If you restrict to values which add up to 255, you factor out brightness and reduce the possibilities to a colour triangle. This restriction is equivalent to slicing through the cube at the three vertices nearest to 0. The resulting triangle has pure red, green and blue at the vertices. Saturation then measures distance from the grey centre of this triangle, and hue the angle round the triangle. Experience shows that navigating this three-dimensional space of possibilities is highly non-trivial.
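
A rough Python sketch of this description, taking brightness as the sum, saturation as distance from the grey centre of the triangle, and hue as the angle round it (the details of the angle calculation are my own assumption; real image-editing software uses its own conventions):

```python
import math

def brightness_saturation_hue(r, g, b):
    brightness = r + g + b
    if brightness == 0:
        return 0, 0.0, 0.0                 # black: saturation and hue are undefined
    # Factor out brightness: rescale onto the triangle r + g + b = 255.
    p = [255 * x / brightness for x in (r, g, b)]
    centre = [85.0, 85.0, 85.0]            # the grey centre of the triangle
    d = [pi - ci for pi, ci in zip(p, centre)]
    saturation = math.sqrt(sum(x * x for x in d))
    # Hue: the angle of the point around the centre, measured within
    # the plane of the triangle using two directions lying in it.
    x = (d[0] - d[1]) / math.sqrt(2)
    y = (d[0] + d[1] - 2 * d[2]) / math.sqrt(6)
    hue = math.degrees(math.atan2(y, x)) % 360
    return brightness, saturation, hue

print(brightness_saturation_hue(255, 0, 0))      # pure red: maximal saturation
print(brightness_saturation_hue(128, 128, 128))  # grey: saturation 0
```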

DIFFICULT: Imagine holding the colour cube with white at the top and black directly beneath it. If you divide it exactly in two by a horizontal cut, what shape does the cut surface make, and what colours appear on it?

Fooling the eye with colour printing is a more difficult problem, and the standard ‘CMYK’ solution brings in more advanced geometry. It is the dual, or negative, of the problem of colour on luminescent screens. To produce white on a white page means printing nothing at all, leaving the white light that falls on it to be reflected. This is the analogue of producing black on a screen by doing nothing. Every other colour effect involves a similar reversal. To produce blue on a printed page means that the inks applied must absorb the red and green from the white, and reflect the blue. The red-absorbing ink is cyan and the green-absorbing ink is magenta. If both of these are printed in tiny dots, they do indeed fool the eye into thinking that it sees blue. The third of these negative colours is supplied by yellow ink, which absorbs blue light. Cyan, magenta and yellow are the C, M and Y dimensions. In principle, ink intensities drawn from just these three dimensions should be able to imitate all colours. However, the physics and chemistry of inks on paper means that a satisfactory black is not obtained by the printing of all three inks at full intensity. So a further black ink is used as well. This makes a fourth dimension available. Often, however, the space of colours printed is actually restricted to a three-dimensional hypersurface within that four-dimensional space. The location of that hypersurface may be decided by software converting an RGB image into CMYK space, dictating exactly when the black K will be used instead of a sum of C + M + Y.
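
A deliberately simplified Python sketch of one such conversion rule, pulling out as much black K as possible (this is a common textbook simplification of my choosing, not the profile-based conversion that professional software uses):

```python
def rgb_to_cmyk(r, g, b):
    # Take the negative of each channel: cyan absorbs red, magenta absorbs
    # green, yellow absorbs blue.
    c, m, y = 1 - r / 255, 1 - g / 255, 1 - b / 255
    k = min(c, m, y)                     # shared darkness is given to black ink
    if k == 1:
        return 0.0, 0.0, 0.0, 1.0        # pure black: K alone
    return (c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k

print(rgb_to_cmyk(0, 0, 255))   # blue: full cyan and magenta, no yellow, no black
print(rgb_to_cmyk(0, 0, 0))     # black: K only, rather than C + M + Y at full strength
```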

If we had ultra-violet sensors like bees, colour space itself would be four-dimensional; unfortunately, we cannot read the come-on signals of the flowers, which are designed for bees to see. Many birds have four-colour sight, and we have a very limited idea of what the world looks like to them, or what they see in us.

A model of life

Of course, physical space, the space that estate agents describe as ‘location, location, location’, is also three-dimensional. A photograph or representational painting is, however, two-dimensional. Both of these imitate what the eye sees, which is also (a patch of) the two-dimensional sphere of sight. The standard method of perspective is a transformation of three-dimensional space into a plane image, keeping straight lines straight. This is also roughly what a camera does. But it is an artifice. It is only ‘realistic’ from one vantage point.

It is difficult to imagine the power of the trompe-l’oeil effect that the first fully worked perspective drawings must have had on their viewers. (Masaccio’s Trinità in Santa Maria Novella, Florence, around 1427, is a famous example.) But perspective only works satisfactorily for comparatively small fields of view, and never allows what the eye does in reality, which is to look right round. On panoramic photographs, straight lines on buildings become curved lines. These contradict the principle of perspective drawing but are more ‘realistic’ for the sphere of eyesight. Stand right in front of a long wall and ask if it looks ‘really’ straight.

Pictures and photographs are mappings from a three-dimensional space into a two-dimensional image space. Modern mathematics has run roughly parallel to modern art in probing the question of what representation means, and extending the idea to mappings involving spaces of any number of dimensions. The conversion between RGB and CMYK is a model of the way that the geometry of higher dimensions emerges as a natural development.

Mathematical spaces can be thought of as general spaces of possibilities, with the space of possible colours as a model. The common expression ‘another dimension’ is quite correctly used to mean bringing some qualitatively different consideration into discussion. Dimensions may also be referred to as parameters or degrees of freedom.

As an example, the coolness graph for boys’ trousers in Chapter 1 was an over-simplification of the fashion problem, by reducing it to just one dimension. In reality, the great challenge for a young man is to decide not only how much bottom to show at the top, but how much of the thigh to hide. Two dimensions are needed for these parameters alone. Fashion is always changing in time, and showing time as another dimension makes this problem of dress design, as urged by the Department of Education, into one of studying a three-dimensional hypersurface in a four-dimensional space.

GENTLE: Assuming that your life-class model sits still, what is the dimension of the space of possible camera shots that you can obtain? If you are also free to move a spotlight, how many extra dimensions does this add to the space of possibilities from which you select your composition?

Although we think of living in a world of three dimensions, the actual space of possibilities in which we have to operate is generally a space of many more dimensions. Consider a single shot in a ball game. At one instant, the position of a ball is specified by three dimensions. But its orientation gives another three—the same three dimensions as we counted in Chapter 2 as the space of rotations. The motion of the ball is then given a further three for velocity and three for its spin. (It is the spin which, as people say, gives a whole new dimension to the professional game.) The problem of batting against such a ball requires predicting how the ball will continue to move in these twelve dimensions, and then choosing a point in a similar twelve-dimensional bat space to optimise the outcome. In a 2006 cricket match between England and Pakistan, the allegation of ‘tampering’ with the ball to break its symmetry in one of these dimensions, thus complicating the prediction of its motion, turned into a major dispute.
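
As an illustration of this counting, a tiny Python sketch (the class and field names are my own):

```python
from dataclasses import dataclass

@dataclass
class BallState:
    position: tuple     # 3 dimensions: where the ball is
    orientation: tuple  # 3 dimensions: how it is turned (the space of rotations)
    velocity: tuple     # 3 dimensions: how fast it moves, and in which direction
    spin: tuple         # 3 dimensions: the axis and rate of its spin

    def dimension(self):
        return sum(len(part) for part in
                   (self.position, self.orientation, self.velocity, self.spin))

ball = BallState((0, 0, 1), (0, 0, 0), (30, 0, 2), (0, 40, 0))
print(ball.dimension())   # 12
```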

How can the even greater problems of economics and life be reduced to manageable dimensions? One fascinating development lies in simplifying them to spaces of strategies with a small number of parameters. This gives the model of life known as Game Theory. The simplest kind of game is a zero-sum two-person game, in which a loss to one player is a gain to the other. As it happens, the theory of such games also leads to an example of mathematics in which the Hardy/Hogben dichotomy evaporates. In 1940 Hardy quoted Hogben on ‘planning’ with distaste, but the practical application of mathematics to strategic planning in the 1940s turned out to require a beautiful example of abstract many-dimensional geometry. Applied to zero-sum two-person games, it involves an elegant theorem using a duality between such spaces, to show how the strategies of the two players are locked into an embrace.

A famous example of such a game, rather easier than cricket to understand, is that of Scissors, Paper, and a third which is Stone or Rock depending on your side of the Atlantic. Each player has a three-dimensional strategy space, and the solution for optimal play is for each to choose the central point of that space, which means playing the three options equally and at random. In this case it is fairly obvious that a player can win, and can only win, against an opponent whose choices fail to be random and equiprobable. Mathematical game theory shows how to solve much more general and less obvious strategic questions—though the solution opens up yet another question, of how to ensure randomness.
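
A small Python sketch of the claim (the biased opponent’s probabilities are an invented example of mine):

```python
from fractions import Fraction

OPTIONS = ["scissors", "paper", "stone"]
BEATS = {"scissors": "paper", "paper": "stone", "stone": "scissors"}  # key beats value

def payoff(mine, theirs):
    if mine == theirs:
        return 0
    return 1 if BEATS[mine] == theirs else -1

def expected_payoff(my_mix, their_mix):
    # Average the payoff over both players' probability mixtures.
    return sum(p * q * payoff(m, t)
               for m, p in my_mix.items()
               for t, q in their_mix.items())

uniform = {o: Fraction(1, 3) for o in OPTIONS}
biased = {"scissors": Fraction(1, 2), "paper": Fraction(3, 10), "stone": Fraction(1, 5)}
all_stone = {"scissors": 0, "paper": 0, "stone": 1}

print(expected_payoff(uniform, biased))    # 0: uniform random play neither wins nor loses on average
print(expected_payoff(all_stone, biased))  # 1/5: always playing stone exploits a scissors-heavy opponent
```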

Trials and tribulations

It is common to applaud ‘win-win situations’ which arise from games which are not zero-sum. Unfortunately, turning now from optimism to pessimism, they can also be lose-lose situations. If you are a billionaire, you can probably make the rules to suit yourself, but those on the receiving end may be faced with less convenient constraints. The classic example is that of ‘prisoner’s dilemma’, a situation which could well be imagined in a Caribbean setting. Suppose two captured prisoners are interrogated separately. Both have a choice: to inform against the other, or not. They are both best off if both refuse to inform. But the direst outcome for a prisoner is when, having refused to inform, he finds himself incriminated by the other. The only way to avoid this worst scenario is to inform; the same logic applies to both and so both will incriminate each other. Neither can choose the strategy that would be to their collective advantage.
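
A sketch of the logic with illustrative prison sentences (the particular numbers are assumptions of mine, chosen only to respect the orderings described above):

```python
# Years in prison for (my choice, their choice): smaller is better for me.
YEARS = {
    ("stay silent", "stay silent"): (1, 1),   # both refuse to inform: light sentences
    ("stay silent", "inform"):      (10, 0),  # the direst outcome for the silent one
    ("inform", "stay silent"):      (0, 10),
    ("inform", "inform"):           (6, 6),   # both inform: worse than mutual silence
}

for their_choice in ("stay silent", "inform"):
    better = min(("stay silent", "inform"),
                 key=lambda mine: YEARS[(mine, their_choice)][0])
    print(f"If the other prisoner will {their_choice}, it is better to {better}.")
# Whatever the other does, informing is better for the individual,
# even though mutual silence is better for the pair.
```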

Refusing to inform is called ‘co-operating’ and informing is called ‘defecting’. Of course, from the point of view of the interrogator, the ‘co-operative’ prisoner is the one who can be made to squeal. This shows why a non-zero-sum two-player game is equivalent to a zero-sum game for three players, in which the players can make and break alliances. (It is zero-sum because one may assume, reasonably enough, that the more painful the outcome for the prisoners, the happier the interrogator.)

The best-known work of mathematician John Nash, of Beautiful Mind fame, established that for any such game, with three or more players, there would be an ‘equilibrium’ situation in which no player could gain by changing strategy. This, however, does not resolve the problem of how to bring about collective betterment in the face of individual risk. One approach is to consider not the strategy for a one-off game, but the strategy for repeated trials in which there is opportunity to learn from an opponent’s previous form and react accordingly. Extensive experiments have shown that a ‘tit for tat’ strategy establishes a much better long-run outcome. This is based on co-operating, but defecting if the opponent has just defected. Such an extended scenario does make the biological evolution of altruism more comprehensible.
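
A minimal Python sketch of such repeated trials, with conventional illustrative payoffs (the numbers 5, 3, 1, 0 are a common choice of mine, not taken from the text):

```python
# Points gained per round for (my move, their move): C = co-operate, D = defect.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history):
    return "C" if not history else history[-1]   # copy the opponent's last move

def always_defect(history):
    return "D"

def play(player_a, player_b, rounds=50):
    score_a = score_b = 0
    seen_by_a, seen_by_b = [], []                # what each has seen the other do
    for _ in range(rounds):
        a, b = player_a(seen_by_a), player_b(seen_by_b)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        seen_by_a.append(b)
        seen_by_b.append(a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))     # (150, 150): steady mutual co-operation
print(play(tit_for_tat, always_defect))   # (49, 54): tit for tat loses once, then matches defection
```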

Douglas Hofstadter wrote extensively on this in Scientific American and then put his readers to a practical test. He offered a prize in a Luring Lottery; it would be a million dollars, divided by the number of entries. Individuals would be allowed to submit multiple entries. This created a many-person game, the players being the many readers of Scientific American. The best outcome (or from Hofstadter’s point of view, as prize donor, the worst) would be if there was just one entry. No entries, or very many entries, would be equally bad (or good). How could the millions of readers co-operate so as to submit a single entry? Hofstadter had a super-rational plan for how this could be achieved, but it only needed one defecting reader to wreck it by putting in many entries. He need not have worried that his plan would be followed: he was flooded with entries, as readers vied to write down the largest number they possibly could.

In practice, people do co-operate in many ways, despite the lack of individual benefit. (There is, for example, no rational point in any individual troubling to vote in a large-scale election, as the chances of the election being decided by one vote are extremely slim.) But co-operation is fragile, as gun control and gang revenge questions indicate. The nuclear non-proliferation treaty, like other self-denying ordinances (the ABM Treaty and the laws of war generally), is similarly unstable. The problem of mitigating climate change is also of this nature, and even more complex because it involves a vast number of players.

Co-operating in this game means making some sacrifice for the sake of those who are not just out of sight, but as yet unborn. But worse, there are plenty of players in the global game who will defect with pleasure, because a warmer climate suits them, because they believe it will advance the End Times, or because they don’t give a damn. Many more can at least live happily with predictions made for their lifetime, even if they don’t believe in them at all. Science needs a web of trust, quite unlike mathematics, where in principle you can see everything and think for yourself. If a professor of meteorology in a distinguished institution pours public scorn on the predictions of climate change, then inevitably that trust will be diminished.

On the governmental level, the year 2007 seems to have marked a decisive point. The British Conservative party has professed that ‘the politics must fit the science and not the other way round’. Even the United States administration is no longer asserting disbelief in the scientific consensus. But science is not easy to fit in with. It will be hard to overcome that kind of ‘realism’, which means boasting of outdoing others in ever-growing wasteful consumption. The largest industrial players are openly playing prisoner’s dilemma, refusing to co-operate lest competitors take advantage.

The world may gain advantage from co-operative scientific research, but there is not necessarily any advantage to the individual who does it. Indeed, such individuals may well have to sacrifice the k’s, megs and gigs that could have been earned by applying their talents in the ‘real world’. Nor will the benefit of fundamental research necessarily go to the country which funds it. Rather, as the great mathematician David Hilbert said, in the aftermath of the First World War, ‘for mathematics, the whole world is a single country’. That is the climate of reason, and the climate of the planet also needs genuinely united nations rather than competing warlords. A splendid Victorian hymn starts with Greenland’s icy mountains and Afric’s coral strand. When in pessimistic mood, I suspect that both may disappear before any politics can fit the science.

Page three news

Politicians speak of being judged by history, although historians aren’t judges and, when they do make judgments, disagree as much as anyone else. Posterity is a forgetful and feckless beneficiary, and there may not be one to thank the present for much. Posterity has famously never done anything in return: this is a sad aspect to the asymmetry of time. But mathematics is like time travelling, and can make some names live far outside their physical lives. A hundred years ago in old Europe, Raymond Poincaré was the leading statesman of France, and her president in the First World War. But it was his cousin Henri Poincaré who makes news today, though news of a kind that the world doesn’t find easy to absorb.

In 1904 Poincaré had published a conjecture: that every simply connected closed 3-manifold is homeomorphic to the 3-sphere. As Alan Turing said, ‘Conjectures are of great importance since they suggest useful lines of research.’ And so it did for a century, stimulating new understanding of many-dimensional spaces, but remaining an unanswered question. The significance of Three is that this three-dimensional problem is actually harder than its analogue in any other number of dimensions. In 2000 it was adopted as a Millennium problem by the Clay Institute, offering a million-dollar prize. This question now appears to have been settled by a sequence of difficult ideas, culminating in the work of an individualist Russian, Grigory Perelman. This has shown that Poincaré’s conjecture is correct. The prize money made it news, as it was intended to, but explaining the nature of the achievement brought accessibility to a crisis. The world’s press rose to the occasion pretty well, though there was spluttering hilarity on the main BBC news programme when Poincaré’s conjecture was read out.

What does it mean? A manifold is a space created by joining together patches, rather as fields of view are welded into the whole sphere of sight. Closed means having no edge. Simple-connectedness is a criterion of having no hole: it means that any circle drawn in the manifold can be continuously contracted to a point. A spherical surface is simply connected. The surface of a teacup, in contrast, has a hole: you can draw a circle on the handle which cannot be contracted. The property of having no hole is also shared by any stretched or squashed sphere. This is the significance of the word homeomorphic: it gives an exact definition of stretching and squashing. The mathematical theory of topology makes all these concepts precise, and it can be asserted that surfaces homeomorphic to the sphere are the only two-dimensional manifolds that have neither edge nor hole. Poincaré conjectured that the analogous statement was true in three dimensions.

We have already met an example of a closed three-dimensional manifold: it is the space of rotations described in Chapter 2. The shoe-string experiment relates to Poincaré’s conjecture rather neatly. The fact that a single rotation leads to a twisting up is equivalent to the fact that the space of rotations is not simply connected. That is consistent with the fact that the space of rotations is not the 3-sphere, but only half of it.

I was slightly surprised to hear a media-savvy mathematician explain that the question concerned what shape the universe could take, presumably thinking of its three spatial dimensions. This understates the scope of Poincaré’s problem. The manifold should be thought of as a completely general conception of what three parameters could encompass, as a space of possibilities. As a physical application of the kind of research Poincaré inspired, the inner space of quantum particles and forces is more pertinent.

Poincaré’s work echoes to the honour of France and her Third Republic, but was not done for profit. The same applies to another famous problem Poincaré attacked, also featuring the number Three. This is the three-body problem.

Climates and comets

The bodies that Poincaré was concerned with were of the heavenly kind. The classical two-body problem arises for double stars, attracting each other according to Newton’s Law of gravitation. At first sight each star is characterised by six numbers which say where it is, and how it is moving. That would seem to make the problem one of a twelve-dimensional space of possibilities. But the symmetries of the problem reduce this to a total dimension of just two. In this two-dimensional space each possible story of the stars is traced out as a one-dimensional track. In fact, by making use of the idea of a space of possibilities, Newton’s laws become elegant geometrical rules dictating where these tracks must go. The whole ensemble of possible stories can be visualised as a system of tracks neatly covering a surface. Stability is a natural feature in this two-dimensional setting.

But for three or more bodies the situation is entirely different, with six extra dimensions of possibilities for each body. In two dimensions, there is not much scope for what tracks can do: it is rather like brushing short hair on a two-dimensional head. In three or more dimensions, beehives and Mohicans give a hint of how Newton’s laws can make them wind round in far more complex ways, more strangely than punk dreams. They may give the appearance of stability, but only as a temporary approximation.

Poincaré could not pursue detailed calculation of these complicated effects, but gained enormous insight from the general principle of treating the behaviour of a physical system as a track in a possibility-space of many dimensions. In the 1950s, when computers first became available, people tried to model the atmosphere for weather prediction. They effectively rediscovered what Poincaré had pointed out, and since then ‘chaos’ has been the subject of an enormous amount of new research.

Does chaos render climate prediction a forlorn hope? Are scientists reduced to waving their hands over the ‘butterfly effect’, unable to do better than fortune-tellers or astrologers? By the 1970s, as chaos became popular, people were indeed saying that any such complex system was hopelessly unpredictable. Now the prevailing view is that by trying out predictions based on many slightly differing scenarios, and leaving answers in terms of probabilities, it is possible to make reliable statements about climate change. Another important factor is that the average temperatures involved in climate do not depend on the detailed butterfly effects of daily weather patterns. Even so, the work stemming from Poincaré’s discovery shows that a system may shift quite suddenly to a new global pattern, with no obligation to return to its original state. If the human species wipes itself out, then after a few thousand years CO₂ might return to pre-industrial levels, but with the Arctic ice-free and the deep ocean currents like the Gulf Stream permanently changed.
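
A toy Python sketch of these two points, using the logistic map as a stand-in for any chaotic system (the map and its parameter are illustrative choices of mine, nothing to do with real climate models):

```python
def trajectory(x, steps=200, r=3.9):
    # Iterate the chaotic logistic map x -> r * x * (1 - x).
    values = []
    for _ in range(steps):
        x = r * x * (1 - x)
        values.append(x)
    return values

a = trajectory(0.500000)
b = trajectory(0.500001)            # a 'butterfly' change in the sixth decimal place
print(abs(a[50] - b[50]))           # typically large: detailed prediction has broken down
print(sum(a) / len(a), sum(b) / len(b))   # yet the long-run averages remain similar
```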

Newton, applying his theory to the two-body problem, could explain the basic fact of the elliptical planetary orbits. But finer detail, including the effect of the planets on each other, has required much more modern insights, and has stimulated further centuries of mathematical development. Poincaré’s analysis arose from the very difficult problem of predicting the exact motion of the Moon. Asteroids show chaotic behaviour. Enormous computer calculations are now employed, and it is difficult to recover the excitement of the seventeenth century, when the solar system became the test-bed for mathematical prediction. The periodic return of Halley’s comet, whose successful prediction was one great early achievement, is a reminder of that original drama of the skies.

In The Decline and Fall of the Roman Empire, Edward Gibbon wrote an unusual passage reflecting, from the vantage point of the eighteenth century, on a comet visible in the sixth. His narrative suddenly adopted the out-of-time standpoint of science, reflecting on the return of the comet at 575-year intervals throughout human history, the latest having occurred in his own ‘enlightened age’. Gibbon explained that the ‘mathematical science of Bernoulli, Newton, and Halley investigated the laws of its revolutions’, and that the next return was predicted for 2255. Gibbon imagined ‘astronomers of some future capital in the Siberian or American wilderness’ verifying the prediction.

The passage is unfortunately based on a misidentification of the comet of 1680 with earlier comets, with the 575-year period based on jumping to wrong conclusions, and not at all on what Newton and Halley had found out. In contrast, Gibbon’s political prediction (so strikingly eclipsing his own Europe, then approaching its zenith) reads extraordinarily well. It would have been almost spot-on for the 1960s. It is now dated, not only politically, but because astronomers are no longer mere observers, but can design space probes to intercept comets. His bold vision is a salutary reminder that climate predictions which look ahead to 2100 are still shortsighted. That year is nearer to us than Poincaré himself. A strand of mathematical thought may easily take a century to unwind, as Poincaré’s conjecture shows.

One of the most fascinating questions about climate is to what extent past human history has already been profoundly affected by climate change—a subject not treated by Gibbon—and to what extent it has, through the invention of agriculture, already caused climate change. The interrelation of humanity and the atmosphere lies somewhere in between the simplicity of comets, and the complexity of history, the subject that Gibbon described as the crimes and follies of mankind.

Heart of the matter

For the really, really, really pessimistic, Three brings little cheer. It is the number of jealousy, and the world of the arts supplies copious three-body problem pages. Three is the number of dividing and ruling. Three is the number of fighting an ally over how best to oppose an enemy. Three is the number of triangulation, which seems to mean doing what your enemies want, while leaving your friends with nowhere else to go.

That masterpiece of the worst-case scenario, Orwell’s Nineteen Eighty-Four, is striking in its appeal to the language of numbers: it has a number for its title, and famously begins with a thirteen. (The brilliant ‘more equal than others’ of Animal Farm had already shown Orwell’s penchant for a mathematical imagery of truth.) Three-ness enters deeply into its plot, which is a prisoner’s dilemma, though the depth of pessimism is such that there is no dilemma. Winston Smith knows he will betray Julia and vice versa, and that both will co-operate with Oceania. Orwell gave a sophisticated picture of superpower politics, as a three-person game with unpredictable shifts, but hidden co-operation.

His analysis was based on the sudden and totally unprincipled pact between Germany and the USSR in 1939, reversed equally suddenly in June 1941, with alliances again shifting in 1945 when the American nuclear bomb developed for use against Germany became one which was, in effect, aimed at the USSR. Quantum mechanics is never far away. Robert Oppenheimer, the mathematical physicist in charge of developing the atomic bomb in 1945, was the same Oppenheimer accused and discredited as a ‘security risk’ in 1954. In the Cold War context, technical arguments about the hydrogen bomb were inextricably involved with individual and political conflicts.

The nuclear test of July 1945 was code-named Trinity. No one knew then that there was a deeper level to the nucleus than was touched by that explosion, and that the number Three would turn out to be vital to it. In contrast to the growth of nuclear weapons from that point, now far bigger than those of 1945 and still proliferating, the post-war world also resumed the amazing progress of global co-operation on inner nuclear space, peaceful, beautiful, creative and hugely surprising—and almost totally incomprehensible to citizens of the planet.

At the heart of matter

What keeps the nucleus together? The positive electric charges of the protons cause them to repel each other vigorously. So there must exist some other attractive force between the protons and the neutrons in the nucleus, to counteract that repulsion. This force was called the ‘nuclear force’ and by the 1960s it could be given a working description. Particles called ‘mesons’ were identified as its carriers. But as accelerators became powerful enough for collisions to penetrate single protons and neutrons, something quite unexpected was found.

There was evidence of three constituent parts inside each of these nuclear particles. The name ‘quark’ was invented by physicist Murray Gell-Mann for obscure reasons—the line ‘Three quarks for Muster Mark!’ from Joyce’s Finnegans Wake has often been quoted. The situation was about as clear as Joyce’s prose, and just to add to the confusion, there were two different senses in which ‘three quarks’ were involved. There seemed to be three quarks inside a nuclear particle, but there were also three kinds of quark—in modern terms, three ‘flavours’ of quark were then known. The proton had two ‘up-quarks’ and a ‘down-quark’ and the neutron had two ‘down-quarks’ and an ‘up-quark’. The third flavour was the ‘strange-quark’, and this made excellent new sense of a zoo of unstable ‘strange’ particles that had been observed since the 1950s.

But the presence of three quarks in a nuclear particle made no sense at all, and for years there was much scepticism over whether they could be genuine entities. Quarks would necessarily be fermions, and the ‘exclusion principle’ dictated that there was a limit to the number that could be inside the nuclear particle. There would be room for two of the same kind, one of each spin, but not three. Yet one particular nuclear particle (the Δ⁺⁺) seemed to consist of three up-quarks. This was apparently impossible.

The way out of this was to propose that the quarks differed in some other, hitherto invisible quality. This unknown quality was given the name ‘colour’: nothing to do with light, but simply a reference to the three-dimensionality of human colour space. As a metaphor, the three quarks in a nuclear particle could be ‘red’, ‘blue’ and ‘green’, and then the overall particle would be an uncoloured ‘white’. This step into the unknown turned out to explain far more than the original puzzle.

With electric charge, the duality of positive and negative lies within a single dimension: charges add up and cancel like numbers on a line. The three dimensions of quark colour give a naturally analogous theory, in which the ‘colour charge’ generates a new three-dimensional ‘colour force’. Although the mathematical principle was very clear and simple, it took over 30 years to see the force of its predictions and check them against observations. One reason was that the resulting force was completely different from what anyone expected.

Electric force becomes stronger and stronger as two charges approach. Colour force is the other way round. It acts like a spring, becoming stronger as two colour charges separate, and diminishing to nothing as they approach. When this was understood, it explained why no quarks were ever seen alone: they were always ‘confined’ by colour springs. Strictly speaking, no complete proof of these properties of the colour force has yet been given, and that is the subject of another Millennium Prize problem. However, enormous computer calculations, especially since 2000, have made it all consistently credible. The subject is in the tradition of Oppenheimer, and the long calculations necessary for the atomic bomb, but also in debt to Poincaré, not so much because it concerns three bodies as because it involves deep understanding of many-dimensional spaces. It brings together the most abstract mathematics and the most sophisticated high-energy experiments. Fortunately, mathematics comes particularly cheap.

The older observations of the nuclear force have gradually fallen into place. Quarks have their anti-particles, and the mesons turned out to be identifiable as quark-antiquark pairs. The large masses of the nuclear particles (like the proton, 1836 times heavier than the electron) can be explained in terms of more primitive quark parameters. What was originally called the nuclear force, between protons and neutrons, can now be seen as a second-order effect of the colour-force acting on the constituent quarks. That second-order effect is still strong enough to bind about 240 nucleons into the nucleus of a large atom. As a rough analogy, an atom has no overall electric charge, but it still interacts via electromagnetic forces with other atoms. Indeed nuclear physics is now in a somewhat similar state to that of chemistry 80 years ago, when its atomic numbers and weights could at last be explained in terms of smaller and simpler things. Foremost amongst those smaller and simpler things is the nuclear number Three.

The 1960s picture of three different flavours of quark—up, down and strange—has turned out to be incomplete. There are (apparently) just six, with the remaining flavours now bearing the vaguely risqué names of ‘charm’, ‘top’ and ‘bottom’ (‘truth’ and ‘beauty’ were wittier suggestions for the last two, but seem to have lost out). Yet the number Three is still central in this picture of flavour, because the six are arranged in three ‘generations’ of pairs. This fundamental Three-ness is not understood at all, except in that there are reasons why there should be the same number of generations as are found in the non-nuclear (electron-like) particles. The weak force is now seen as penetrating inside the nucleon and changing the flavour of its quarks. It is still the weirdest force, muddling up the three generations of quarks and the three generations of electrons. Although based on the numbers One, Two, Three, there is a complexity in these fundamental forces that still eludes simple explanation.

Quantum chromodynamics, as the colour force is called, has ended the tyranny of Two. Polarities, lines of force, attraction of opposites, give way to a new troilistic music of unjealous threesomes. This is a surprise; in fact the whole thing is a surprise. In the 1960s it would have seemed far too much to hope that the interior of the proton could be explained and calculated accurately by a three-dimensional analogue of electromagnetism.

Robert Oppenheimer, trapped in the either/or logic of the Cold War, died while this nuclear revelation was in its early and tentative stage. As so often, it took too long for the timescale of individual human life. But Oppenheimer had made another, less famous contribution. On that three-based date, 1 September 1939, as Germany attacked Poland, he and his collaborator published a paper: On continued gravitational contraction. It was one of the first to take seriously the possibility that a star might ‘close itself off from any communication with a distant observer; only its gravitational field persists’. Too late for him, black holes were only taken seriously 30 years later. They demanded a better understanding of—