Chapter 11

The Biggest Number of All

The trouble with integers is that we have examined only the very small ones. Maybe all the exciting stuff happens at really big numbers, ones we can’t even begin to think about in any very definite way.

– Ronald L. Graham

Ask a child what’s the biggest number they can think of and the answer often runs along the lines of ‘50 thousand million billion trillion trillion … ’ until they run out of breath, with the odd nebulous ‘kazillion’ or ‘bazillion’ thrown in for good measure. Such numbers can certainly be big by everyday standards – maybe more than all the living things on Earth or all the stars in the universe. But they’re peanuts compared with the kinds of mind-bogglingly huge numbers that mathematicians can come up with. Even if you were determined and foolish enough to spend your entire adult waking life saying ‘trillion’ every second, the number you’d have named in the end would be unbelievably tiny compared to monsters of the numerical cosmos we’re about to meet, such as Graham’s number, TREE(3), and the seriously colossal Rayo’s number.

One of the first people known to have thought methodically about very large numbers was Archimedes, born in Syracuse, Sicily, around 287 bc, and widely considered to be the greatest mathematician of ancient times and one of the greatest in history. He wondered how many grains of sand there were in the world and, beyond that, how many could be crammed into the whole of space, which the ancient Greeks believed stretched out as far as a sphere holding what they called the fixed stars (in other words, the stars in the night sky as distinct from the planets). His treatise The Sand Reckoner begins:

There are some, King Gelon, who think that the number of the sand is infinite in multitude; and I mean by the sand not only that which exists about Syracuse and the rest of Sicily but also that which is found in every region whether inhabited or uninhabited. Again there are some who, without regarding it as infinite, yet think that no number has been named which is great enough to exceed its multitude.

To pave the way for his cosmos-wide sand estimate, Archimedes set about extending the system available at the time for naming big numbers – the key challenge that has faced all mathematicians ever since who have tried to define larger and larger integers. The Greeks referred to 10,000 as murious, which translates as ‘uncountable’ and which the Romans called myriad. As the starting point for his venture into the realm of truly huge numbers, Archimedes used a ‘myriad myriad’, that is, 100,000,000, or, in modern exponential notation, 10^8 – a number far bigger than anything for which the Greeks had a practical purpose. Any number up to a myriad myriad Archimedes labelled as being of the ‘first order’. Numbers up to a myriad myriad times a myriad myriad (1 followed by 16 zeros or 10^16) he said were of the ‘second order’, and so he went on in this way to the third order and the fourth, each order in his scheme being a myriad myriad times larger than the numbers of the previous order. Eventually, he reached numbers of the myriad myriadth order, in other words 10^8 multiplied by itself 10^8 times, or 10^8 raised to the power 10^8. By this process, Archimedes was able to describe numbers up to those with 800,000,000 digits. All these numbers he defined as belonging to the ‘first period’. The number 10^800,000,000 itself he took to be the starting point for the second period, and then he began the process all over again. He defined orders of the second period by the same method as before, each new order being a myriad myriad times greater than the last, until, at the end of the myriad myriadth period, he’d reached the colossal value of 10^80,000,000,000,000,000, or a myriad myriad raised to the power of a myriad myriad times a myriad myriad.
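Because each of Archimedes’ orders is a fixed power of 10, the whole scheme can be mirrored by tracking nothing but base-10 exponents. A minimal sketch in Python (the names are mine, purely for illustration):

```python
# Archimedes' scheme from The Sand Reckoner, tracked via base-10
# exponents so the astronomically large values stay manageable.

MYRIAD_MYRIAD_EXP = 8  # a myriad myriad is 100,000,000 = 10^8

def order_limit_exp(k):
    """Exponent of the upper limit of the k-th order of the first
    period: (10^8)^k = 10^(8k)."""
    return MYRIAD_MYRIAD_EXP * k

assert order_limit_exp(1) == 8             # first order: up to 10^8
assert order_limit_exp(2) == 16            # second order: up to 10^16
# The first period ends at the myriad-myriadth order, 10^(8 x 10^8),
# a number with 800,000,000 zeros:
assert order_limit_exp(10**8) == 800_000_000
# Archimedes' final number, a myriad myriad raised to the power of a
# myriad myriad times a myriad myriad, has exponent 8 x 10^16:
assert MYRIAD_MYRIAD_EXP * (10**8 * 10**8) == 80_000_000_000_000_000
```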

Archimedes, who believed that ‘mathematics reveals its secrets only to those who approach it with pure love, for its own beauty’.

As it turned out, for his sand-counting endeavour, Archimedes needn’t have bothered to go beyond the first of his periods. In his cosmic scheme of things, the whole of space, out as far as the fixed stars, was the equivalent of two light-years in diameter with the Sun at its centre. Using his estimate of the size of a grain of sand he came up with a figure of 8 × 10^63 grains needed to turn the cosmos into one giant beach – a number of only the eighth order of the first period. Even taking a modern estimate for the diameter of the observable universe of 92 billion light-years there would be no room for more than about 10^95 grains of sand, which is still a number of merely the twelfth order, first period.

Archimedes may have been the Wizard of the West as far as large numbers went, but in the East intellectuals would soon take the quest for numerical behemoths much further. In an Indian text called the Lalitavistara Sutra, written in Sanskrit and dating from about the third century, Gautama Buddha is portrayed as describing to the mathematician Arjuna a system of numerals beginning with the koti, a Sanskrit term for 10,000,000. From this starting point he works his way through a long list of named numbers, each 100 times bigger than the last: 100 koti make an ayuta, 100 ayuta make a niyuta, and so on, all the way up to the tallakshana, which is one followed by 53 zeros. He also gave names to even bigger numbers, such as the dhvajhagravati, equal to 10^99, all the way up to the uttaraparamanurajapravesa, which is 10^421.

Another Buddhist text went further – spectacularly further – along the road to the eye-wateringly vast. The Avatamsaka Sutra describes a cosmos of infinitely many, interpenetrating levels. In Chapter 30, the Buddha is once again expounding on big numbers, starting from 10^10, squaring this to give 10^20, squaring this in turn to give 10^40, and so on to 10^80, 10^160, 10^320, until he reaches 10^101,493,392,610,318,652,755,325,638,410,240. Square this, he declared, and the number reached is ‘incalculable’. Beyond that, having apparently raided the Sanskrit thesaurus for superlatives, he names successively larger numbers ‘measureless’, ‘boundless’, ‘incomparable’, ‘innumerable’, ‘unaccountable’, ‘unthinkable’, ‘immeasurable’, and ‘unspeakable’, before culminating with ‘untold’, which works out to be 10^(10×2^122). (The ^ sign is used to show where one number is being raised to the power of another, so 10^(10×2^122) is the same as 10 raised to the power of 10 × (2 to the power of 122).) This dwarfs the biggest number that Archimedes entertained in his writings – 10^80,000,000,000,000,000 – to the extent that Archimedes’ number would have to be raised to the power of roughly 66,000,000,000,000,000,000 to get into even the same ballpark as ‘untold’.
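Squaring a power of 10 simply doubles its exponent, which is why the sutra’s numbers grow so quickly. The pattern, and the sheer scale of ‘untold’, can be checked in a few lines of Python (a sketch; the variable names are mine):

```python
# Squaring 10^e gives 10^(2e): each squaring doubles the exponent.
e, exponents = 10, [10]
for _ in range(5):
    e *= 2
    exponents.append(e)
assert exponents == [10, 20, 40, 80, 160, 320]  # the sutra's opening steps

# 'Untold' is 10^(10 x 2^122): its *exponent* alone is a 38-digit
# number, dwarfing the exponent 8 x 10^16 of Archimedes' largest number.
untold_exp = 10 * 2**122
assert len(str(untold_exp)) == 38
assert untold_exp > 8 * 10**16
```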

Both Archimedes and the Buddhist sutras used large numbers to give some impression of the vastness of their respective versions of the universe. In the Buddhist case, too, it was believed that naming a thing imbued one with a certain power over it. But mathematicians generally weren’t too interested in inventing schemes for naming and representing bigger and bigger numbers just for the sake of it. Our convention of using words ending in ‘-illion’ to name big numbers dates back to the fifteenth-century French mathematician Nicolas Chuquet. In an article, he wrote down a huge number, broke it down into groups of six digits, and then suggested that the groups be called:

million, the second mark byllion, the third mark tryllion, the fourth quadrillion, the fifth quyillion, the sixth sixlion, the seventh septyllion, the eighth ottyllion, the ninth nonyllion and so on with others as far as you wish to go.

In 1920, the American mathematician Edward Kasner asked his nine-year-old nephew, Milton Sirotta, to invent a name for the number 1 followed by 100 zeros. His suggestion, ‘googol’, entered the popular lexicon after Kasner wrote about it in the book Mathematics and the Imagination, co-authored with James Newman. The young Sirotta also offered the name ‘googolplex’ for the number ‘one, followed by writing zeros until you get tired’. Kasner opted for a more precise definition ‘because different people get tired at different times and it would never do to have Carnera [a heavyweight boxing champion] be a better mathematician than Dr Einstein, simply because he had more endurance’. Still, as a description of what actually happens when you try to write out a googolplex, ‘until you get tired’ is accurate, if a massive understatement. A googolplex, as Kasner defined it, is 10googol, or 1 followed by a googol number of zeros. A googol is easy to write out in full:

10,000,000,000,000,000,000,000,000,000,000,000,
000,000,000,000,000,000,000,000,000,000,000,000,
000,000,000,000,000,000,000,000,000,000

But a googolplex is sensationally larger. There isn’t enough paper on Earth, or, if it comes to that, matter in the entire observable universe, to write out the digits of a googolplex, not even if you wrote the zeros as small as protons or electrons. The googolplex is larger than any named number of antiquity, including the mighty ‘untold’. However, it is smaller than a number that, in 1933, arose out of some research being carried out by the South African mathematician Stanley Skewes into prime numbers. What became known as Skewes’ number is an upper bound, or maximum possible value, to a problem of how prime numbers are distributed. G. H. Hardy, the famous British mathematician, mentor of Ramanujan, and author of the widely read A Mathematician’s Apology, described Skewes’ number at the time as ‘the largest number which has ever served any definite purpose in mathematics’. It has the value 10^10^10^34 or, more precisely, 10^10^8852142197543270606106100452735038.55. This colossal upper bound itself required the assumption of the Riemann Hypothesis, which, as we saw in Chapter 7, still continues to stump mathematicians. A couple of decades later, Skewes announced another number, in connection with the same prime numbers problem, but without assuming the Riemann Hypothesis, that was even bigger – 10^10^10^964, give or take a few trillion.

Not to be outdone by pure mathematics, physics was also coming up with its own gigantic numbers as solutions to some unusual conundrums. An early player in the big numbers game on the physics front was the French mathematician, theoretical physicist, and polymath Henri Poincaré who among his many contributions wrote about how long it would take physical systems to return to an exact state. In the case of the universe, the so-called Poincaré recurrence time is the interval it would take for matter and energy to rearrange itself, down to the subatomic level, having gone through an unbelievable number of combinations in between, back to a certain starting point. The Canadian theorist Don Page, one-time student of Stephen Hawking, has estimated that the Poincaré recurrence time for the observable universe is 10^10^10^10^2.08 years, which is a number lying somewhere between the small and large Skewes’ numbers, and larger than the googolplex. He also calculated the maximum Poincaré recurrence time for any universe of a particular type, which was even larger, being 10^10^10^10^10^1.1 years – larger than the large Skewes’ number. As for the googolplex itself, Page has pointed out that it’s roughly equal to the number of microscopic states in a black hole with a mass as big as that of the Andromeda Galaxy.

Untold, the googolplex, and the Skewes’ numbers are all colossal in terms of what we can truly grasp in our minds. But they’re vanishingly small compared with a number named after the American mathematician Ronald Graham who first published it in a paper in 1977. Like the Skewes numbers before it, Graham’s number arose in connection with a serious mathematical problem, in this case to do with a branch of the subject called Ramsey theory. Approaching Graham’s number has to be done in stages just as if you were climbing one of the world’s great mountains. The first step is to be aware of a way of representing very large numbers, devised by the American computer scientist Donald Knuth, which is known as up-arrow notation. This builds upon the idea that multiplication can be thought of as repeated addition, and exponentiation (raising a number to a power) can be thought of as repeated multiplication. For example, 3 × 4 is 3 + 3 + 3 + 3 and 3^4 = 3 × 3 × 3 × 3. In Knuth’s notation, exponentiation is shown by a single up-arrow: for example, a googol, or 10^100, is written as 10↑100, and 3 cubed, or 3^3, becomes 3↑3. Repeated exponentiation, for which we have no everyday notation, is shown by two up-arrows, so that 3↑↑3 = 3↑(3↑3) = 3^3^3. The ↑↑ operation, known as tetration (because it comes fourth in the hierarchy after addition, multiplication, and exponentiation), is a lot more powerful than it seems at first glance. 3↑↑3 = 3^3^3 = 3^27, which has the value 7,625,597,484,987.
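Up-arrow notation is defined by a simple recursion, which can be sketched in Python (workable only for tiny inputs, since the values explode almost immediately):

```python
def arrow(a, n, b):
    """Compute a followed by n up-arrows followed by b: one arrow is
    exponentiation, and n arrows means iterating the (n-1)-arrow
    operation b times."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1   # by the usual convention
    return arrow(a, n - 1, arrow(a, n, b - 1))

assert arrow(3, 1, 3) == 27                  # 3↑3 = 3^3
assert arrow(3, 2, 3) == 7_625_597_484_987   # 3↑↑3 = 3^3^3 = 3^27
assert arrow(2, 3, 3) == 65_536              # 2↑↑↑3 = 2↑↑4 = 2^2^2^2
```

Even arrow(3, 3, 3) – the number tritri met below – is already hopelessly beyond any computer.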

Another way to show tetration is in the form of a power tower, which is a typesetter’s worst nightmare. If a number a is to be tetrated to order k then it can be written:

a↑↑k = a^a^…^a (a power tower of k copies of a)

In other words, the number a is raised to a stack of exponents k – 1 high.

The rate at which the operators gather momentum is startling: 3 × 3 = 9, 3^3 = 27, 3↑↑3 is more than 7.6 trillion (a 13-digit number). Tetrating 4 is even more surprising: 4↑↑4 = 4^4^4^4 = 4^(4^256), which is roughly 10^10^154 – a number bigger than a googolplex (10^10^100). We’ve gone beyond the mighty googolplex with nothing more than a simply written action on the number 4.
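The claim that 4↑↑4 overtakes a googolplex can be checked without ever building the number, by working with logarithms instead. A quick Python check (the variable name is mine):

```python
import math

# 4↑↑4 = 4^(4^(4^4)) = 4^(4^256).  Far too big to construct, but its
# base-10 logarithm is just 4^256 * log10(4).
log10_tower = 4**256 * math.log10(4)

# That logarithm is itself a number of roughly 154 digits, so
# 4↑↑4 ~ 10^10^154, while a googolplex is only 10^10^100.
assert 10**153 < log10_tower < 10**155
assert log10_tower > 10**100   # hence 4↑↑4 exceeds a googolplex
```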

The giant step from exponentiation to tetration suggests that adding another up-arrow will yield something even more dramatic, and we’re not disappointed. Repeated tetration, called pentation, results in a spectacular explosion of growth. The innocuous-looking 3↑↑↑3 = 3↑↑(3↑↑3) = 3↑↑7,625,597,484,987 = 3^3^3^…^3, a power tower with 7,625,597,484,987 threes. If a power tower of height 4 is enough to surpass the googolplex, think of what this can do. It’s an unimaginably huge number, impossible to write out in a human lifetime even in the form of a power tower. If printed as a power tower, it would stretch all the way to the Sun. Known as tritri, it is far, far larger than any number that we’ve mentioned up to this point, and can barely even be comprehended by us mere mortals. Yet, we’ve just begun the process. As large as tritri is, it’s an insignificant dust mote in comparison to the great summit of Graham’s number. Adding another up-arrow brings us to 3↑↑↑↑3 = 3↑↑↑(3↑↑↑3) = 3↑↑↑tritri. Let’s just go over what that means. Ascending through the sequence of power towers of threes, the first is 3, the second is 3^3^3 = 7,625,597,484,987, and the third is 3^3^…^3 with 7,625,597,484,987 threes, in other words tritri. The fourth power tower is 3^3^…^3 with tritri threes, and so on. 3↑↑↑↑3 is the tritrith power tower. This is a mind-bogglingly enormous step up from three up-arrows. Yet it brings us only to G1, the first of a series of G-numbers needed to reach the summit of Graham’s number itself. Having arrived at base camp G1, our next goal is G2. Remember that every time we add a single extra up-arrow this produces a monstrous increase in the number being acted upon. Bearing this in mind, the definition of G2 is 3↑↑…↑↑3 with G1 up-arrows. Even grasping dimly what this means is enough to bring on a feeling of vertigo, a dizzying glimpse into how large numbers can be. Just one extra up-arrow brings an awesome increase in size by everyday standards, yet G2 has G1 up-arrows.
And, as you might guess, G3 has G2 up-arrows, G4 has G3 up-arrows, and so on. Graham’s number, as it turns out, is G64. The 1980 edition of the Guinness Book of World Records recognised it as the largest number ever used in a mathematical proof.
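Graham’s number itself can never be computed, but the rule that builds the G-sequence is simple enough to state symbolically. A toy sketch in Python (purely illustrative, my own naming):

```python
def g_description(k):
    """Symbolic description of G_k in Graham's construction: G1 has
    four up-arrows, and every later level uses the previous level's
    *value* as its up-arrow count."""
    if k == 1:
        return "3 ↑↑↑↑ 3"
    return f"3 ↑...↑ 3 with G{k - 1} up-arrows"

assert g_description(1) == "3 ↑↑↑↑ 3"
assert g_description(2) == "3 ↑...↑ 3 with G1 up-arrows"
assert g_description(64) == "3 ↑...↑ 3 with G63 up-arrows"  # Graham's number
```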

The problem that spawned Graham’s number is fantastically hard to solve, but quite easy to state. Graham was thinking about multidimensional cubes – hypercubes in n dimensions. Suppose any two corners, or vertices, are joined by a line that may be coloured either red or blue. The question that Graham asked was: what’s the smallest value of n such that every possible colouring must contain four vertices, all lying in a single plane, with all six lines joining them the same colour? Graham managed to prove that the lower limit for n was 6 and the upper limit was G64. This colossal range reflects the difficulty of the problem. Graham could prove that a value for n satisfying the conditions existed but, as an upper bound, had to define a ridiculously huge number to prove anything. Since then mathematicians have managed to whittle down (relatively speaking) the range to values of n between 13 and 9↑↑↑4.
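Some feel for why the problem is so hard comes from simply counting the colourings involved. A short Python calculation (a sketch, assuming nothing beyond the problem statement):

```python
import math

def num_colourings(n):
    """Number of red/blue colourings of all the lines joining pairs of
    vertices of an n-dimensional hypercube."""
    vertices = 2**n
    lines = math.comb(vertices, 2)   # one line for every pair of corners
    return 2**lines

# Even at the old lower bound n = 6, a 6-cube has 64 vertices and
# 2016 lines, hence 2^2016 (roughly 10^607) colourings to check --
# which is why brute-force search is hopeless.
assert math.comb(2**6, 2) == 2016
assert num_colourings(6) == 2**2016
```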

Graham’s number, like the googol and googolplex, is a much-quoted example of a really large number. But it’s also much misunderstood. Firstly, it’s no longer anywhere near the largest number ever defined. Secondly, in the search for ways of representing and defining new world record numbers there’s little point in starting from Graham’s number and making elementary extensions of it.

In recent years, a branch of recreational maths known as googology has sprung up, the sole aim of which is to push back the frontiers of truly big numbers by defining and naming ever larger integers. Of course, anyone can think of a number larger than any that is stated. If I say ‘Graham’s number’, you might say ‘Graham’s number plus 1’, or ‘Graham’s number to the power of Graham’s number’, or even ‘G64↑↑↑↑G64 with G64 up-arrows’ (which is roughly G65). However, all such extensions, involving repeated use of the same kind of operators, fail to bring about a radical change: the outcome is still Graham’s numberish. In other words, it will be a number produced in roughly the same way as Graham’s number itself, using a similar combination of tricks. Among serious googologists, an inelegant mishmash of existing numbers and functions that does little to enlarge the original large number is referred to as a ‘salad number’ and is heavily frowned upon. Graham’s number takes up-arrow notation and extends it to its limits. By contrast, salad numbers simply tack one insignificant operation onto Graham’s number. What googologists want is not a naïve and modest increase of it but an entirely new system that can be extended to the point where Graham’s number appears utterly negligible. There is one such system that can be extended indefinitely. It’s known as the fast-growing hierarchy, because of the prodigious growth rates it allows. What’s more, it’s a technique well tried and tested by mainstream mathematicians and so is often used nowadays as the benchmark for new ways of generating fantastically large numbers.

Two things are important to know from the start about the fast-growing hierarchy. The first is that it’s a series of functions. A function in maths is just a relationship, or a rule, for turning inputs into outputs. Think of a function as being like a little machine that transforms one value into another by always going through the same process. The process might be, for instance, ‘add 3 to the input’. If we call the input x and the function f(x), pronounced ‘f of x’, then f(x) = x + 3.
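In Python, that ‘add 3 to the input’ machine is a one-liner:

```python
def f(x):
    return x + 3   # the rule: add 3 to the input

assert f(4) == 7   # f(4) = 4 + 3
assert f(0) == 3
```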

The second key thing about the fast-growing hierarchy is that it uses ordinals to index the functions – in effect, to specify how many times a process has to be carried out. We came across ordinals in the last chapter, on the subject of infinity. Ordinals, or ordinal numbers, tell us the position of something in a list or the order of something in a series. They can be finite or infinite. Everyone’s familiar with finite ordinals, such as fifth, eighth, one hundred and twenty-third, and so on. But no one hears about the infinite ones unless they take a deeper plunge into maths. It turns out that in trying to reach and define super large (but finite) numbers, both finite and infinite ordinals are extremely useful. Indexing functions with finite ordinals allows us to make our way to some reasonably large numbers. But the fast-growing hierarchy really comes into its own when it taps the power of infinite ordinals to control how many times a function is performed.

The starting point of the hierarchy is as straightforward as you could imagine. It’s just a function that adds one to a number. Let’s call this starting function f0. So, if the number we want to put through the function mill is n, then f0(n) = n + 1. This isn’t going to get us to big numbers any time soon – it’s just counting up in steps of 1 – so we’ll move on to f1(n). This new function feeds the previous one into itself n times, in other words f1(n) = f0(f0( … f0(n))) = n + 1 + 1 + 1 … + 1, with n ones, giving a total of 2n. Again, this isn’t too impressive in terms of how quickly it can get us into the land of giant numbers. But it reveals the process that ultimately gives the fast-growing hierarchy its immense power: recursion.
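The first two rungs are easy to try out directly. A sketch in Python, following the definitions just given:

```python
def f0(n):
    return n + 1                  # the starting function: add one

def f1(n):
    result = n
    for _ in range(n):            # feed f0 into itself n times
        result = f0(result)
    return result                 # n plus n ones, i.e. 2n

assert f0(7) == 8
assert f1(5) == 10
assert f1(100) == 200
```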

In art, music, language, computing, and maths, recursion pops up in all kinds of different guises, but always it refers to something that feeds back into itself. In some cases this just leads to an endless, repetitive loop. For example, there’s the joke glossary entry: ‘Recursion. See Recursion’. More elaborate is the recursive loop appearing in Maurits Escher’s Print Gallery (1956), which shows a gallery in a city in which there’s a picture of a gallery in a city in which … In engineering, a classic example of recursion is feedback, where the output from a system gets routed back as input. It’s a familiar problem for performers, such as rock musicians, on stage, and often happens if a microphone is located in front of a loudspeaker to which it’s connected. Sounds picked up by the mic come out of the speaker, having been amplified, then re-enter the mic at a higher volume to be amplified again, and so it goes on until, very quickly, the familiar, ear-piercing squeal of feedback emerges. Recursion in maths works along similar lines. A function takes the place of an electronic system, such as a microphone-amplifier-loudspeaker combination, and calls upon itself, so that it feeds its own output back as input.

We’d reached f1(n) on the ladder of the fast-growing hierarchy. The next rung, represented by f2(n), simply feeds f1(n) into itself n times. We can write it as f2(n) = f1(f1( … f1(n))) = n × 2 × 2 × 2 … × 2 with n twos. This is the same as n × 2^n, which is an exponential function. Plugging in a value of, say, 100 for n, we get f2(100) = 100 × 2^100 = 126,765,060,022,822,940,149,670,320,537,600, or about 127 million trillion trillion. If this were a bank balance it would be wealth far beyond the dreams of even Bill Gates, but it’s a lot smaller than some of the numbers, such as a googol, that we’ve already run into. It’s also smaller than the largest ever lawsuit, for two undecillion (two trillion trillion trillion) dollars, filed on April 11, 2014, by Manhattan resident Anton Purisima after claims that he was bitten by a ‘rabies-infected’ dog on a New York City bus. In a rambling 22-page handwritten complaint, accompanied by a photo of an unreasonably outsized bandage around his middle finger, Purisima sued NYC Transit, LaGuardia Airport, Au Bon Pain (where he insisted he was routinely overcharged for coffee), Hoboken University Medical Center, and hundreds of others for more money than there is on the planet. His case was dismissed in May 2017 on the grounds that it ‘lacks an arguable basis either in law or in fact’. Hopefully, Purisima’s mathematical knowledge doesn’t extend to the fast-growing hierarchy, otherwise even larger suits (he’s previously sued several major banks, Lang Lang International Music Foundation, and the People’s Republic of China) may follow.
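The closed form n × 2^n makes f2 easy to check against the 33-digit figure quoted above:

```python
def f2(n):
    # feeding f1 (which doubles its input) into itself n times
    # multiplies n by 2^n
    return n * 2**n

assert f2(3) == 24   # f1(f1(f1(3))) = f1(f1(6)) = f1(12) = 24
assert f2(100) == 126_765_060_022_822_940_149_670_320_537_600
```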

The function f3(n) involves n repetitions of f2(n) and leads to a number slightly greater than 2 to the power n to the n to the n … where the power tower is n high. This brings us to the stage of two up-arrows, or tetration – the operation we came across earlier when making our assault on Graham’s number. Continuing in the same vein, f4(n) involves 3 up-arrows, f5(n) 4 up-arrows, and so on, every increase of the ordinal by 1 having the effect of adding one more up-arrow, so that fk(n) involves k – 1 up-arrows. This takes us into big number territory by everyday – and even the litigious Anton Purisima’s – standards. But just adding one up-arrow at a time would never get us to Graham’s number, let alone anything vastly bigger, in a sensible amount of time. For that we need to do something a little unexpected. To reach truly colossal finite numbers we have to make use of numbers that are actually infinite.

The smallest kind of infinity, as we found in the last chapter, is aleph-null, the infinity of the natural numbers. Although aleph-null can’t change in size – in other words, how much it contains – it can vary in length depending on how its contents are organised. The shortest length of aleph-null is the infinite ordinal known as omega (ω). The next shortest is ω + 1, then ω + 2, ω + 3, and so on, without end. These infinite ordinals, said to be countable because each can still be matched up, one to one, with the natural numbers, serve as the springboard for reaching some of the largest finite numbers ever conceived. To start off, we need a definition of what’s meant by fω(n), the function indexed by the smallest infinite ordinal. We can’t simply subtract 1 and apply the recursion process we talked about earlier because there’s no such thing as ω – 1. Instead we define fω(n) to be fn(n). Now, to be clear, we aren’t saying that ω = n. What we’re doing is expressing fω(n) in terms of (finite) ordinals smaller than ω, so that we can reduce the function to a form that’s useful for doing calculations. You might say we may as well write fn(n) instead of fω(n) and get the same result, but that would prevent the next crucial step – the step at which the immense power of the fast-growing hierarchy becomes apparent. As soon as we go from fω(n) to fω+1(n) something dramatic happens. Remember, when the ordinal that indexes the function goes up by 1 this feeds the previous function back into itself n times. If using a finite ordinal results in a fixed number of up-arrows and using ω produces n – 1 up-arrows, then using ω + 1 allows us to feed back into the number of up-arrows n times, which amounts to a fantastic jump in the strength of the recursion.

To understand this, think about the function fω+1(2), which equals fω(fω(2)) using our recursion rule. Because we defined fω(n) to be the same as fn(n), the inner fω(2) is just f2(2), so we can rewrite fω+1(2) as fω(f2(2)). (We can’t work out the value of the outer fω until we know what value the inner function will take.) As it turns out, f2(2) = 8, so now we’re left with fω(8). Finally, we can simplify this last ω and get f8(8), which is a number with 7 up-arrows. While this shows how fω+1 can be used to feed back into the number of up-arrows, it doesn’t give a clear impression of the function’s awesome capability. This only becomes apparent as n gets larger, and the corresponding number of feedback loops grows. Putting n = 64 gives fω+1(64), which is approximately Graham’s number. The next step up the fast-growing hierarchy, fω+2(n), breaks into new territory because it plugs all of the mathematical machinery used to reach the level of Graham’s number back into itself. The result is a number we can write roughly as G_G_…_64, with the G subscript nested 64 levels deep, although there’s no hope of trying to grasp, even vaguely, what this means.
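The reduction fω+1(2) → fω(fω(2)) → fω(8) → f8(8) can be followed in code right up to the point where the numbers become uncomputable. A sketch in Python (finite indices plus the diagonal rule for ω; names are mine):

```python
def f(k, n):
    """Finite-index rungs of the fast-growing hierarchy:
    f(0, n) = n + 1, and f(k, n) applies f(k-1, .) to n, n times."""
    if k == 0:
        return n + 1
    result = n
    for _ in range(n):
        result = f(k - 1, result)
    return result

def f_omega(n):
    return f(n, n)   # the diagonal rule: f_omega(n) is defined as f_n(n)

# f_{omega+1}(2) = f_omega(f_omega(2)) = f_omega(8) = f_8(8),
# which is already far too large ever to evaluate.
assert f(2, 2) == 8
assert f_omega(2) == 8
```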

The countably infinite ordinals stretch away into the far distance, each successive one providing the basis for a recursive function that utterly dwarfs the power of the one before it. The omegas alone form a sequence so long that it only ends when we reach omega raised to a power tower of omega omegas. This mighty ordinal, known as epsilon-zero, is so vast that it can’t be described within our conventional system of arithmetic, known as Peano arithmetic. With each step along the eternal road of omegas, the finite number generated by recursion increases by an amount that defies comprehension. But beyond the loftiest omegan power tower lie tier upon higher tier of yet greater infinite ordinals – first the epsilons, then the zetas, and on and on, without end – as we saw earlier in our exploration of infinity. These increasingly vast ordinals serve to inform more and more powerful degrees of feedback. Finally, we reach a tremendously large ordinal known as Gamma-zero (Γ0) or, more magnificently, the Feferman–Schütte ordinal, after the American philosopher and logician Solomon Feferman and the German mathematician Karl Schütte who first defined it. While Gamma-zero is still countable, and there are countable ordinals beyond it, to actually define it requires the use of uncountable ordinals (ones that can’t be made from rearranging elements of aleph-null but instead require aleph-one or more elements). This process is reminiscent of how the fast-growing hierarchy itself evolves. Just as we have to resort to using infinite ordinals in the fast-growing hierarchy to describe huge finite numbers, so we need to turn to uncountable ordinals to describe truly tremendous countably infinite ordinals. There aren’t any adjectives left to describe adequately the size of the finite numbers to which the Feferman–Schütte ordinal, and others beyond it, give rise by recursion. 
Nor does any mathematician have a brain big or clever enough to grasp the immensity of the numbers their recursive techniques can spawn. But that doesn’t stop them from coming up with even mightier methods for big number generation. Notable among these is the TREE function.

As the name suggests, a tree in mathematics can have the appearance of a tree that grows in the ground or a family tree, with branches spreading out from a common trunk. Mathematical trees are a special variety of what in maths are known as graphs. Usually we think of graphs as being charts drawn on graph paper, in which one value is plotted against another. But the type of graphs we’re talking about in connection with trees are different: they’re ways of representing data in which dots, called nodes, are connected by line segments, called edges. If it’s possible to start at a node, move to other nodes along edges, and then return to the starting node without repeating any edges or nodes, then the route taken is known as a cycle and the graph is said to be cyclic. If it’s possible to start at any node and travel to any other node, without repeating an edge or node along the way, then the route taken is called a path and the graph is said to be connected. A tree is defined as a graph that is connected but has no cycles. Both family trees and biological trees also have this kind of structure. If a unique number or colour is assigned to each node, then the tree is said to be labelled. Furthermore, if we assign one node to be the root, then we have a rooted tree. One useful property of rooted trees is that for any node, we can always trace back a path to the root.
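The definition – connected and free of cycles – is easy to test mechanically. A small sketch in Python, with graphs given as dictionaries from each node to the set of its neighbours (my own representation, not from the text):

```python
def is_tree(graph):
    """True if the graph is connected and has no cycles.  A connected
    graph is a tree exactly when it has one fewer edge than it has
    nodes."""
    nodes = list(graph)
    edges = sum(len(nbrs) for nbrs in graph.values()) // 2
    seen = set()                  # depth-first search from the first node
    stack = [nodes[0]]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph[node])
    return len(seen) == len(nodes) and edges == len(nodes) - 1

tree = {1: {2, 3}, 2: {1}, 3: {1, 4}, 4: {3}}   # a small rooted tree
triangle = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}    # contains a cycle
assert is_tree(tree) is True
assert is_tree(triangle) is False
```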

Some mathematical trees that have the same kind of branching structure as a real tree can be fitted inside others of their kind. They’re said to be homeomorphically embeddable, which is a fancy way of saying they’re similar in form or appearance and one of them is like a smaller version of the other. Of course, mathematicians are a bit more precise about the definition. They start with a larger tree and see how much of it can be pruned using a couple of different methods. First, if there’s a node (except for the root node) that has just two edges leading into, or from it, that node can be removed and the two edges joined into one. Second, if two nodes are joined by a single edge, the edge can be contracted and the two nodes compressed into one. The colour of this new node is the colour of whichever node was originally closer to the root. If a smaller tree can be made by applying these two steps in any order to the larger tree, the smaller one is said to be homeomorphically embeddable in the larger. The American mathematician and statistician Joseph Kruskal proved an important theorem to do with this kind of tree. Suppose there’s a sequence of them so that the first tree can only have one node, the second up to two nodes, the third up to three, and so on, and that no tree is homeomorphically embeddable in any subsequent tree. What Kruskal found is that such a sequence always has to end at some point. The question is: how long can the sequence be?

In response, American mathematician and logician Harvey Friedman, listed in the Guinness Book of Records in 1967 as the world's youngest professor (an assistant professor at Stanford aged just 18), defined the tree function, TREE(n), as the maximum length of such a sequence when n colours are available for the nodes. Friedman then investigated the output of the function for different values of n. The first tree consists of a single node of a certain colour, which can't be used again. If n = 1, this is the only colour and the sequence immediately stops, so that TREE(1) = 1. If n = 2, there's one more colour. The second tree can contain up to two nodes, so it contains two nodes, both of this second colour. The third tree must also use only this colour, but can have just one node, as otherwise the second tree would be homeomorphically embeddable in the third. Beyond that, no more trees are possible, so TREE(2) = 3. The big shock, as Friedman found, comes when we reach TREE(3). In a sudden explosion of complexity, the length of the sequence far surpasses Graham's number; to express its size at all we need the fast-growing hierarchy, indexed by something like the small Veblen ordinal, that extraordinarily un-small ordinal we mentioned in our travels among the various infinities.
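The fast-growing hierarchy invoked here is easy to state at its lowest, finite levels, and a few lines of Python (a sketch of those finite levels only; the ordinal-indexed levels needed for TREE(3) are far beyond any program) show how violently it accelerates. Level 0 just adds one; each later level iterates the one before it.

```python
def f(a, n):
    """Fast-growing hierarchy, finite indices only:
    f_0(n) = n + 1;  f_{a+1}(n) = f_a applied n times to n."""
    if a == 0:
        return n + 1
    result = n
    for _ in range(n):  # iterate the previous level n times
        result = f(a - 1, result)
    return result

print(f(0, 3))  # 4
print(f(1, 3))  # 6: successive steps 3 -> 4 -> 5 -> 6
print(f(2, 3))  # 24: roughly doubling, 3 -> 6 -> 12 -> 24
```

Already f_3 grows like a tower of exponentials and f_4(3) is hopelessly out of reach; TREE(3) lives at a level indexed not by any integer but by an enormous ordinal.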

So popular has googology – the quest to define ever larger numbers – become that it’s given rise to several contests. One of the first was the Bignum Bakeoff, organised by American maths whizz-kid David Moews in 2001. Competitors in the Bakeoff were challenged to produce the biggest number they possibly could from a computer program that was no more than 512 characters long (ignoring spaces) in the programming language C. No present-day computer could actually complete any of the programs submitted within the lifetime of the universe, so the entries were analysed by hand and ordered based on their position in the fast-growing hierarchy. The winning entry was a program called loader.c, after its creator, the New Zealander Ralph Loader. A computer with an unfeasibly large memory and an outlandish length of time available would be needed to generate the final output. But if it could be done the result would be Loader’s number – an integer known to be bigger than TREE(3) and some other heroic inhabitants of the googologist’s cosmos, such as SCG(13), the thirteenth member of a sequence known as subcubic graph numbers (similar to the TREE sequence, but consisting of graphs in which each vertex has at most three edges).
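The Bakeoff's premise, that a very short program can name a very large number, is easiest to see with a classic example (the Ackermann function, used here purely as an illustration; it grows far more slowly than loader.c, whose actual construction is much more elaborate).

```python
import sys
sys.setrecursionlimit(100_000)  # the recursion is deep even for small inputs

def ack(m, n):
    """The two-argument Ackermann function: a few lines of code
    whose output quickly dwarfs anything writable in decimal."""
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

print(ack(2, 3))  # 9
print(ack(3, 3))  # 61
# ack(4, 2) already has 19,729 decimal digits, and ack(5, 5) could
# never be computed in the lifetime of the universe -- which is
# exactly why the Bakeoff entries had to be ranked by analysis, not by running them.
```

In the fast-growing hierarchy, ack sits at about level f_3; the Bakeoff winners climb incomparably higher with barely more characters.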

In 2007, a large number contest called the Big Number Duel pitted philosophers and old graduate-school chums Agustín Rayo (aka The Mexican Multiplier) of MIT against Adam Elga (aka Dr Evil) of Princeton in a back-and-forth tournament to see who could define the most colossal integer. The numerical slugfest, which blended comedy, convoluted mathematical, logical, and philosophical manoeuvrings, and the melodrama of a world-title boxing match, took place in a packed room in MIT's Stata Center. Elga opened optimistically with the number 1, perhaps hoping that Rayo would have an off day. But Rayo swiftly countered by filling the entire blackboard with 1's. Elga promptly rubbed out a line near the bottom of all but two of the 1's, turning them into factorial signs. Then the duel progressed, eventually transcending the bounds of familiar mathematics, until each competitor was inventing their own notations for ever larger numbers. It's been reported that, at one point, a spectator asked Elga, 'Is this number even computable?' to which Elga, after a brief pause, replied, 'No'. Finally, Rayo delivered the knock-out blow with a number that he described as: 'The smallest positive integer bigger than any finite positive integer named by an expression in the language of first-order set theory with a googol symbols or less.' Just how large Rayo's number is, we don't know and probably never will. No computer could ever calculate it, even given access to a universe that could hold a googol symbols or more. The issue isn't having enough time and space: Rayo's number is uncomputable in the same way that the halting problem is uncomputable.

The poster for MIT’s Big Number Duel.

For now, when it comes to the largest integers that we can sensibly talk about, Rayo’s number more or less marks the boundary with the unknown. Some bigger numbers have been named, notably BIG FOOT, which was announced in 2014. But to gain even a hazy understanding about BIG FOOT would mean entering a strange domain known as the oodleverse and learning the language of first-order oodle theory – a venture best tackled with a higher maths degree and a wry sense of humour. In any case, the largest named numbers to date all build on the same kind of concepts used to reach Rayo’s number.

To penetrate further into endless number space, googologists must build on old methods or develop new ones, just as sending spacecraft deeper into physical space depends on breakthroughs, great and small, in propulsion technology. For the present, big number seekers will probably need to rely on the same tricks used by Rayo but apply them to beefed-up versions of first-order set theory (FOST). They might, for instance, add axioms that would give FOST access to even more staggeringly huge infinities, which could then be used to generate new record-breaking finite numbers.

To be frank, most professional mathematicians aren't much fussed about gigantic numbers for their own sake, any more than they are about extending the known digits of pi. Googology is a sideshow, a bit of intellectual machismo, NASCAR for the number theorist. At the same time, it isn't without its merits. It does expose the limits of our current mathematical universe, just as peering into space with the world's largest telescopes pushes back the frontiers of the physical cosmos.

It’s tempting to think that such huge numbers as Rayo’s bring us closer to the infinite. But, in fact, that isn’t the case. Infinite numbers may be used to generate finite ones, but, however high we go, there’s never a point at which the finite merges with the infinite. The truth is that seeking out ever larger finite numbers gets us no nearer to infinity than the ‘1, 2, 3’ of our early childhood.