The number π, the ratio of a circle’s circumference to its diameter, may seem prosaic because it’s so familiar. For instance, it is widely celebrated in festivals on March 14, during which people talk mathematics between bites of pie. (That date, written as 3/14, shows the first three digits of π, which is approximately 3.14159.) But in truth it’s borderline eerie—like e, it seems to pass through the walls that divide mathematics into different subject areas as if they weren’t there.
The number π is like e in another important way: it’s an irrational number, which was proved in 1761 by Swiss mathematician Johann Lambert, one of Euler’s contemporaries. In 1882, π was demonstrated to possess a more unusual property by German mathematician Carl Louis Ferdinand von Lindemann: it’s a transcendental number (see box), a type of irrational that’s extra-far-removed from the integers, fractions, and other relatively ordinary quantities encountered in arithmetic and algebra. (e is also transcendental, which was proved in 1873 by French mathematician Charles Hermite. Mathematicians, including Euler, suggested the existence of transcendental numbers during the seventeenth and eighteenth centuries. But none were actually known to exist until 1844, when French mathematician Joseph Liouville proved that a group of infinitely complicated fractions he’d dreamed up were transcendental.)
A transcendental number is defined as a number that isn’t the solution of any polynomial equation with integer coefficients. The term “polynomial equation” refers to the basic kind of equations that algebra students are asked to solve. (Equations, that is, with x’s raised to various integer powers and multiplied by integer constants.) An example of such an equation is x² − 2x − 35 = 0. Seven is a solution of this equation, meaning that when 7 is plugged in for x, the equation is true. That rules out 7 as a transcendental. We could also rule out 7 as a transcendental by noting that it’s a solution of x − 7 = 0, x³ − 343 = 0, and an infinite number of other polynomial equations. It’s generally easy to prove that numbers encountered in basic math aren’t transcendentals by devising polynomial equations for which they’re solutions. But proving that a given number is transcendental can be fiendishly difficult. In fact, the only familiar numbers known to be transcendental are π and e. Interestingly, e^π (e raised to the π power) is also known to be transcendental, but no one has been able to determine whether π^e, e^e, and π^π are transcendental or not. The term transcendental refers to the fact that such numbers lie outside (or transcend) the “algebraic” set of numbers that can be solutions of polynomial equations.
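For readers who like to check such things by machine, here is a tiny Python sketch that simply plugs 7 into the two polynomials named above and confirms that each comes out to zero:

```python
# Checking that 7 solves polynomial equations with integer coefficients,
# which is exactly what rules it out as a transcendental number.
def p1(x):
    return x ** 2 - 2 * x - 35   # x^2 - 2x - 35 = 0

def p2(x):
    return x ** 3 - 343          # x^3 - 343 = 0

print(p1(7), p2(7))   # prints "0 0": 7 satisfies both equations
```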
The most remarkable thing about π, however, is the way it turns up all over the place in math, including in calculations that seem to have nothing to do with circles. Highlighting this point, physicist Eugene Wigner related a story about a statistician who showed a friend some formulas involving π that are routinely used for analyzing population trends. After noticing π in the equations, the friend exclaimed, “Well, now you are pushing your joke too far. Surely the population has nothing to do with the circumference of a circle.”
As nineteenth-century mathematician and logician Augustus De Morgan once mused, “this mysterious 3.14159… comes in at every door and window, and down every chimney.” He might have added that once it manages to creep into a room where a mathematician is scribbling away, it likes to sneak onto a page of equations where it seemingly has no right to be and then freeze in place there with a Cheshire Cat-like smile on its face.
In 1671, for example, Scottish mathematician James Gregory discovered an astonishing equation into which π seemed to have quietly slipped while he was playing around with infinite sums. The equation implied that when fractions with consecutive odd-integer denominators are combined in this straightforward way,
1 − 1/3 + 1/5 − 1/7 + 1/9 − 1/11 + …
(where ‘…’ means that the pattern of alternately added and subtracted fractions is continued forever) the grand total equals precisely one-fourth of π, or π/4. (In math, such infinite combinations of similar fractions are called series. Today mathematicians would say that the limit of this series is π/4.) Three years later, Leibniz, the German mathematician who co-invented calculus, independently made the same discovery. Historians believe the first discoverer of this astonishing math fact was an Indian mathematician who lived in the fourteenth or fifteenth century.
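If you want to watch the series creep toward its target, here is a small Python sketch (the cutoff of one million terms is an arbitrary choice, not anything from the text) that adds up the alternating fractions and multiplies the running total by 4:

```python
# Partial sums of 1 - 1/3 + 1/5 - 1/7 + ... approach pi/4, but very slowly.
import math

total = 0.0
for k in range(1_000_000):              # one million terms, an arbitrary cutoff
    total += (-1) ** k / (2 * k + 1)    # +1, -1/3, +1/5, -1/7, ...

print(4 * total)   # 3.1415916..., still wrong in about the sixth decimal place
print(math.pi)     # 3.141592653589793
```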
Proving that this infinite sum of fractions equals π/4 involves trigonometry, which, as I’ll show you later, is much concerned with circles. This suggests that there’s a link between the infinite sum and circles, which in turn makes the connection between π and the sum seem at least somewhat plausible. Math is full of such surprising connections, which is one of its greatest attractions. Indeed, mathematics is said to be the perfect subject for conspiracy theorists because when an unexpected connection turns up in it, something is very likely going on that wants explaining. (I’ve not been able to determine who first made this witty observation, but it bears repeating. However, the mathematicians I know are a lot smarter than the typical conspiracy theorist.)
While the link between Gregory’s infinite sum and π can be explained via a fairly involved mathematical proof, it is by no means obvious at first glance. And imagine how you’d feel if you’d lined up the perfectly orderly row of simple, innocent-looking fractions shown above during a math-phobia recovery session and seemingly out of nowhere this infinitely complicated beast of a number jumped up screaming in your face—a transcendental no less, which, by the way, has somehow gotten itself eternally trapped in the structure of circles. You’d be freaked out, and rightly so.
SUCH STRANGENESS IS ANOTHER example of the kind of thing that can happen when you cross over into the zone of the infinite. The portal in this case was that little group of three dots at the end of the line of fractions shown above. During Euler’s time, mathematicians were particularly fond of this portal; they developed infinite sums like the Gregory-Leibniz one that, among other things, enabled estimating irrational numbers such as π and e with unprecedented accuracy. On one of his frequent forays into the I-zone, Euler showed that “transcendental functions” such as e^x could be recast as infinite sums. We’ll later see how that led to his celebrated formula.
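The text doesn’t spell out the series here, but the standard modern expansion of e^x, presumably the kind of infinite sum meant, is 1 + x + x²/2! + x³/3! + …, and a few lines of Python show how quickly its partial sums home in on the right value:

```python
# Partial sums of the standard series 1 + x + x**2/2! + x**3/3! + ...
# converge rapidly to e**x; with x = 1 they reproduce e itself.
import math

def exp_series(x, terms=20):
    return sum(x ** n / math.factorial(n) for n in range(terms))

print(exp_series(1.0))   # 2.718281828459045 (agrees with e to full precision)
print(math.exp(1.0))     # 2.718281828459045
```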
But it’s perilously easy to get confused during encounters with the infinite. Consider this question: 1 − 1 + 1 − 1 + 1 − 1 + … = what? The answer is obviously 1 if you write the infinite sum this way: 1 + (−1 + 1) + (−1 + 1) + …, since all the (−1 + 1) terms equal 0. Sticking in parentheses to specify which operations to do first seemingly shouldn’t change anything—after all, 2 − 3 + 4 equals 3, as do both (2 − 3) + 4 and 2 + (−3 + 4). But the sum appears to be equal to 0 if you write it this way: (1 − 1) + (1 − 1) + …, since each of the added terms is obviously 0.
A person who was formerly of sound mind might conclude from this that 1 = 0 (because both 1 and 0 apparently equal the same infinite sum), which, as noted in the previous chapter, is a detonator for blowing away the entire number system. This thought-twister is called Grandi’s series—it’s a much-contemplated puzzler in mathematics. Euler believed that its sum equals 1/2, as did other mathematicians of his time. Today, it’s considered a “divergent” series, meaning that you can’t affix a value to its sum—it endlessly wobbles back and forth between 1 and 0 as you go about adding it up.
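A few lines of Python make the wobble visible: the running totals of Grandi’s series never settle down, while finite versions of the two parenthesized groupings described above really do give 0 and 1:

```python
# Running totals of Grandi's series 1 - 1 + 1 - 1 + ... bounce between 1 and 0.
running, totals = 0, []
for k in range(8):
    running += (-1) ** k    # +1, -1, +1, -1, ...
    totals.append(running)
print(totals)               # [1, 0, 1, 0, 1, 0, 1, 0]

# Grouping finitely many of the same terms in the two ways described above:
print(sum(1 - 1 for _ in range(4)))        # (1-1)+(1-1)+(1-1)+(1-1) = 0
print(1 + sum(-1 + 1 for _ in range(4)))   # 1+(-1+1)+(-1+1)+(-1+1)+(-1+1) = 1
```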
Then there’s the fact that if you treat infinity like a number and try to do arithmetic with it, you soon find yourself drawing wacky-sounding conclusions like “infinity plus infinity is equal to infinity, and therefore infinity is twice as big as itself.” (Zero could inspire the same sort of strange statement, since two zeroes added together are equal to zero. But since zero has no largeness associated with it, there’s less temptation to conclude that it is twice as large as itself.) By the same logic, infinity is also a bazillion times as big as itself. But why stop there? The same argument leads to the conclusion that infinity is an infinite number of times as big as itself.
You might conclude from this insanity that it’s not a good idea to assume that infinity is a number. But asserting that there’s no such number seems to imply that the sequence 1, 2, 3,… comes to an end at some point. That would be as counterintuitive as saying, “infinity is an entity that is twice as big as itself.”
Perplexity about the infinite reached its apogee in ancient times when Zeno, a pre-Socratic Greek philosopher, originated a series of famous paradoxes concerning it. We’ve already met him indirectly—the paradox about points in time mentioned in Chapter 2 was a takeoff on one of his befuddling themes. Greek thinkers were basically baffled by the infinite until Aristotle came up with a clever way to think about it that held sway for the next two millennia: infinity, he argued, isn’t an actual thing. But there are “potential infinities,” like the counting numbers (1, 2, 3,… ), that “never give out in our thought.”
Aristotle’s argument didn’t solve all the problems that infinity poses, nor did it end the philosophical fussing about it. But his clever conceptualization, which might be summarized as, “infinity sort of exists, but not really, because it’s an imaginary process, not a true thing,” let people tiptoe around the infinite without blowing their minds. Potential infinity, and related process-invoking ideas that future mathematicians would use, such as approaching a limit, built on the conceptual foundation that Aristotle laid.
With the advent of calculus in the 1600s, however, a new set of difficulties involving the infinite arose in mathematics. The function-manipulating procedures in calculus that make it possible to quantify instantaneous rates of change were formulated by introducing vanishingly small numbers called infinitesimals into calculations. These speck-like numberlets were thought of as tiny, finite quantities, and yet, when it was convenient to do so in calculations, they were treated just like zeroes. Newton called them vanishing quantities, a rather unfortunate term that highlighted their dubious nature—it made them sound like props used in magic tricks. Anglican bishop and philosopher George Berkeley famously lampooned the implicit contradiction at the heart of the new math of the era by pointing out that infinitesimals were “neither finite quantities nor quantities infinitely small, nor yet nothing. May we not call them the ghosts of departed quantities?”
During the 1800s, mathematicians eradicated the troubling numerical spooks when they rigorously reformulated the foundations of calculus. But in the late nineteenth century, German mathematician Georg Cantor spearheaded a revolutionary way of thinking about infinity that dismissed Aristotle’s “it’s not really real, folks” approach and forced mathematicians to confront unsettling issues about the infinite once again. Cantor couched his new ideas in the language of set theory, which concerns groupings such as the positive integers, all the fractions between zero and one, and the irrational numbers. He proposed that such sets possess actual, not just potential, infinity.
Cantor happily embraced strange-looking arithmetical statements like “infinity + infinity = infinity.” The most boggling implications of his theory, however, concerned the relative sizes of infinite sets—he demonstrated that some such sets are actually larger than others. For instance, he proved that the irrational numbers possess a bigger degree of infinity than the rationals do. In fact, there’s an infinite number of levels of infinity, according to Cantor’s theory.
This is far-out stuff, and it was dismissed as feverish ravings by a number of Cantor’s eminent contemporaries. French mathematician Jules Henri Poincaré, for example, declared that Cantor’s theory was a “grave disease” infecting mathematics. But Cantor also had defenders. Perhaps his most prominent advocate was the great German mathematician David Hilbert, who famously asserted in 1926 that “nobody shall ever expel us” from the “paradise” of Cantor’s theory of the infinite. Tragically, Cantor, who suffered from recurring bouts of depression, died in a mental hospital eight years before Hilbert wrote that ringing declaration. It isn’t clear whether the horrendously tricky challenges he took on contributed to his worsening mental instability. But they may well have done so, at least indirectly, because of all the stress he faced when attacked by prominent mathematicians who dismissed his ideas as ridiculous.
Yet mathematicians have been drawn to infinity through the ages like moths to flames. Or perhaps I should say, like night-hikers to the glimmering of distant fires. Unlike philosophers, who are attracted to the infinite because of its endless fertility as a subject of debate, mathematicians mainly regard infinity as a highly useful conceptual tool (albeit a somewhat tricky-to-use one) for solving practical problems. This is why its symbol, ∞ (sometimes called the lazy eight), which was introduced in 1655 by English mathematician John Wallis, is ubiquitous in calculus and other areas of math. As mentioned earlier, Euler invoked infinity to make a number of his landmark advances, including the derivation of the general equation that led to Euler’s formula.
In short, infinity is like a colossal dragon that’s known for inducing madness in those who dare to stare hard at it but which is also known for making an honest living by traveling around the countryside and hiring itself out to pull farmers’ plows.* (Which is yet another paradox about it.)
BUT LET’S GET BACK to π and its intriguing history. If you’re short on time, here’s a one-sentence summary: the story of π is the deeply ironic tale of one thinker after another trying to nail down the size of a number that is fundamentally immeasurable. (Because it’s irrational.)
The number we now call π has fascinated people for millennia—its study is said to be math’s oldest research topic. The main reason, of course, is that it’s very handy for calculating the circumferences of circles. For instance, if you wanted to know how long to make a metal band to go around a chariot wheel, you could simply measure the wheel’s diameter with a ruler and multiply that length by π.
We don’t know who first realized that a number slightly larger than three can be multiplied by the diameter of any circle, no matter the size, to yield its circumference. But knowledge of this fact goes back at least to the time of the ancient Egyptians and Babylonians some 4,000 years ago. Using the Greek letter π to represent the number, however, didn’t become standard in math until the mid-1700s, when—who else?—Euler put his imprimatur on that symbol for it.
After it dawned on people that a single number was universally applicable to circle-related calculations, it probably wasn’t long before they began trying to express it as a ratio of two integers—a fraction.* Thus began the long quest to pin down the value of π.
Since it’s impossible to express an irrational number such as π as a fraction, the quest for a fraction equal to π could never be successful. Ancient mathematicians didn’t know that, however. As noted above, it wasn’t until the eighteenth century that the irrationality of π was demonstrated. Their labors weren’t in vain, though. While enthusiastically pursuing their fundamentally doomed enterprise, they developed a lot of interesting mathematics as well as impressively accurate approximations of π.
The ancient Greek mathematician Archimedes came up with one of the best early approximations by using regular polygons (many-sided figures with equal-length sides, such as stop-sign octagons) that had so many sides they were nearly circular. Calculating a many-sided regular polygon’s perimeter and then dividing it by the length of a diameter-like line through its center yields an approximation of π. Employing polygons with 96 sides, Archimedes showed via this method that π is very slightly less than 22/7, a number that was long mistaken by lesser sages for its exact value.
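Here is a minimal Python sketch of the polygon idea. It is not a reconstruction of Archimedes’ actual procedure (he bounded π from both sides, using circumscribed as well as inscribed polygons, and did all the arithmetic by hand); it simply starts from an inscribed hexagon and repeatedly doubles the number of sides using the chord-halving rule s_new = √(2 − √(4 − s²)):

```python
# Estimate pi as (perimeter of an inscribed regular polygon) / (diameter),
# doubling the number of sides from a hexagon without ever using pi itself.
import math

def inscribed_polygon_estimate(doublings):
    n, s = 6, 1.0                                 # hexagon in a circle of diameter 2
    for _ in range(doublings):
        s = math.sqrt(2 - math.sqrt(4 - s * s))   # side length after doubling
        n *= 2
    return n * s / 2                              # perimeter / diameter

print(inscribed_polygon_estimate(4))   # 96 sides -> 3.1410319..., a lower bound
print(22 / 7)                          # 3.1428571..., Archimedes' upper bound
```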
Chinese mathematician Zu Chongzhi topped Archimedes in the fifth century by approximating π as 355/113—this fraction’s decimal equivalent is good to an accuracy of six decimals. It’s not clear how he arrived at this remarkably accurate approximation, but historians think he based it on calculations involving a hypothetical polygon with 24,576 sides. In any case, his computations must have been very meticulous, to say the least. It took mathematicians about a millennium to find a more accurate approximation.
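For the curious, a quick check of just how good 355/113 is (a modern decimal comparison, nothing to do with how Zu Chongzhi actually worked):

```python
# 355/113 agrees with pi through the sixth decimal place.
import math

approx = 355 / 113
print(approx)                  # 3.1415929203539825
print(math.pi)                 # 3.141592653589793
print(abs(approx - math.pi))   # about 2.7e-07
```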
During the 1600s, π chasers abandoned the polygon approach in favor of using infinite sums like the one found by Gregory and Leibniz. Mathematicians eventually discovered a splendid array of infinite sums that could be used to calculate approximations of π. Some are far superior to the Gregory-Leibniz formula because fewer of their terms need to be summed to derive a close approximation. One of the most elegant, amazingly simple ones was established by Euler when he was in his late 20s:
π²/6 = 1/1² + 1/2² + 1/3² + 1/4² + ….
This formula is even more shocking than the Gregory-Leibniz one—it reveals a startling connection between π and the positive integers, 1, 2, 3,… . Before Euler derived it, several mathematicians, including Leibniz, had tried unsuccessfully to figure out what all those similar fractions add up to; the problem was first posed in 1644 by Italian mathematician Pietro Mengoli. If you start adding them from the left, you’ll soon find that the total sum seems to lie between 1 and 2. If you add up enough of them, you can ascertain that the sum lies in the vicinity of 1.64. But eighteenth-century mathematicians weren’t satisfied with such approximations—they wanted to know exactly what number the sum equals.
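Here is a small Python sketch of those partial sums (the cutoff of ten thousand terms is an arbitrary choice); they crawl past 1.644 but approach the exact value only very slowly:

```python
# Partial sums of 1/1**2 + 1/2**2 + 1/3**2 + ... versus the exact answer pi**2/6.
import math

total = 0.0
for n in range(1, 10_001):      # ten thousand terms, an arbitrary cutoff
    total += 1 / n ** 2

print(total)                    # 1.6448340718...
print(math.pi ** 2 / 6)         # 1.6449340668482264
```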
This puzzle, known as the Basel problem (named after the Swiss city that was Euler’s hometown), was considered one of the most significant questions in math at the time. Thus, when the young Euler demonstrated to everyone’s astonishment that the mystery number is exactly π²/6, he catapulted himself to international fame. He also provided truly stunning evidence of the uncanny ability of π to sneak through windows and down chimneys.*
Even before Euler came along, the quest for better approximations of π had ceased to matter for improving real-world calculations and had morphed into a competition for bragging rights. By the early 1600s mathematicians had managed to crank out an approximation of π that was accurate to 35 digits, which is far more than needed for any earthly purpose. With only 39 digits, you could calculate the circumference of the observable universe to within the diameter of a hydrogen atom.
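The claim about 39 digits is easy to sanity-check with round numbers. Taking the observable universe to be very roughly 8.8 × 10^26 meters across and a hydrogen atom to be roughly 10^−10 meters across (both figures are rough assumptions for illustration, not taken from the text), an error of 10^−39 in π throws the computed circumference off by far less than one atom’s width:

```python
# Back-of-envelope check: an error of 1e-39 in pi, scaled up by the size of
# the observable universe, is still much smaller than a hydrogen atom.
universe_diameter = 8.8e26    # meters; rough figure, assumed for illustration
hydrogen_diameter = 1e-10     # meters; rough figure, assumed for illustration

circumference_error = universe_diameter * 1e-39   # error from a 39-digit pi
print(circumference_error)                        # about 8.8e-13 meters
print(circumference_error < hydrogen_diameter)    # True
```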
One of the most determined π chasers in the nineteenth century was amateur British mathematician William Shanks, who won minor fame by calculating the first 707 digits of π in 1873. A boarding-school owner with ample spare time to pursue his hobby, he reportedly spent many of his mornings calculating digits of π and his afternoons double-checking his morning’s work. After nearly 20 years of work he recorded the 707th digit and moved on to other calculations. Later he proudly commented, “Whether any other mathematician will appear, possessing sufficient leisure, patience, and facility of computation, to calculate the value [of π] to a still greater extent, remains to be seen.”
Poor old Bill. In 1944 one of Shanks’s successors in the π chase discovered that he’d made a mistake on the 528th digit, which meant that all his later ones were wrong too—more than a fourth of his effort, representing years of work, was wasted. Now he’s largely remembered for the goof.
With the help of computers, modern π hunters have taken the quest to truly fantastic levels of accuracy. One of the more notable records was set in 1996 when two extraordinarily clever brothers living in New York City, Gregory and David Chudnovsky, put together a home-brew supercomputer in their Manhattan apartment and used it to churn out nearly nine billion digits of π. At this writing, the π-obsessed tribe has reportedly managed to pin down several trillion digits. Which goes to show that once you get hooked on something that’s infinite, you just can’t stop.
* This simile was suggested by Bill Peet’s delightful children’s classic, How Droofus the Dragon Lost His Head.
* Approximating π as a decimal number such as 3.14159 was still far in the future during the time of the ancients. Our modern decimal representation of numbers, which was originated in India and transmitted to the West by Arab mathematicians, was established in Europe during the sixteenth century.
* Please pause for a moment to ponder these questions: What do circles, implicitly alluded to in the formula by π, have to do with the counting numbers that every schoolchild knows? How do we reconcile π’s infinite numerical randomness as an irrational number with the infinite sum’s perfectly regular pattern? (π² is also an irrational number, as is one-sixth of π².) After poring over how Euler solved the Basel problem, I still can’t think of satisfying answers to these puzzles—that is, explanations that are intuitively compelling enough to force themselves on me in the way that the truth of 2 + 2 = 4 does. This isn’t to say that I feel truly mystified—Euler’s astoundingly clever solution makes a lot of sense. But I find myself experiencing a kind of chronic surprise about his result, coupled with the sensation that I must be missing something fundamental about π despite having read a lot about it.