CHAPTER NINE

The Computer Age: From the Mind to the Desktop

I will offer you a bet. When the Riemann Hypothesis is proved it will be proved without the use of a computer.

Gerhard Frey, inventor of the crucial link between Fermat’s Last Theorem and elliptic curves

Once they’ve left school, most people’s sole encounter with prime numbers, if they have one at all, is via perennial news items about large computers making the latest discovery of the biggest known prime number. Julia Robinson’s treasured FINDS LARGEST NUMBER newspaper cutting illustrates that, as early as the 1930s, even incorrect discoveries were making the news. Thanks to Euclid’s proof that there are an infinite number of primes, this is a news story that will run and run. By the end of the Second World War, the largest known prime number was one with thirty-nine digits that had held the record since its discovery in 1876. Today, the record prime has more than a million digits. The number would take more pages than this book to print and several months to read aloud. It is the computer that has allowed us to reach such heady heights. In Bletchley, Turing had already been thinking about how to use his machines to find record-breaking prime numbers.

Although Turing’s theoretical universal machine was blessed with an infinite amount of memory in which to store information, the machines that he and Newman built in Manchester after the war were very limited in terms of what they could remember. They could only carry out calculations which didn’t require too much memory. For example, all it requires to generate the Fibonacci sequence (1, 1, 2, 3, 5, 8, 13, …) is to remember the previous two numbers in the sequence, and their computers had no trouble with such a simple sequence. Turing knew of a clever trick that had been developed by the younger member of the Lehmer family duo to find the special primes made famous by the seventeenth-century monk Marin Mersenne. Turing realised that, as with the Fibonacci numbers, Lehmer’s test wouldn’t need a lot of memory. Searching for Mersenne’s primes would be the perfect task for the machines Turing was conceiving.

Mersenne had hit on the idea of generating prime numbers by multiplying 2 by itself many times and then subtracting 1. For example, 2 × 2 × 2 − 1 = 7 is a prime number. He saw that if 2^n − 1 was going to stand a chance of being prime, then he would have to choose n to be prime. Yet, as he discovered, this doesn’t guarantee that 2^n − 1 is prime. 2^11 − 1 is not a prime, although 11 is. Mersenne had made the prediction that

2, 3, 5, 7, 13, 17, 19, 31, 67, 127, 257

were the only choices of n up to 257 that would make 2^n − 1 a prime number.
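Both observations are easy to check for small exponents: a composite exponent n = a × b never yields a prime, because 2^a − 1 always divides 2^n − 1, while a prime exponent is no guarantee. Here is a minimal sketch in Python (the trial-division helper is purely illustrative, not anything Mersenne or his contemporaries used):

```python
def smallest_factor(m):
    """Return the smallest factor of m greater than 1, by trial division."""
    d = 2
    while d * d <= m:
        if m % d == 0:
            return d
        d += 1
    return m  # no smaller factor exists, so m is prime


for n in [2, 3, 4, 5, 6, 7, 11, 13]:
    m = 2**n - 1
    f = smallest_factor(m)
    verdict = "prime" if f == m else f"divisible by {f}"
    print(f"2^{n} - 1 = {m}: {verdict}")
# The composite exponents 4 and 6 fail, as they must, but so does the
# prime exponent 11: 2^11 - 1 = 2047 = 23 x 89.
```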

A number the size of 2^257 − 1 is so huge that the human mind could not possibly check Mersenne’s claim. Perhaps that’s why he felt safe in making his bold statement. He had believed that ‘all time would not suffice to determine whether they are prime’. His choice of numbers had been driven by Euclid’s proof of the infinity of primes. Take a number like 2^n that is divisible by lots of numbers, and then shift it by 1 in the hope of making it indivisible.

Although it didn’t guarantee primes, Mersenne’s intuition about his numbers was right in one respect. Because Mersenne’s numbers were close to this highly divisible number 2^n, there is a very efficient way to check whether such numbers are prime. The method was devised in 1876, when the French mathematician Édouard Lucas discovered how to confirm that Mersenne was correct about 2^127 − 1 being prime. This thirty-nine-digit prime remained the largest known until the dawn of the computer age. Armed with his new method, Lucas succeeded in revealing Mersenne’s list of ‘primes’ for what they really were. The monk’s own list of those numbers n for which 2^n − 1 is prime turned out to be well off the mark: he’d missed out 61, 89 and 107, and mistakenly included 67. But 2^257 − 1 was still beyond Lucas’s reach.

Mersenne’s mystical insight turned out to be blind guesswork. His reputation might have taken a knock, but his name lives on as king of the big primes. The record primes that make the news are invariably one of Mersenne’s numbers. Although Lucas could confirm that 2^67 − 1 wasn’t prime, his method couldn’t crack this number into its prime building blocks. As we shall see, cracking such numbers is considered such a tough problem that it is now at the heart of present-day cryptographic security systems, the successors to the Enigma code Turing cracked with his Bletchley bombes.

Turing wasn’t the only one thinking about primes and computers. As Robinson had discovered as a child listening to the radio, the Lehmer family had also been fascinated by the idea of using machines to explore prime numbers. The elder Lehmer had already, at the turn of the century, produced a table of primes up to 10,017,000. (No one has since published a table of primes beyond this.) His son had made a more theoretical contribution. In 1930, aged only twenty-five, he discovered a refinement of Lucas’s idea for testing whether Mersenne’s numbers were prime.

To prove that a Mersenne number is prime and not divisible by any smaller number, Lehmer showed that you could turn the problem on its head. The Mersenne number 2^n − 1 will be prime precisely when 2^n − 1 divides a second number, called a Lucas–Lehmer number and denoted by L_n. These numbers are built up, as are the Fibonacci numbers, from the previous numbers in the sequence. To get L_n you square the previous number, L_{n−1}, and subtract 2:

L_n = (L_{n−1})^2 − 2

The test gets going when n = 3 and the corresponding Lucas–Lehmer number is L_3 = 14. From there the sequence continues with L_4 = 194 and L_5 = 37,634. What gives this test its power is that all you need to do is generate the number L_n and test whether the Mersenne number 2^n − 1 divides this number, a computationally easy task. For example, since 2^5 − 1 = 31 divides the Lucas–Lehmer number L_5 = 37,634, Mersenne’s number 2^5 − 1 is prime. This simple test allowed Lehmer to finish off Mersenne’s list and prove that Mersenne had been wrong about 2^257 − 1 being prime.
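The test is short enough to write out in full. The sketch below follows the numbering used above, starting from L_3 = 14 and squaring-and-subtracting up to L_n; the one practical refinement, reducing each term modulo 2^n − 1 as you go (a standard trick, not spelled out here), changes nothing about the divisibility check but keeps every intermediate value smaller than 2^n − 1:

```python
def mersenne_is_prime(n):
    """Lucas-Lehmer test for 2^n - 1 (n an odd prime), using the numbering
    above: L_3 = 14 and L_k = L_{k-1}^2 - 2. The Mersenne number is prime
    exactly when it divides L_n."""
    m = 2**n - 1
    L = 14 % m                 # L_3, reduced modulo 2^n - 1
    for _ in range(n - 3):     # climb from L_3 up to L_n
        L = (L * L - 2) % m    # square, subtract 2, keep only the remainder
    return L == 0              # remainder 0 means 2^n - 1 divides L_n


# Mersenne's exponents alongside the ones he overlooked:
for n in [3, 5, 7, 11, 13, 17, 19, 31, 61, 67, 89, 107, 127, 257]:
    print(n, mersenne_is_prime(n))
# 61, 89 and 107 pass; 11, 67 and 257 fail, as Lucas and Lehmer established.
```

Without the modular reduction the numbers explode (L_257 alone has more than 10^76 digits); with it, nothing ever grows beyond the 78-digit number 2^257 − 1.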

How did Lucas and Lehmer discover their test for Mersenne primes? It’s not the most obvious idea to come to mind. This sort of discovery is very different to the thunderbolt discovery of the Riemann Hypothesis or Gauss’s discovery of a connection between primes and logarithms. The Lucas–Lehmer test is not a pattern that will emerge through experiment or numerical observation. They discovered this by playing around with what it means for 2^n − 1 to be prime, continually turning the statement like a Rubik’s cube until suddenly the colours come together in a new way. Each turn will be like a step in the proof. Unlike other theorems where the destination is clear from the outset, the Lucas–Lehmer test ultimately emerged by following the proof without quite knowing where it was going. Lucas had begun turning the cube but Lehmer successfully brought it into the simple form used today.

While he was cracking the German Enigma codes in Bletchley, Turing discussed with his colleagues the potential for machines, similar to the bombes they had built, to find large prime numbers. Thanks to the method developed by Lucas and Lehmer, Mersenne’s numbers are particularly amenable to having their primality checked. The method was perfectly suited to being automated on a computer, but the pressures of the war effort soon pushed Turing’s ideas onto the back burner. After the war, though, Turing and Newman could return to the idea of finding more Mersenne primes. It would be a perfect test of the machine they were proposing to build at the research laboratory in Manchester. Despite the machine’s tiny storage capacity, the Lucas–Lehmer method for determining primes would not require too much memory at each stage. To calculate the nth Lucas–Lehmer number, the computer just had to remember what the (n − 1)th number had been.

Turing had been unlucky with Riemann’s zeros, and his fortune didn’t change when he turned his attention to finding Mersenne primes. His computer in Manchester failed to surpass the seventy-year-old record prime, 2^127 − 1. The next Mersenne prime wouldn’t appear until 2^521 − 1, which was just beyond the reach of Turing’s machine. By a strange twist of fate it was Julia Robinson’s husband, Raphael, who claimed the discovery of this new record prime. He had somehow obtained the manual of a machine that Derrick Lehmer had built in Los Angeles. By this time, Lehmer had moved on from the bicycle gears and chains of his pre-war machine. He was now director of the National Bureau of Standards’ Institute for Numerical Analysis and had created a machine called the Standards Western Automatic Computer (SWAC). From the comfort of his office in Berkeley, and never having set eyes on the machine, Raphael wrote a program that would run on the SWAC to hunt for Mersenne primes. On January 30, 1952, the computer discovered the first prime numbers beyond the reach of the computational abilities of the human mind. Just a few hours after the record 2^521 − 1 had been set, SWAC spat out a new big prime, 2^607 − 1. Within the year, Raphael Robinson had broken his record another three times. The biggest prime number was now 2^2,281 − 1.

The hunt for record primes came to be dominated by those with access to the biggest computers. Through to the mid-1990s, new records were set using Cray computers, the giants of the computing world. Cray Research, founded in 1972, had exploited the fact that a computer doesn’t have to finish one operation before it starts the next. This simple idea was what lay behind the creation of machines acknowledged for decades to be the fastest calculators in the world. Since the 1980s, the Cray machine based at the Lawrence Livermore Laboratory in California, under the watchful eye of Paul Gage and David Slowinski, had been grabbing the records and headlines. In 1996 they announced their seventh record-breaking prime, 2^1,257,787 − 1, a number with 378,632 digits.

Recently, though, the tide has turned in favour of the little guys. Like David conquering Goliath, the records are now falling to humble desktop computers. And the catapult that gives them the power to challenge the Cray computers? The Internet. With the combined strength of countless small computers networked together, the potential is there to set this ant colony of machines hunting for big primes. This is not the first time the Internet has been used to empower the amateur in doing real science. Astronomy has benefited greatly from assigning to thousands of amateur astronomers their own small piece of the night sky to scour, with the Internet providing the network to coordinate this astronomical effort. Inspired by the success of the astronomers, an American programmer, George Woltman, published a piece of software on the Internet which, once downloaded, assigns your desktop a tiny part of the infinite expanse of numbers. Instead of training their telescopes on the night sky looking for a new supernova, amateur scientists are using their computers’ idle time to scan corners of the galaxy of numbers for new record primes.

The search was not without its dangers. One of Woltman’s recruits worked for a US telephone company and had enlisted the help of 2,585 of the company’s computers in his search for Mersenne primes. The company grew suspicious that something was amiss when the computers in Phoenix were taking five minutes as opposed to five seconds to retrieve telephone numbers. When the FBI eventually tracked the source of the slowdown, the employee admitted that ‘All that computational power was just too tempting for me.’ The telephone company showed little sympathy with their employee’s pursuit of science, and fired him.

The first discovery of a new Mersenne prime by this band of Internet hunters came a few months after the Cray announcement in 1996. Joel Armengaud, a Paris-based computer programmer, struck gold in the small seam of numbers he had been mining for Woltman’s project. For the media, the discovery came a little too soon after the previous find. When I contacted The Times about the latest biggest prime, they told me they would run this story only every other year. Slowinski and Gage, the Cray twins, had matched supply to demand with discoveries coming on average every two years since 1979.

But there was more to all of this than the discovery of new primes. It marked a turning point in the role of the computer in the search for primes, and it wasn’t missed by the Internet-savvy Wired magazine. Wired ran a story about what is now known as the Great Internet Mersenne Prime Search, or GIMPS. Woltman has successfully enrolled over two hundred thousand computers from all around the world to create what is in effect one huge parallel-processing machine. Not that the big guns with Crays are out of business. They are now equal partners, checking the discoveries of the little guys.

By 2002 there had been five lucky winners in the search for Mersenne primes. The Paris find was followed by one in England and a third in California. But it was Nayan Hajratwala from Plymouth, Michigan, who truly struck gold in June 1999. His prime, 2^6,972,593 − 1, contains 2,098,960 digits and was the first to pass the one-million-digit milestone. A symbolic milestone in itself, the achievement also won him a cash prize of $50,000 from the Electronic Frontier Foundation. This Californian organisation is the self-styled protector of the civil liberties of Netizens – those who use the Internet. If your appetite has been whetted by Hajratwala’s success, the Foundation still has half a million dollars in prize money to reward the discoverers of more big primes, with the next milestone set at 10 million digits. Hajratwala’s prime was beaten in November 2001 by Canadian student Michael Cameron, who used his PC to prove the primality of 2^13,466,917 − 1, a number with over 4 million digits. Mathematicians believe that there are infinitely many of these special Mersenne primes waiting to be discovered.
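The digit counts quoted in this story are easy to recover: since 2^n is never a power of 10, the Mersenne number 2^n − 1 has exactly floor(n × log10 2) + 1 decimal digits. A quick check in Python for the three record exponents mentioned above:

```python
from math import log10

# Decimal digit count of 2^n - 1: floor(n * log10(2)) + 1
# (2^n is never a power of 10, so 2^n - 1 has as many digits as 2^n).
for n in [1_257_787, 6_972_593, 13_466_917]:
    print(f"2^{n} - 1 has {int(n * log10(2)) + 1} digits")
# 378,632 digits, 2,098,960 digits and 4,053,946 digits respectively.
```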

The computer – the death of mathematics?

If the computer can out-count us, doesn’t that make the mathematician redundant? Fortunately, it doesn’t. Rather than heralding the end of mathematics, it highlights the true difference between the mathematician as creative artist and the computer as tedious calculator. The computer is certainly a powerful new ally in mathematicians’ attempts to navigate their world, and a sturdy sherpa in our ascent of Mount Riemann, but it can never replace the mathematician. Although the computer can outstrip the mathematician in any finite computation, it lacks the imagination (as yet) to embrace an infinite picture and unmask the structure and patterns underlying mathematics.

For example, does the computer search for big primes give us any better understanding of the primes? We might be able to sing higher and higher notes, but the music remains hidden. Euclid has already assured us that there will always be a bigger prime to find. It isn’t known, though, whether Mersenne’s special numbers will produce primes infinitely often. It may be that Michael Cameron has discovered the thirty-ninth and final Mersenne prime. When I talked to Paul Erdos, he ranked the task of proving that there are infinitely many Mersenne primes as one of the greatest unsolved problems of number theory. It is generally believed that there are infinitely many choices of n that make 2^n − 1 prime. But it is very unlikely that a computer will prove it.

That’s not to say that computers can’t prove things. Given a set of axioms and rules of deduction, you can program a computer to start churning out mathematical theorems. The point is that, like the monkey on the typewriter, the computer won’t be able to distinguish Gaussian theorems from primary-school sums. The human mathematician has developed the critical faculties to differentiate between theorems which are important and those which aren’t. The aesthetic sensibilities of the mathematical mind are tuned to appreciate proofs that are beautiful compositions and shun proofs which are ugly. Although the ugly proof is just as valid, elegance has always been recognised as an important criterion in charting the best course through the mathematical world.

The first successful use of a computer to prove a theorem was in connection with a challenge called the Four-Colour Problem, which began as an amateur curiosity. The problem is about something we probably all discovered as kids: if you want to colour a map so that no two neighbouring countries have the same colour, you can always get away with just four colours. Despite the most creative redrawing of national boundaries it seems you just can’t force the map of Europe to need any more colours. The current boundaries of France, Germany, Belgium and Luxembourg prove that you do need at least four colours.

But could you prove that four colours would be enough for any map?

The question was first aired in 1852 when a law student, Francis Guthrie, wrote to his brother, a mathematician at University College, London, asking if anyone had proved that four colours would always suffice. Admittedly it was not then a question that many people felt was important. A number of minor mathematicians tried their hand at providing Guthrie with a proof. But, as a proof remained elusive, the problem gradually worked its way up the ladder of mathematical ability. Even Hilbert’s best friend in Göttingen, Hermann Minkowski, had his fingers burnt by it. The question of the Four-Colour Problem arose in a lecture course he was giving. ‘This theorem has not yet been proved but that is because only mathematicians of the third rank have occupied themselves with it,’ he announced. ‘I believe I can prove it.’ He spent several lectures battling with his ideas on the blackboard. One morning, as he entered the lecture hall, there was a clap of thunder. ‘Heaven is angered by my arrogance,’ he conceded. ‘My proof is defective.’

The more people tried and failed, the greater the problem grew in stature, especially as it was so simple to pose. It resisted all attempts to prove it until 1976, over a hundred years after Guthrie’s letter to his brother. Two mathematicians, Kenneth Appel and Wolfgang Haken at the University of Illinois, showed that instead of the impossible task of colouring the infinite number of all possible maps, the problem could be reduced to considering 1,500 different basic maps. This was a crucial breakthrough. It was like the discovery of a cartographic Periodic Table of elementary maps from which all others could be built. But checking each of these ‘atomic’ maps by hand would mean that even if they had started back in 1976, Appel and Haken would still be colouring away today. Instead, a computer was used for the first time to complete the proof. It took 1,200 hours of computing time, but eventually it came back with the answer that, yes, every map can be coloured with four colours. The combination of human ingenuity to prove that you only needed to consider these 1,500 basic maps to understand all maps, coupled with the brute force of the computer, confirmed what Guthrie had conjectured in 1852: that any map needs only four colours.

Knowing that the Four-Colour Theorem is true is of no practical use. Cartographers did not breathe a collective sigh of relief on hearing the news that they wouldn’t need to go out and buy a fifth pot of paint. Mathematicians were not eagerly awaiting confirmation of the result before they could explore beyond this problem. They simply couldn’t see much on the other side that invited much investigation. This wasn’t the Riemann Hypothesis, with thousands of results depending on its proof. The significance of the Four-Colour Problem was that it illustrated that we still didn’t understand two-dimensional space sufficiently to be able to answer it. For as long as it remained unsolved, it spurred mathematicians to seek a deeper understanding of the space around us. That is why many were unsatisfied by Appel and Haken’s proof. The computer had given us an answer but it had not deepened our understanding.

Whether Appel and Haken’s computer-aided proof of the Four-Colour Problem really captures the true spirit of ‘proof’ has been a matter of hot debate. Many felt unsettled by the computer’s role, even if most knew that the proof was more likely to be correct than many human-generated proofs. Shouldn’t a proof offer understanding? ‘A mathematical proof should resemble a simple and clear-cut constellation, not a scattered Milky Way’ was how Hardy liked to express it. The computer proof of the Four-Colour Problem resorted to laboriously mapping out the chaos of the heavens rather than offering a deep understanding of why the heavens look the way they do.

The computer-aided proof highlighted that the pleasure of mathematics comes not just from the end result. We don’t read mathematical mystery stories just to find whodunnit. The pleasure comes from seeing how the convolutions of the plot unfold themselves in the build-up to the moment of revelation. Appel and Haken’s proof of the Four-Colour Problem had deprived us of the ‘Aha – now I get it!’ feeling that we crave when we read mathematics. We like to share the eureka moment that the creator of the proof first experienced. Debate will rage for decades over whether computers will ever feel emotions, but the proof of the Four-Colour Problem certainly didn’t offer us the chance to share any exhilaration the computer might have felt.

Aesthetic sensibilities notwithstanding, the computer has gone on to serve the mathematical community in proving theorems. Whenever a problem has been reduced to checking a finite number of things, a computer can and does help. Can the computer therefore help us in our ascent on the Riemann Hypothesis? By the time of Hardy’s death at the end of the Second World War there was some suspicion that the Riemann Hypothesis might be false. As Turing recognised, if it is false, then a computer can help. A machine can be programmed to search for zeros until it finds a zero off the line. But if the Hypothesis is true, then the computer is useless in actually proving that this infinite number of zeros are all on the line. The best it can do is to provide more and more evidence to support our faith in Riemann’s hunch.

The computer also satisfied another need. By the time of Hardy’s death, mathematicians were rather stuck. The theoretical advances on the Riemann Hypothesis had dried up. It seemed that, given the available techniques, Hardy, Littlewood and Selberg had obtained the best results possible about the location of the points at sea level in Riemann’s landscape. They had squeezed these techniques for all they were worth. Most mathematicians agreed that new ideas were needed if they were to get any closer to proving the Riemann Hypothesis. In the absence of new ideas the computer gave the impression of progress. But it was just an impression – the participation of the computer was covering up the distinct lack of progress then being made with the Riemann Hypothesis. Computing became a substitute for thought, chewing gum for the mind that lulled us into thinking that we were really doing something when in fact we were facing a brick wall.

Zagier, the mathematical musketeer

The secret formula that Siegel had discovered in 1932 amongst Riemann’s unpublished notes was a formula for accurately and efficiently calculating the location of zeros in Riemann’s landscape. Turing had tried to speed up the calculation using his complex system of cogs, but it has taken more modern machines to realise the formula’s full potential. Once this secret formula was programmed into a computer, one could start exploring this landscape in regions that were previously unimaginable. In the 1960s, as humankind began to probe the distant universe with unmanned spacecraft, mathematicians began to send computers calculating their way into the outer reaches of Riemann’s landscape.

The farther the mathematicians went north in search of zeros, the more evidence they gathered. But what use is that evidence? How many zeros did you have to show were on the line before you could feel confident that the Riemann Hypothesis was true? The trouble is that, as Littlewood’s work proved, in mathematics evidence is rarely grounds for confidence. That is why many dismissed the computer as a useful instrument to probe the Riemann Hypothesis. However, there was a surprise in store that would begin to convince even the most diehard sceptics that there was a good chance the Riemann Hypothesis was true.

At the beginning of the 1970s, one mathematician stood at the head of this small band of sceptics. Don Zagier is one of the most energetic mathematicians on today’s mathematical circuit, cutting a dashing figure as he sweeps through the corridors of the Max Planck Institute for Mathematics in Bonn, Germany’s answer to the Institute for Advanced Study in Princeton. Like some mathematical musketeer, Zagier flourishes his razor-sharp intellect, ready to slay any passing problem. His enthusiasm and energy for the subject whisk you away in a whirlwind of ideas delivered in a rat-a-tat-tat voice and at a speed that leaves you breathless. He has a playful approach to his subject and is always ready with a mathematical puzzle to spice up lunch at the Institute in Bonn.

Zagier had become exasperated by some people’s desire to believe in the Riemann Hypothesis on purely aesthetic grounds, ignoring the dearth of real evidence to support it. Faith in the Hypothesis was probably based on little more than a reverence for simplicity and beauty in mathematics. A zero off the line would be a blot on this beautiful landscape. Each zero contributed a note to the prime number music. Enrico Bombieri expresses what it would mean if the Riemann Hypothesis turns out to be false: ‘Say you go to a concert and you are listening to the musicians playing together in a very harmonious way. Then suddenly there is a big tuba which plays with an extremely strong sound which drowns all the rest.’ So much beauty abounds in the mathematical world that we cannot – dare not – believe that Nature would choose a discordant universe in which the Riemann Hypothesis was false.

Whereas Zagier was the leading sceptic at this point, Bombieri represented the archetypal believer in the Riemann Hypothesis. In the early 1970s he had yet to move to the Institute in Princeton, and was still a professor in his native Italy. As Zagier explained, ‘Bombieri believes as an absolute article of faith that the Riemann Hypothesis is true. It is a religious belief that it has to be true or else the whole world would be wrong if it weren’t.’ Indeed, as Bombieri elaborated, ‘When I was in the eleventh grade I studied several of the medieval philosophers. One of them, William of Occam, elevated the idea that when one must choose between two explanations, one should always choose the simpler. Occam’s razor, as the principle is called, cuts out the difficult and chooses the simple.’ For Bombieri, a zero off the line would be like an instrument in the orchestra ‘that drowns out the others – an aesthetically distasteful situation. As a follower of William of Occam, I reject that conclusion, and so I accept the truth of the Riemann Hypothesis.’

Things came to a head when Bombieri visited the Institute in Bonn, and discussion over tea turned to the Riemann Hypothesis. Zagier, ever the mathematical swashbuckler, had his chance to challenge Bombieri to a duel. ‘I told him over tea that there wasn’t enough evidence yet to convince me either way. So I would be willing to take an evens money bet against it. Not that I thought it was necessarily false, but I was willing to play devil’s advocate.’

Bombieri responded, ‘Well, I’ll certainly be prepared to take you up on that,’ and Zagier realised that he was stupid to offer evens – Bombieri was such a firm believer he would have taken odds of a billion to one. The stakes were agreed upon: two bottles of very good Bordeaux to be chosen by the winner.

‘We wanted the bet to be settled in our lifetime,’ Zagier explains. ‘However, there was a very good chance we would go to our graves with the bet still open. We didn’t want to put a time limit on it so that if ten years passed then we would drop it. That seemed silly. What does ten years have to do with the Riemann Hypothesis? We wanted something mathematical.’

So Zagier offered the following. Although Turing’s machine had broken down after calculating the first 1,104 zeros, by 1956 Derrick Lehmer had been more successful. He had got his machines in California to check that the first 25,000 zeros were on the line. By the beginning of the 1970s, a famous calculation confirmed that the first three and a half million zeros were indeed on the line. The proof was a fantastic tour de force which exploited some brilliant theoretical techniques to push the computation to the absolute limits of the computer technology available at the time. As Zagier explains:

So I said OK, right now three million zeros have been computed, I’m not yet convinced, even though most people would say what do you want … blood … my god, three million zeros. Most people would say what’s going to change, what’s the difference three million … three trillion. That’s the whole point of what I’m telling you. That’s not the case. At three million I still wasn’t convinced. I wish I’d made the bet a bit earlier because I was already beginning to be convinced. I wish I’d made the bet at a hundred thousand because at that point there was absolutely no reason to believe the Riemann Hypothesis. It turns out that when you analyse the data a hundred thousand zeros is totally useless. It’s essentially zero evidence. Three million is beginning to be interesting.

But Zagier recognised that 300 million zeros represented an important watershed. There were theoretical reasons why the first several thousand zeros had to be on Riemann’s ley line. However, as one advanced farther north, the reasons why early zeros had to be on Riemann’s line began to be outweighed by even stronger reasons why zeros should start falling off the line. By 300 million zeros, Zagier realised, it would be a miracle if zeros weren’t pushed off the line.

Zagier based his analysis on a graph which he knew kept track of the behaviour of the gradient in the hills and valleys along Riemann’s ley line. Zagier’s graph represented a new perspective from which to view the cross-section of Riemann’s landscape along the critical line. What was interesting was that it facilitated a new interpretation of the Riemann Hypothesis. If this new graph ever crosses Riemann’s critical line, there has to be a zero off Riemann’s line in this region, making the Riemann Hypothesis false. To begin with, the graph is nowhere near the critical line, and in fact climbs away. But as one marches farther and farther north the graph starts coming down, edging towards the critical line. Every now and again Zagier’s graph attempts to crash through the critical line, but, as the figure opposite illustrates, something seems to be preventing it from crossing.

So, the farther north one advances, the more likely it seems that this graph might cross the critical line. Zagier knew that the first real weakness was around the 300-millionth zero. This region of the critical line would be the real test. By the time you had gone this far north, if the graph still did not cross the critical line, then there really had to be a reason why it didn’t. And that reason, Zagier reasoned, would be that the Riemann Hypothesis was true. And that is why Zagier set the threshold for the bet at 300 million zeros. Bombieri would win the bet either if a proof was found or if 300 million zeros were calculated and no counter-example was found.

Zagier could see that the computers of the 1970s were still far too weak to explore this region of Riemann’s ley line. Computers had managed to calculate three and a half million zeros. Zagier estimated, considering the growth of computer technology at the time, that it might be as long as thirty years before 300 million zeros could be calculated. But he hadn’t counted on the computer revolution that was just around the corner.

For about five years nothing happened. Computers grew slowly more powerful, but to compute even twice as many zeros let alone a hundred times as many would have required such a huge amount of work that nobody bothered. After all, in this business there was no point expending huge amounts of energy on merely doubling the amount of evidence. But then, about five years later, computers suddenly got a lot faster, and two teams took up the challenge of exploiting this new power to calculate more zeros. One team was in Amsterdam, run by Herman te Riele, and the second was in Australia, led by Richard Brent.

Brent was the first to make an announcement, in 1978, that the first 75 million zeros were still on the line. The team in Amsterdam then joined forces with Brent, and after a year’s work they produced a big paper, carefully written up and beautifully presented, everything finished and in place. How far had they calculated … 200 million! Zagier laughs:

I breathed a sigh of relief because this was such a huge project. Thank God they had luckily stopped at 200 million. Obviously they could have gone to 300 million, but thank God they didn’t. Now I’ll have a reprieve of many years. They won’t go on just another 50 per cent. People will wait until they can go to a billion. So it’s going to be many years. Unfortunately I didn’t count on my good friend Hendrik Lenstra, who knew about the bet and was in Amsterdam.

Lenstra went to te Riele and asked him, ‘Why did you people stop at 200 million? You know if you go to 300 million Don Zagier will lose a bet.’ So the team went on to 300 million. Naturally they didn’t find a zero off the line, so Zagier had to pay up. He took the two bottles to Bombieri, who shared the first with Zagier. As Zagier likes to point out, that was probably the most expensive bottle ever drunk, because

two hundred million had nothing to do with my bet. They’d done that independently. But the last hundred million, they did that computation at that time only because they’d heard about my bet. It took about a thousand hours of CPU time approximately to calculate the extra 100 million. The going price for an hour of CPU time at the time was $700. Since they did this computation solely that I should lose the bet and have to pay up my two bottles of wine, I claim that these bottles were 350 thousand dollars each – and that’s a lot more than the most expensive bottle of wine ever sold at auction.

More importantly though, the evidence, in Zagier’s view, was now overwhelmingly in support of the Hypothesis. The computer as a calculating tool was finally powerful enough to navigate far enough north in Riemann’s zeta landscape for there to be every chance of throwing up a counter-example. Despite numerous attempts by Zagier’s auxiliary graph to storm across Riemann’s critical line, it was clear that something was acting like a huge repulsive force stopping the graph from crossing the line. And the reason? The Riemann Hypothesis.

‘That is what makes me a firm believer in the Riemann Hypothesis,’ Zagier now admits. He compares the role of the computer to the way a particle accelerator is used in support of theoretical physics. Physicists have a model of what makes up matter, but to test that model requires the generation of sufficient energy to blow apart the atom. For Zagier, 300 million zeros was finally enough energy to test whether the Riemann Hypothesis had a good chance of being true:

This, I believe, is 100 per cent convincing evidence that there is something preventing the graph from crossing, and the only thing I can imagine that is possibly happening is yes, indeed, the Riemann Hypothesis is true. And now I am absolutely as firm a believer in the Riemann Hypothesis as Bombieri, not a priori because it is too beautiful and is such an elegant thing or because of the existence of God, but because of this evidence.

One of te Riele’s team in Amsterdam, Jan van de Lune, has now retired. But mathematicians never entirely lose the mathematical bug, even though they might have to give up their offices. With the program the team had used decades before, he has now confirmed, on three PCs he has running at home, that the first 6.3 billion zeros are all obeying Riemann’s Hypothesis. However long his computers keep calculating, there is no chance that they will provide a proof in this manner. But if there is a zero off the line, the computer does have the potential to play a role in exposing the Riemann Hypothesis as pure fantasy.

This is where the computer is in its element – as a destroyer of conjectures. In the 1980s, calculations of zeros were used to bring down a close cousin of the Riemann Hypothesis – something called the Mertens Conjecture. But the calculations weren’t made from the comfort of a mathematics department. Instead, interest switched to calculations of zeros made by a rather unexpected source: the AT&T telephone company.

Odlyzko, the calculating maestro of New Jersey

In the heart of New Jersey, near the sleepy town of Florham Park, an unlikely powerhouse of mathematical talent thrives under the commercial aegis of AT&T’s research laboratories. Once inside their building, you could be mistaken for believing you were in the mathematics department of a university. But this is the home of a major telecommunications business. The origins of the laboratory go back to the 1920s, when AT&T Bell Labs was first created. Turing had spent some time in the Bell Laboratory in New York during the war. He had been involved in a project to design a voice encryption system to enable Washington and London to talk securely on the telephone. Turing claimed that his time at Bell Labs was more exciting than Princeton, though that might have had something to do with the Village nightlife in Manhattan. Erdos would often make visits to the New Jersey base on his mathematical wanderings.

With the explosion of technology that hit the telecommunications industry in the 1960s, it was clear that to stay ahead of the game AT&T would need more and more sophisticated mathematical expertise. After the rapid expansion of the universities in that decade, the seventies by contrast were lean years for mathematicians trying to find academic jobs. By expanding their research facilities, AT&T were able to attract some of this excess. Although they were ultimately hoping that the research would translate into technological innovation, they were happy for their mathematicians to continue pursuing their mathematical passions. It sounds altruistic, but it was in fact good business: because of the monopoly it enjoyed in the 1970s, certain restrictions were put on how the company spent its profits. Investing in its research laboratory was seen as a canny way of soaking up some of the profits.

Whatever the reasons behind AT&T’s move, mathematics has a lot to be thankful for. Some of the most interesting theoretical advances have their source in ideas coming out of their laboratory. It is a fascinating mix of academia and the hard-nosed world of business. On a visit to talk to mathematicians at the laboratory, I witnessed this mix at first hand. Faced with the job of maximising AT&T’s bids in an auction for mobile phone bandwidth, several mathematicians were presenting over a working lunch a theoretical model that would provide the company with the best strategy to negotiate the complex bidding process. For the mathematicians, it could have been a strategy for a game of chess rather than spending millions of the company’s dollars. But the two weren’t inconsistent.

The head of the research laboratory until 2001 was Andrew Odlyzko. Originally from Poland, Odlyzko still retains a strong but gentle eastern European accent. His time in the commercial sector has made him a very good communicator of difficult mathematical ideas. He has an engaging, inclusive attitude which encourages you to join him on his mathematical journey. Nevertheless he is very precise and always the consummate mathematician: each step must leave no room for ambiguity. Odlyzko’s interest in the zeta function had been sparked during his doctorate under the supervision of Harold Stark at MIT. One of the problems he had been looking at required him to know as accurately as possible the location of the first few zeros in the zeta landscape.

High-precision calculations are just the sort of thing that a computer does much better than a human. Shortly after Odlyzko joined AT&T Bell Laboratories, he got his break. In 1978 the lab purchased their first supercomputer – a Cray 1. It was the first Cray to be owned by a private company as opposed to the government or a university. Since AT&T was a commercial organisation, with accounting and budgets controlling most things, each department had to pay for time on the mainframe computers. However, it took a while for people to acquire the skills to program the Cray, and to start with it saw little use. So the computer section decided to offer free chunks of five hours on the Cray for worthy scientific projects that had no funding.

The opportunity to exploit the power of the Cray was too much for Odlyzko to resist. He contacted the teams in Amsterdam and Australia who had proved that the first 300 million zeros were on the line. Had any of them accurately located the positions of these zeros along Riemann’s ley line? None of them had. They had focused on proving that the east–west coordinate of each zero was 1/2, as Riemann predicted it should be. They hadn’t been too concerned with the exact north–south location.

Odlyzko applied for time on the Cray to determine the exact location of the first million zeros. AT&T agreed, and for decades Odlyzko has been using whatever computer time the company can spare to compute more and more zeros. These calculations aren’t just an unmotivated exercise in computing. His supervisor, Stark, had applied the knowledge gained about the location of the first few zeros to prove one of Gauss’s conjectures about how certain sets of imaginary numbers factorise. Odlyzko, on the other hand, employed his accurate location of the first 2,000 zeros to disprove a conjecture which had been on the mathematical circuit since the beginning of the twentieth century: the Mertens Conjecture. He was joined in the demolition of this conjecture by te Riele in Amsterdam, the mathematician who had helped Zagier to lose his bet by proving that the first 300 million zeros were on Riemann’s line. The Mertens Conjecture is very closely related to the Riemann Hypothesis, and its disproof showed mathematicians that if the Riemann Hypothesis were true, it was only just true.

The Mertens Conjecture is best understood as a variation on the game of tossing the prime number coin. The Mertens coin lands heads on the Nth toss if N is built from an even number of prime building blocks. For example, the result is ‘heads’ for N = 15, since 15 is the product of two primes, 3 and 5. If, on the other hand, N is built from an odd number of primes, for example N = 105 = 3 × 5 × 7, the result is ‘tails’. There is, however, a third possibility. If N uses one of the building blocks twice, then the result is counted as zero: 12, for example, is built from two 2’s and a 3 (12 = 2 × 2 × 3), so it scores 0. Think of zero results as corresponding to the coin landing out of sight or on its side. Mertens made a conjecture about the behaviour of this coin as N gets larger. It is very similar to Riemann’s Hypothesis, which says that the prime number coin isn’t biased. But the Mertens Conjecture was slightly stronger than what Riemann was predicting for the primes. It was predicting that the error was slightly smaller than what one might expect from a fair coin. If the conjecture were true, then so was the Riemann Hypothesis – but not vice versa.
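In the standard notation of number theory, this coin is the Möbius function μ(N), and the running total of heads minus tails is the Mertens function M(N); the conjecture says that this running total never strays further from zero than the square root of N. A minimal sketch in Python (the function name is mine, and simple trial division stands in for whatever tables Mertens actually used) repeats the sort of check that can be done by hand:

```python
from math import isqrt

def mertens_coin(n):
    """The Mertens coin (the Moebius function): +1 if n is built from an even
    number of distinct primes, -1 if from an odd number, 0 if any prime repeats."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:       # a repeated prime building block: score zero
                return 0
            result = -result     # one more prime factor flips heads and tails
        d += 1
    if n > 1:                    # whatever remains is one final prime factor
        result = -result
    return result


running_total = 0
for N in range(1, 10_001):
    running_total += mertens_coin(N)
    assert abs(running_total) <= isqrt(N)  # Mertens' prediction: within sqrt(N)
print("No violation up to N = 10,000; the running total ends at", running_total)
```

The Riemann Hypothesis itself is equivalent to the slightly weaker statement that M(N) eventually grows no faster than N^(1/2 + ε) for any positive ε, which is why the fates of the two statements are so closely entwined.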

In 1897 Mertens produced tables of calculations up to N = 10,000 to support his conjecture. By the 1970s experimental evidence had reached numbers up to a billion. But, as Littlewood had shown, experimental evidence in the billions is peanuts. There was now growing scepticism that the Mertens Conjecture would indeed be true. It took Odlyzko and te Riele’s accurate calculations of the locations of the first 2,000 zeros, computed to 100 decimal places, to finally disprove the Mertens Conjecture. As yet another warning, though, to those impressed by numerical experimental evidence, Odlyzko and te Riele estimated that even if Mertens had analysed the tosses of his coin up to 10^30, his conjecture would still appear to be true.

Odlyzko’s computers at AT&T have continued to help mathematicians in their quest to unearth the mysteries of the primes. But it isn’t one-way traffic. Prime numbers are now making their own contribution to the ever-expanding computer age. In the 1970s the primes suddenly became the key, literally, to securing the privacy of electronic communication. Hardy had always been very proud of how useless mathematics and especially number theory was in the real world:

The ‘real’ mathematics of the ‘real’ mathematicians, the mathematics of Fermat and Euler and Gauss and Abel and Riemann, is almost wholly ‘useless’ (and this is true of ‘applied’ as of ‘pure’ mathematics). It is not possible to justify the life of any genuine professional mathematician on the ground of the ‘utility’ of his work.

Hardy could not have got it more wrong. The mathematics of Fermat, Gauss and Riemann was to be put to use at the heart of the commercial world. This is why AT&T were to recruit even more mathematicians during the 1980s and 1990s. The security of the electronic village stands or falls on our understanding of prime numbers.