15
THE GOLDEN AGE OF PHYSICS

The period from 1919, when Ernest Rutherford first split the atom, to 1932, when his student James Chadwick discovered the neutron, was a golden decade for physics. Barely a year went by without some momentous breakthrough. At that stage, America was far from being the world leader in physics it has since become. All the seminal work of the golden decade was carried out in one of three places in Europe: the Cavendish Laboratory in Cambridge, England; Niels Bohr’s Institute of Theoretical Physics in Copenhagen; and the old university town of Göttingen in Lower Saxony, Germany.

For Mark Oliphant, one of Rutherford’s protégés in the 1920s, the main hallway of the Cavendish, where the director’s office was, consisted of ‘uncarpeted floor boards, dingy varnished pine doors and stained, plastered walls, indifferently lit by a skylight with dirty glass.’1 For C. P. Snow, however, who also trained there and described the lab in his first novel, The Search, the paint and the varnish and the dirty glass went unremarked. ‘I shall not easily forget those Wednesday meetings in the Cavendish. For me they were the essence of all the personal excitement in science; they were romantic, if you like, and not on the plane of the highest experience I was soon to know [of scientific discovery]; but week after week I went away through the raw nights, with east winds howling from the fens down the old streets, full of a glow that I had seen and heard and been close to the leaders of the greatest movement in the world.’ Rutherford, who followed J. J. Thomson as director of the Cavendish in 1919, evidently agreed. At a meeting of the British Association in 1923 he startled colleagues by suddenly shouting out, ‘We are living in the heroic age of physics!’2

In some ways, Rutherford himself – now a rather florid man, with a moustache and a pipe that was always going out – embodied in his own person that heroic age. During World War I, particle physics had been on hold, more or less. Officially, Rutherford was working for the Admiralty, researching submarine detection. But he carried on research when his duties allowed. And a few months after the Armistice, in April 1919, just as Arthur Eddington was preparing his trip to West Africa to test Einstein’s predictions, Rutherford sent off a paper that, had he done nothing else, would have earned him a place in history. Not that you would have known it from the paper’s title: ‘An Anomalous Effect in Nitrogen.’ As was usual in Rutherford’s experiments, the apparatus was simple to the point of being crude: a small glass tube inside a sealed brass box fitted at one end with a zinc-sulphide scintillation screen. The brass box was filled with nitrogen, and through the glass tube was passed a source of alpha particles – helium nuclei – given off by radon, the radioactive gas produced by radium. The excitement came when Rutherford inspected the activity on the zinc-sulphide screen: the scintillations were indistinguishable from those obtained from hydrogen. How could that be, since there was no hydrogen in the system? This led to the famously downbeat sentence in the fourth part of Rutherford’s paper: ‘From the results so far obtained it is difficult to avoid the conclusion that the long-range atoms arising from collision of [alpha] particles with nitrogen are not nitrogen atoms but probably atoms of hydrogen…. If this be the case, we must conclude that the nitrogen atom is disintegrated.’ The newspapers were not so cautious. Sir Ernest Rutherford, they shouted, had split the atom.3 He himself realised the importance of his work. His experiments had drawn him away, temporarily, from antisubmarine research.
He defended himself to the overseers’ committee: ‘If, as I have reason to believe, I have disintegrated the nucleus of the atom, this is of greater significance than the war.’4

In a sense, Rutherford had finally achieved what the old alchemists had been aiming for, transmuting one element into another, nitrogen into oxygen and hydrogen. The mechanism whereby this artificial transmutation (the first ever) was achieved was clear: an alpha particle, a helium nucleus, has an atomic weight of 4. When it bombarded a nitrogen atom, with an atomic weight of 14, it displaced a hydrogen nucleus (to which Rutherford soon gave the name proton) and was itself absorbed. The arithmetic therefore became: 4 + 14 - 1 = 17, the oxygen isotope O17.5
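In the modern notation of nuclear reactions (not Rutherford’s own), the bookkeeping balances both mass number and charge:

```latex
{}^{14}_{7}\mathrm{N} \;+\; {}^{4}_{2}\mathrm{He} \;\longrightarrow\; {}^{17}_{8}\mathrm{O} \;+\; {}^{1}_{1}\mathrm{H},
\qquad 14 + 4 = 17 + 1, \qquad 7 + 2 = 8 + 1 .
```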

The significance of the discovery, apart from the philosophical one of the transmutability of nature, lay in the new way it enabled the nucleus to be studied. Rutherford and Chadwick immediately began to probe other light atoms to see if they behaved in the same way. It turned out that they did – boron, fluorine, sodium, aluminum, phosphorus, all had nuclei that could be probed: they were not just solid matter but had a structure. All this work on light elements took five years, but then there was a problem. The heavier elements were, by definition, characterised by nuclei of much greater electrical charge, which constituted a much stronger electrical barrier and would need a stronger source of alpha particles if they were to be penetrated. For James Chadwick and his young colleagues at the Cavendish, the way ahead was clear – they needed to explore means of accelerating particles to higher velocities. Rutherford wasn’t convinced, preferring simple experimental tools. But elsewhere, especially in America, physicists realised that one way ahead lay with particle accelerators.

Between 1924 and 1932, when Chadwick finally isolated the neutron, there were no breakthroughs in nuclear physics. Quantum physics, on the other hand, was an entirely different matter. Niels Bohr’s Institute of Theoretical Physics opened in Copenhagen on 18 January 1921. The land had been given by the city, appropriately enough next to some soccer fields (Niels and his brother, Harald, were both excellent players).6 The large house, on four floors, shaped like an ‘L,’ contained a lecture hall, library, and laboratories (strange for an institute of theoretical physics), as well as a table-tennis table, where Bohr also shone. ‘His reactions were very fast and accurate,’ says Otto Frisch, ‘and he had tremendous will power and stamina. In a way those qualities characterised his scientific work as well.’7 Bohr became a Danish hero a year later when he won the Nobel Prize. Even the king wanted to meet him. But in fact the year was dominated by something even more noteworthy – Bohr’s final irrevocable linking of chemistry and physics. In 1922 Bohr showed how atomic structure was linked to the periodic table of elements drawn up by Dmitri Ivanovich Mendeléev, the nineteenth-century Russian chemist. In his first breakthrough, just before World War I, Bohr had explained how electrons orbit the nucleus only in certain formations, and how this helped explain the characteristic spectra of light emitted by different substances. This idea of natural orbits also married atomic structure to Max Planck’s notion of quanta. Bohr now went on to argue that successive orbital shells of electrons could contain only a precise number of electrons. He introduced the idea that elements that behave in a similar way chemically do so because they have a similar arrangement of electrons in their outer shells, which are the ones most used in chemical reactions.
For example, he compared barium and radium, which are both alkaline earths but have very different atomic weights and occupy, respectively, the fifty-sixth and eighty-eighth place in the periodic table. Bohr explained this by showing that barium, atomic weight 137.34, has electron shells filled successively by 2, 8, 18, 18, 8, and 2 (=56) electrons. Radium, atomic weight 226, on the other hand, has electron shells filled successively by 2, 8, 18, 32, 18, 8, and 2 (=88) electrons.8 The fact that the outer shell of each element contains two electrons explains not only their positions in the periodic table but also why barium and radium are chemically similar despite their considerable other differences. As Einstein said, ‘This is the highest form of musicality in the sphere of thought.’9
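Bohr’s shell arithmetic can be checked directly; a minimal sketch using the occupancies quoted above:

```python
# Electron-shell occupancies for barium and radium as quoted in the text.
barium = [2, 8, 18, 18, 8, 2]
radium = [2, 8, 18, 32, 18, 8, 2]

def atomic_number(shells):
    # For a neutral atom, total electrons = atomic number (place in the table).
    return sum(shells)

print(atomic_number(barium))     # 56
print(atomic_number(radium))     # 88
print(barium[-1] == radium[-1])  # True: both outer shells hold 2 electrons
```

The matching outer shells, not the totals, are what make the two elements chemically alike.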

During the 1920s the centre of gravity of physics – certainly of quantum physics – shifted to Copenhagen, largely because of Bohr. A big man in every sense, he was intent on expressing himself accurately, if painfully slowly, and forcing others to do so too. He was generous, avuncular, completely devoid of those instincts for rivalry that can so easily sour relations. But the success of Copenhagen also had to do with the fact that Denmark was a small country, neutral, where national rivalries of the Americans, British, French, Germans, Russians, and Italians could be forgotten. Among the sixty-three physicists of renown who studied at Copenhagen in the 1920s were Paul Dirac (British), Werner Heisenberg (German), and Lev Landau (Russian).10

There was also the Swiss-Austrian, Wolfgang Pauli. In 1924 Pauli was a pudgy twenty-three-year-old, prone to depression when scientific problems defeated him. One problem in particular had set him prowling the streets of the Danish capital. It was something that vexed Bohr too, and it arose from the fact that no one, just then, understood why all the electrons in orbit around the nucleus didn’t just crowd in on the inner shell. This is what should have happened, with the electrons emitting energy in the form of light. What was known by now, however, was that each shell of electrons was arranged so that the inner shell always contains just one orbit, whereas the next shell out contains four. Pauli’s contribution was to show that no orbit could contain more than two electrons. Once it had two, an orbit was ‘full,’ and other electrons were excluded, forced to the next orbit out.11 This meant that the inner shell (one orbit) could not contain more than two electrons, and that the next shell out (four orbits) could not contain more than eight. This became known as Pauli’s exclusion principle, and part of its beauty lay in the way it expanded Bohr’s explanation of chemical behaviour.12 Hydrogen, for example, with one electron in the first orbit, is chemically active. Helium, however, with two electrons in the first orbit (i.e., that orbit is ‘full’ or ‘complete’), is virtually inert. To underline the point further, lithium, the third element, has two electrons in the inner shell and one in the next, and is chemically very active. Neon, however, which has ten electrons, two in the inner shell (filling it) and eight in the four outer orbits of the second shell (again filling those orbits), is also inert.13 So together Bohr and Pauli had shown how the chemical properties of elements are determined not only by the number of electrons the atom possesses but also by the dispersal of those electrons through the orbital shells.
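In modern terms the counting generalises neatly: the nth shell contains n² orbits, and with Pauli’s two-electron limit per orbit it holds at most 2n² electrons. A small sketch of that rule (the 2n² formula is the later textbook generalisation, not Pauli’s own statement):

```python
def orbits(n):
    # The nth shell contains n**2 orbits: 1 for the first shell, 4 for the second.
    return n ** 2

def capacity(n):
    # Pauli's exclusion principle: no orbit holds more than two electrons.
    return 2 * orbits(n)

print([capacity(n) for n in (1, 2, 3, 4)])  # [2, 8, 18, 32]
```

These are exactly the occupancies – 2, 8, 18, 32 – that recur in Bohr’s barium and radium shells.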

The next year, 1925, was the high point of the golden age, and the centre of activity moved for a time to Göttingen. Before World War I, British and American students regularly went to Germany to complete their studies, and Göttingen was a frequent stopping-off place. Moreover, it had held on to its prestige and status better than most in the Weimar years. Bohr gave a lecture there in 1922 and was taken to task by a young student who corrected a point in his argument. Bohr, being Bohr, hadn’t minded. ‘At the end of the discussion he came over to me and asked me to join him that afternoon on a walk over the Hain Mountain,’ Werner Heisenberg wrote later. ‘My real scientific career only began that afternoon.’14 In fact it was more than a stroll, for Bohr invited the young Bavarian to Copenhagen. Heisenberg didn’t feel ready to go for two years, but Bohr was just as welcoming after the delay, and they immediately set about tackling yet another problem of quantum theory, what Bohr called ‘correspondence.’15 This stemmed from the observation that, at low frequencies, quantum physics and classical physics came together. But how could that be? According to quantum theory, energy – like light – was emitted in tiny packets; according to classical physics, it was emitted continuously. Heisenberg returned to Göttingen enthused but also confused. He hated confusion as much as Pauli did, and so when, toward the end of May 1925, he suffered one of his many attacks of hay fever, he took two weeks’ holiday in Heligoland, a narrow island off the German coast in the North Sea, where there was next to no pollen. An excellent pianist who could also recite huge tracts of Goethe, Heisenberg was very fit (he liked climbing), and he cleared his head with long walks and bracing dips in the sea.16 The idea that came to Heisenberg in that cold, fresh environment was the first example of what came to be called quantum weirdness.
Heisenberg took the view that we should stop trying to visualise what goes on inside an atom, as it is impossible to observe directly something so small.17 All we can do is measure its properties. And so, if something is measured as continuous at one point, and discrete at another, that is the way of reality. If the two measurements exist, it makes no sense to say that they disagree: they are just measurements.

This was Heisenberg’s central insight, but in a hectic three weeks he went further, developing a mathematical method, known as matrix mechanics and drawing on ideas of David Hilbert, in which the measurements obtained are grouped in two-dimensional tables of numbers, and two such matrices can be multiplied together to give another matrix.18 In Heisenberg’s scheme, each atom would be represented by one matrix, each ‘rule’ by another. If one multiplied the ‘sodium matrix’ by the ‘spectral line matrix,’ the result should give the matrix of wavelengths of sodium’s spectral lines. To Heisenberg’s, and Bohr’s, great satisfaction, it did; ‘For the first time, atomic structure had a genuine, though very surprising, mathematical base.’19 Heisenberg called his creation/discovery quantum mechanics.
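A minimal illustration of the multiplication rule involved, with toy 2×2 matrices rather than Heisenberg’s actual arrays; note that the order of multiplication matters, a feature that turned out to be central to the new mechanics:

```python
def matmul(a, b):
    # Multiply two 2x2 matrices, given as nested lists, row by column.
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

p = [[0, 1], [0, 0]]  # illustrative matrices only
q = [[0, 0], [1, 0]]

print(matmul(p, q))  # [[1, 0], [0, 0]]
print(matmul(q, p))  # [[0, 0], [0, 1]] -- p*q and q*p differ
```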

The acceptance of Heisenberg’s idea was made easier by a new theory by Louis de Broglie in Paris, published at much the same time. Both Planck and Einstein had argued that light, hitherto regarded as a wave, could sometimes behave as a particle. De Broglie reversed this idea, arguing that particles could sometimes behave like waves. No sooner had de Broglie broached this theory than experimentation proved him right.20 The wave-particle duality of matter was the second weird notion of physics, but it caught on quickly. One reason was the work of yet another genius, the Austrian Erwin Schrödinger, who was disturbed by Heisenberg’s idea and fascinated by de Broglie’s. Schrödinger, who at thirty-nine was quite ‘old’ for a physicist, added the notion that the electron, in its orbit around the nucleus, is not like a planet but like a wave.21 Moreover, this wave pattern determines the size of the orbit, because to form a complete circle the orbit must contain a whole number of wavelengths, not fractions (otherwise the wave would interfere destructively with itself). In turn this determined the distance of the orbit from the nucleus. Schrödinger’s work, set out in four long papers in Annalen der Physik in spring and summer 1926, was elegant and explained the position of Bohr’s orbits. The mathematics that underlay his theory also proved to be much the same as Heisenberg’s matrices, only simpler. Again knowledge was coming together.22
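De Broglie’s relation, and the whole-number condition Schrödinger exploited, can be written out in the standard textbook form (neither man’s original presentation):

```latex
\lambda = \frac{h}{mv}, \qquad n\lambda = 2\pi r \quad (n = 1, 2, 3, \ldots)
\quad\Longrightarrow\quad mvr = n\,\frac{h}{2\pi} .
```

Only orbits holding a whole number of wavelengths survive, which is why Bohr’s orbits sit where they do.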

The final layer of weirdness came in 1927, again from Heisenberg. It was late February, and Bohr had gone off to Norway to ski. Heisenberg paced the streets of Copenhagen on his own. Late one evening, in his room high up in Bohr’s institute, a remark of Einstein’s stirred something deep in Heisenberg’s brain: ‘It is the theory which decides what we can observe.’23 It was well after midnight, but he decided he needed some air, so he went out and trudged across the muddy soccer fields. As he walked, an idea began to germinate in his brain. Unlike the immensity of the heavens above, the world the quantum physicists dealt with was unimaginably small. Could it be, Heisenberg asked himself, that at the level of the atom there was a limit to what could be known? To identify the position of a particle, it must impact on a zinc-sulphide screen. But this alters its velocity, which means that it cannot be measured at the crucial moment. Conversely, when the velocity of a particle is measured by scattering gamma rays from it, say, it is knocked into a different path, and its exact position at the point of measurement is changed. Heisenberg’s uncertainty principle, as it came to be called, posited that the exact position and precise velocity of an electron could not be determined at the same time.24 This was disturbing both practically and philosophically, because it implied that in the subatomic world cause and effect could never be measured. The only way to understand electron behaviour was statistical, using the rules of probability. ‘Even in principle,’ Heisenberg said, ‘we cannot know the present in all detail. For that reason everything observed is a selection from a plenitude of possibilities and a limitation on what is possible in the future.’25
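In the form it later assumed in textbooks, the uncertainty principle states that the product of the uncertainties in position and momentum cannot fall below a fixed quantum limit:

```latex
\Delta x \,\Delta p \;\geq\; \frac{h}{4\pi} = \frac{\hbar}{2} .
```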

Einstein, no less, was never very happy with the basic notion of quantum theory, that the subatomic world could only be understood statistically. It remained a bone of contention between him and Bohr until the end of his life. In 1926 he wrote a famous letter to the physicist Max Born in Göttingen. ‘Quantum mechanics demands serious attention,’ he wrote. ‘But an inner voice tells me that this is not the true Jacob. The theory accomplishes a lot, but it does not bring us closer to the secrets of the Old One. In any case, I am convinced that He does not play dice.’26

For close on a decade, quantum mechanics had been making news. At the height of the golden age, German preeminence was shown by the fact that more papers on the subject were published in that language than in all others put together.27 During that time, experimental particle physics had been stalled. It is difficult at this distance to say why, for in 1920 Ernest Rutherford had made an extraordinary prediction. Delivering the Bakerian lecture before the Royal Society of London, Rutherford gave an insider’s account of his nitrogen experiment of the year before; but he also went on to speculate about future work.28 He broached the possibility of a third major constituent of atoms in addition to electrons and protons. He even described some of the properties of this constituent, which, he said, would have ‘zero nucleus charge.’ ‘Such an atom,’ he argued, ‘would have very novel properties. Its external [electrical] field would be practically zero, except very close to the nucleus, and in consequence it should be able to move freely through matter.’ Though difficult to discover, he said, it would be well worth finding: ‘it should readily enter the structure of atoms, and may either unite with the nucleus or be disintegrated by its intense field.’ If this constituent did indeed exist, he said, he proposed calling it the neutron.29

Just as James Chadwick had been present in 1911, in Manchester, when Rutherford had revealed the structure of the atom, so he was in the audience for the Bakerian lecture. After all, he was Rutherford’s right-hand man now. At the time, however, he did not really share his boss’s enthusiasm for the neutron. The symmetry of the electron and the proton, negative and positive, seemed perfect, complete. Other physicists may never have read the Bakerian lecture – it was a stuffy affair – and so never have had their minds stimulated. Throughout the late 1920s, however, anomalies built up. One of the more intriguing was the relationship between atomic weight and atomic number. The atomic number was derived from the nucleus’s electrical charge and a count of the protons. Thus helium’s atomic number was 2, but its atomic weight was 4. For silver the equivalent numbers were 47 and 107, for uranium 92 and 235 or 238.30 One popular theory was that there were additional protons in the nucleus, linked with electrons that neutralised them. But this only created another, theoretical anomaly: particles as small and as light as electrons could only be kept within the nucleus by enormous quantities of energy. That energy should show itself when the nucleus was bombarded and had its structure changed – and that never happened.31 Much of the early 1920s was taken up by repeating the nitrogen transmutation experiment with other light elements, so Chadwick scarcely had time on his hands. However, when the anomalies showed no sign of being satisfactorily resolved, he came round to Rutherford’s view. Something like a neutron must exist.
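With hindsight, and running slightly ahead of the story, the anomaly dissolves once the neutron is admitted: the atomic weight counts protons plus neutrons, the atomic number counts protons alone, so the nucleus holds their difference in neutrons and no trapped electrons are needed. A sketch using the figures quoted above:

```python
def neutron_count(Z, A):
    # Neutrons in the nucleus = atomic weight (A) minus atomic number (Z).
    return A - Z

print(neutron_count(2, 4))     # helium: 2
print(neutron_count(47, 107))  # silver: 60
print(neutron_count(92, 238))  # uranium-238: 146
```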

Chadwick was in physics by mistake.32 A shy man, with a gruff exterior that concealed his innate kindness, he had wanted to be a mathematician but turned to physics after he stood in the wrong queue at Manchester University and was impressed by the physicist who interviewed him. He had studied in Berlin under Hans Geiger but failed to leave early enough when war loomed and was interned in Germany for the duration. By the 1920s he was anxious to be on his way in his career.33 To begin with, the experimental search for the neutron went nowhere. Believing it to be a close union of proton and electron, Rutherford and Chadwick devised various ways of, as Richard Rhodes puts it, ‘torturing’ hydrogen. The next bit is complicated. First, between 1928 and 1930, a German physicist, Walter Bothe, studied the gamma radiation (an intense form of light) given off when light elements such as lithium and oxygen were bombarded by alpha particles. Curiously, he found intense radiation given off not only by boron, magnesium, and aluminum – as he had expected, because alpha particles disintegrated those elements (as Rutherford and Chadwick had shown) – but also by beryllium, which was not disintegrated by alpha particles.34 Bothe’s result was striking enough for Chadwick at Cambridge, and Irène Curie, daughter of Marie, and her husband Frédéric Joliot in Paris, to take up the German’s approach. Both labs soon found anomalies of their own. H. C. Webster, a student of Chadwick, discovered in spring 1931 that ‘the radiation [from beryllium] emitted in the same direction as the … alpha particles was harder [more penetrating] than the radiation emitted in a backward direction.’ This mattered because if the radiation was gamma rays – light – then it should spray equally in all directions, like the light that shines from a lightbulb. A particle, on the other hand, would behave differently. 
It might well be knocked forward in the direction of an incoming alpha.35 Chadwick thought, ‘Here’s the neutron.’36

In December 1931 Irène Joliot-Curie announced to the French Academy of Sciences that she had repeated Bothe’s experiments with beryllium radiation but had standardised the measurements. This enabled her to calculate that the energy of the radiation given off was three times the energy of the bombarding alphas. This order of magnitude clearly meant that the radiation wasn’t gamma; some other constituent must be involved. Unfortunately Irène Joliot-Curie had never read Rutherford’s Bakerian lecture, and she took it for granted that the beryllium radiation was caused by protons. Barely two weeks later, in mid-January 1932, the Joliot-Curies published another paper. This time they announced that paraffin wax, when bombarded by beryllium radiation, emitted high-velocity protons.37

When Chadwick read this account in the Comptes rendus, the French physics journal, in his morning mail in early February, he realised there was something very wrong with this description and interpretation. Any physicist worth his salt knew that a proton was 1,836 times heavier than an electron: it was all but impossible for gamma radiation to dislodge so massive a particle. While Chadwick was reading the report, a colleague named Feather, who had read the same article and was eager to draw his attention to it, entered his room. Later that morning, at their daily progress meeting, Chadwick discussed the paper with Rutherford. ‘As I told him about the Curie-Joliot observation and their views on it, I saw his growing amazement; and finally he burst out “I don’t believe it.” Such an impatient remark was utterly out of character, and in all my long association with him I recall no similar occasion. I mention it to emphasise the electrifying effect of the Curie-Joliot report. Of course, Rutherford agreed that one must believe the observations; the explanation was quite another matter.’38 Chadwick lost no time in repeating the experiment. The first thing to excite him was that he found the beryllium radiation would pass unimpeded through a block of lead three-quarters of an inch thick. Next, he found that bombardment by the beryllium radiation knocked protons out of some elements with ranges of up to 40 centimetres, fully 16 inches. Whatever the radiation was, it was hugely energetic – and in terms of electrical charge, it was neutral. Finally, Chadwick took away the paraffin sheet that the Joliot-Curies had used, to see what happened when elements were bombarded directly by beryllium radiation. Using an oscilloscope to measure the radiation, he found first that beryllium radiation displaced protons whatever the element, and crucially, that the energies of the displaced protons were just too huge to have been produced by gamma rays.
Chadwick had learned a thing or two from Rutherford by now, including a habit of understatement. In the paper, entitled ‘Possible Existence of a Neutron,’ which he rushed to Nature, he wrote, ‘It is evident that we must either relinquish the application of the conservation of energy and momentum in these collisions or adopt another hypothesis about the nature of radiation.’ Adding that his experiment appeared to be the first evidence of a particle with ‘no net charge,’ he concluded, ‘We may suppose it to be the “neutron” discussed by Rutherford in his Bakerian lecture.’39 The process observed was 4He + 9Be → 12C + n, where n stands for a neutron of mass number 1.40
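Written with mass numbers and charges, Chadwick’s reaction balances on both counts:

```latex
{}^{9}_{4}\mathrm{Be} \;+\; {}^{4}_{2}\mathrm{He} \;\longrightarrow\; {}^{12}_{6}\mathrm{C} \;+\; {}^{1}_{0}\mathrm{n},
\qquad 9 + 4 = 12 + 1, \qquad 4 + 2 = 6 + 0 .
```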

The Joliot-Curies were much embarrassed by their failure to spot what was, for Rutherford and Chadwick, the obvious (though the French would make their own discoveries later). Chadwick, who had worked day and night for ten days to make sure he was first, actually announced his results initially to a meeting of the Kapitza Club at Cambridge, which had been inaugurated by Peter Kapitza, a young Russian physicist at the Cavendish. Appalled by the formal, hierarchical structure of Cambridge, Kapitza had started the club as a discussion forum where rank didn’t matter. The club met on Wednesdays, and on the night when Chadwick, exhausted, announced that he had discovered the third basic constituent of matter, he delivered his address – very short – and then remarked tartly, ‘Now I want to be chloroformed and put to bed for a fortnight.’41 Chadwick was awarded the Nobel Prize for his discovery, the result of dogged detective work. The neutral electrical charge of the new particle would allow the nucleus to be probed in a far more intimate way. Other physicists were, in fact, already looking beyond his discovery – and in some cases they didn’t like what they saw.

Physics was becoming the queen of sciences, a fundamental way to approach nature, with both practical and deeply philosophical implications. The transmutability of nature apart, its most philosophical aspect was its overlap with astronomy.

At this point we need to return – briefly – to Einstein. At the time he produced his theory of relativity, most scientists took it for granted that the universe was static. The nineteenth century had produced much new information about the stars, including ways to measure their temperatures and distances, but astronomers had not yet observed that heavenly bodies are clustered into galaxies, or that these galaxies are moving away from one another.42 But relativity had a surprise for astronomers: Einstein’s equations predicted that the universe must either be expanding or contracting. This was a wholly unexpected consequence, and so weird did it appear, even to Einstein himself, that he tinkered with his calculations, introducing a ‘cosmological constant’ to make his theoretical universe stand still. This correction he later called the biggest blunder of his career.43

Curiously, however, a number of scientists, while they accepted Einstein’s theory of relativity and the calculations on which it was based, never accepted the cosmological constant (the term was Einstein’s own) or the correction it embodied. Alexander Friedmann, a young Russian scientist, was the first man to cause Einstein to think again. Friedmann’s background was brutish. His mother had deserted his father – a cruel, arrogant man – taking the boy with her. Convicted of ‘breaking conjugal fidelity,’ she was sentenced by the imperial court to celibacy and forced to give up Alexander. He didn’t see his mother again for nearly twenty years. While teaching himself relativity, Friedmann realised Einstein had made a mistake and that, cosmological constant or no, the universe must be either expanding or contracting.44 He found this such an exciting idea that he dared to improve on Einstein’s work, developing a mathematical model to underline his conviction, and sent it to the German. By the early 1920s, however, Arthur Eddington had confirmed some of Einstein’s predictions, and the great man had become famous and was snowed under with letters: Friedmann’s ideas were lost in the avalanche.45 Undaunted, Friedmann tried to see Einstein in person, but that move also failed. It was only when Friedmann was given an introduction by a mutual colleague that Einstein finally got to grips with the Russian’s ideas. As a result, Einstein began to have second thoughts about his cosmological constant – and its implications. But it wasn’t Einstein who pushed Friedmann’s ideas forward. A Belgian cosmologist, Georges Lemaître, and a number of others built on his ideas so that as the 1920s advanced, a fully realised geometric description of a homogeneous and expanding universe was fleshed out.46

A theory was one thing. But planets and stars and galaxies are not exactly small entities; they occupy vast spaces. Surely, if the universe really was expanding, it could be observed? One way to do this was by observation of what were then called ‘spiral nebulae.’ Nowadays we know that these spiral nebulae are distant galaxies, but then, with the telescopes of the time, they were simply indistinct smudges in the sky, beyond the solar system. No one knew whether they were gas or solid matter; and no one knew what size they were, or how far away. It was then discovered that the light emanating from spiral nebulae is shifted toward the red end of the spectrum. One way of illustrating the significance of this redshift is by analogy to the Doppler effect, named after Christian Doppler, the Austrian physicist who first explained the observation in 1842. When a train or a motorbike comes toward us, its noise changes, and then, as it goes past and away, the noise changes a second time. The explanation is simple: as the train or bike approaches, the sound waves reach the observer closer and closer together – the intervals get shorter. As the train or bike recedes, the opposite occurs: the source of the noise is receding at all times, and so the interval between the sound waves gets longer and longer. Much the same happens with light: light from an approaching source is shifted toward the blue end of the spectrum, while light from a receding source is shifted toward the red end.
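The light version of the Doppler argument reduces to two small formulas: the redshift z compares observed and emitted wavelengths, and for speeds well below that of light the recession velocity is roughly c times z. A sketch with hypothetical wavelengths:

```python
C = 299_792.458  # speed of light in km/s

def redshift(observed, emitted):
    # Fractional wavelength shift; positive means shifted toward the red.
    return (observed - emitted) / emitted

def recession_velocity(z):
    # Classical Doppler approximation, valid only for z much less than 1.
    return C * z

# A line emitted at 500.0 nm but observed at 500.5 nm (hypothetical figures):
z = redshift(500.5, 500.0)
print(round(z, 4))                   # 0.001
print(round(recession_velocity(z)))  # 300  (km/s, receding)
```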

The first crucial tests were made in 1922, by Vesto Slipher at the Lowell Observatory in Flagstaff, Arizona.47 The Lowell had originally been built in 1893 to investigate the ‘canals’ on Mars. In this case, Slipher anticipated finding redshifts on one side of the nebular spirals (the part swirling away from the observer) and blueshifts on the other side (because the spiral was swirling toward earth). Instead, he found that all but four of the forty nebulae he examined produced only redshifts. Why was that? Almost certainly, the confusion arose because Slipher could not really be certain of exactly how far away the nebulae were. This made his correlation of redshift and distance problematic. But the results were nonetheless highly suggestive.48

Three years elapsed before the situation was finally clarified. Then, in 1929, Edwin Hubble, using the largest telescope of the day, the 100-inch reflector at Mount Wilson, near Los Angeles, managed to identify individual stars in the spiral arms of a number of nebulae, thereby confirming the suspicions of many astronomers that ‘nebulae’ were in fact entire galaxies. Hubble also located a number of ‘Cepheid variable’ stars. Cepheid variables – stars that vary in brightness in a regular way, with periods ranging from one to fifty days – had been known since the late eighteenth century, but it was only in 1908 that Henrietta Leavitt, at Harvard, showed that there is a mathematical relationship between the period of a Cepheid’s variation and its intrinsic brightness, which makes it possible to calculate its distance from earth.49 Using the Cepheid variables that he could now see, Hubble was able to calculate how far away a score of nebulae were.50 His next step was to correlate those distances with their corresponding redshifts. Altogether, Hubble collected information on twenty-four different galaxies, and the results of his observations and calculations were simple and sensational: he discovered a straightforward linear relationship. The farther away a galaxy was, the more its light was redshifted.51 This became known as Hubble’s law, and although his original observations were made on twenty-four galaxies, the law has since been shown to apply to thousands more.52
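A ‘straightforward linear relationship’ means recession velocity proportional to distance, v = H₀ × d. The sketch below fits that slope through the origin on invented data; these numbers are not Hubble’s 1929 measurements, which were far less tidy.

```python
# Toy Hubble diagram: invented distance/velocity pairs lying exactly on a line.
distances_mpc = [1.0, 2.0, 3.5, 5.0, 8.0]                  # megaparsecs (made up)
velocities_km_s = [500.0, 1000.0, 1750.0, 2500.0, 4000.0]  # km/s (made up)

# Least-squares slope for a line through the origin: H0 = sum(d*v) / sum(d*d).
h0 = (sum(d * v for d, v in zip(distances_mpc, velocities_km_s))
      / sum(d * d for d in distances_mpc))
print(h0)  # fitted slope in km/s per megaparsec
```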

Once more then, one of Einstein’s predictions had proved correct. His calculations, and Friedmann’s, and Lemaître’s, had been borne out by experiment: the universe was indeed expanding. For many people this took some getting used to. It involved implications about the origins of the universe, its character, the very meaning of time. The immediate impact of the idea of an expanding universe made Hubble, for a time, almost as famous as Einstein. Honours flowed in, including an honorary doctorate from Oxford, Time put him on its cover, and the observatory became a stopping-off place for famous visitors to Los Angeles: Aldous Huxley, Andrew Carnegie, and Anita Loos were among those given privileged tours. The Hubbles were taken up by Hollywood: the letters of Grace Hubble, Edwin’s wife, written in the early thirties, talk of dinners with Helen Hayes, Ethel Barrymore, Douglas Fairbanks, Walter Lippmann, Igor Stravinsky, Frieda von Richthofen (D. H. Lawrence’s widow), Harpo Marx and Charlie Chaplin.53 Jealous colleagues pointed out that, far from being a Galileo or Copernicus of his day, Hubble was not all that astute an observer, and that since his findings had been anticipated by others, his contribution was limited. But Hubble did arduous spadework and produced enough accurate data so that sceptical colleagues could no longer scoff at the theory of an expanding universe. It was one of the most astonishing ideas of the century, and it was Hubble who put it beyond doubt.

At the same time that physics was helping explain massive phenomena like the universe, it was still making advances in other areas of the minuscule world, in particular the world of molecules, helping us to a better understanding of chemistry. The nineteenth century had seen the first golden age of chemistry, industrial chemistry in particular. Chemistry had largely been responsible for the rise of Germany, whose nineteenth-century strength Hitler was so concerned to recover. For example, in the years before World War I, Germany’s production of sulphuric acid had gone from half that of Britain to 50 percent more; its production of chlorine by the modern electrolytic method was three times that of Britain; and its share of the world’s dyestuffs market was an incredible 90 percent.

The greatest breakthrough in theoretical chemistry in the twentieth century was achieved by one man, Linus Pauling, whose idea about the nature of the chemical bond was as fundamental as the gene and the quantum because it showed how physics governed molecular structure and how that structure was related to the properties, and even the appearance, of the chemical elements. Pauling explained the logic of why some substances were yellow liquids, others white powders, still others red solids. The physicist Max Perutz’s verdict was that Pauling’s work transformed chemistry into ‘something to be understood and not just memorised.’54

Born the son of a pharmacist, near Portland, Oregon, in 1901, Pauling was blessed with a healthy dose of self-confidence, which clearly helped his career. As a young graduate he spurned an offer from Harvard, preferring instead an institution that had started life as Throop Polytechnic but in 1922 was renamed the California Institute of Technology, or Caltech.55 Partly because of Pauling, Caltech developed into a major centre of science, but when he arrived there were only three buildings, surrounded by thirty acres of weedy fields, scrub oak, and an old orange grove. Pauling initially wanted to work in a new technique that could show the relationship between the distinctively shaped crystals into which chemicals formed and the actual architecture of the molecules that made up the crystals. It had been found that if a beam of X rays was directed at a crystal, the beam would be diffracted in a particular way. Suddenly, a way of examining chemical structure was possible. X-ray crystallography, as it was called, was barely out of its infancy when Pauling got his Ph.D., but even so he quickly realised that neither his math nor his physics was anywhere near good enough to make the most of the new techniques. He decided to go to Europe in order to meet the great scientists of the day: Niels Bohr, Erwin Schrödinger, Werner Heisenberg, among others. As he wrote later, ‘I had something of a shock when I went to Europe in 1926 and discovered that there were a good number of people around that I thought to be smarter than me.’56

So far as his own interest was concerned, the nature of the chemical bond, his visit to Zurich was the most profitable. There he came across two less famous Germans, Walter Heitler and Fritz London, who had developed an idea about how electrons and wave functions applied to chemical reactions.57 At its simplest, imagine the following: Two hydrogen atoms are approaching one another. Each consists of one nucleus (a proton) and one electron. As the two atoms get closer and closer to each other, ‘the electron of one would find itself drawn to the nucleus of the other, and vice versa. At a certain point, the electron of one would jump to the new atom, and the same would happen with the electron of the other atom.’ They called this an ‘electron exchange,’ adding that this exchange would take place billions of times a second.58 In a sense, the electrons would be ‘homeless,’ the exchange forming the ‘cement’ that held the two atoms together, ‘setting up a chemical bond with a definite length.’ Their theory put together the work of Pauli, Schrödinger, and Heisenberg; they also found that the ‘exchange’ determined the architecture of the molecule.59 It was a very neat piece of work, but from Pauling’s point of view there was one drawback about this idea: it wasn’t his. If he were to make his name, he needed to push the idea forward. By the time Pauling returned to America from Europe, Caltech had made considerable progress. Negotiations were under way to build the world’s biggest telescope at Mount Wilson, where Hubble would work. A jet propulsion lab was planned, and T. H. Morgan was about to arrive, to initiate a biology lab.60 Pauling was determined to outshine them all. Throughout the early 1930s he released report after report, all part of the same project, and all having to do with the chemical bond. He succeeded magnificently in building on Heitler and London’s work.
His early experiments on carbon, the basic constituent of life, and then on silicates showed that the elements could be systematically grouped according to their electronic relationships. These became known as Pauling’s rules. He showed that some bonds were weaker than others and that this helped explain chemical properties. Mica, for example, is a silicate that, as all chemists know, splits into thin, transparent sheets. Pauling was able to show that mica’s crystals have strong bonds in two directions and a weak one in a third direction, exactly corresponding to observation. In a second instance, another silicate we all know as talc is characterised by weak bonds all around, so that it crumbles instead of splitting, and forms a powder.61

Pauling’s work was almost as satisfying for others as it was for him.62 Here at last was an atomic, electronic explanation of the observable properties of well-known substances. The century had begun with the discovery of fundamentals that applied to physics and biology. Now the same was happening in chemistry. Once more, knowledge was beginning to come together. During 1930–5, Pauling published a new paper on the bond every five weeks on average.63 He was elected to the National Academy of Sciences in America at thirty-two, the youngest scientist ever to receive that honour.64 For a time, he was so far out on his own that few other people could keep up. Einstein attended one lecture of his and admitted afterward that it was beyond him. Uniquely, Pauling’s papers sent to the Journal of the American Chemical Society were published unrefereed because the editor could think of no one qualified to venture an opinion.65 Even though Pauling was conscious of this, throughout the 1930s he was too busy producing original papers to write a book consolidating his research. Finally, in 1939 he published The Nature of the Chemical Bond. This revolutionised our understanding of chemistry and immediately became a standard text, translated into several languages.66 It proved crucial to the discoveries of the molecular biologists after World War II.

The fresh data that the new physics was producing had very practical ramifications that arguably have changed our lives far more directly than was at first envisaged by scientists mainly interested in fundamental aspects of nature. Radio, in use for some time, moved into the home in the 1920s; television was first shown in August 1928. Another invention that drew on physics revolutionised life in a completely different way: this was the jet engine, developed with great difficulty by the Englishman Frank Whittle.

Whittle was the working-class son of a mechanic who lived on a Coventry housing estate. As a boy he educated himself in Leamington Public Library, where he spent all his spare time devouring popular science books about aircraft – and turbines.67 All his life Frank Whittle was obsessed with flight, but his background made a university education unlikely in those days, and so at the age of fifteen he applied to join the Royal Air Force as a technical apprentice. He failed. He passed the written examination but was blocked by the medical officer: Frank Whittle was only five feet tall. Rather than give up, he obtained a diet sheet and a list of exercises from a friendly PE teacher, and within a few months he had added three inches to his height and another three to his chest measurement. In some ways this was as impressive as anything else he did later in life. He was finally accepted as an apprentice in the RAF and although he found the barrack-room life irksome, in his second year as a cadet at Cranwell, the RAF college – at the age of nineteen – he wrote a thesis on future developments in aircraft design. It was in this paper that Whittle began to sketch his ideas for the jet engine. Now in the Science Museum in London, the paper is written in junior handwriting, but it is clear and forthright.68 His crucial calculation was that ‘a 100mph wind against a machine travelling at 600mph at 120,000 feet would have less effect than a 20mph head wind against the same machine at 1,000 feet.’ He concluded, ‘Thus everything indicates that designers should aim at altitude.’ He knew that propellers and petrol engines were inefficient at great heights, but he also knew that rocket propulsion was suitable only for space travel. This is where his old interest in turbines resurfaced; he was able to show that the efficiency of turbines increased at higher altitudes.
An indication of Whittle’s vision is apparent from the fact that he was contemplating an aircraft travelling at a speed of 500mph at 60,000 feet, while in 1926 the top speed of RAF fighters was 150 mph, and they couldn’t fly much above 10,000 feet.

After Cranwell, Whittle transferred to Hornchurch in Essex to a fighter squadron, and then in 1929 moved on to the Central Flying School at Wittering in Sussex as a pupil instructor. All this time he had been doggedly puzzling over how to create a new kind of engine, most of the time working on an amalgam of a petrol engine and a fan of the kind used in turbines. While at Wittering, he suddenly saw that the solution was alarmingly simple. In fact, his idea was so simple his superiors didn’t believe it. Whittle had grasped that a turbine would drive the compressor, ‘making the principle of the jet engine essentially circular.’69 Air sucked in by the compressor would be mixed with fuel and ignited. Ignition would expand the gas, which would flow through the blades of the turbine at such a high speed that not only would a jet stream be created, which would drive the aircraft forward, but the turning of the blades would also draw fresh air into the compressor, to begin the process all over again. If the compressor and the turbine were mounted on the same shaft, there was in effect only one moving part in a jet engine. It was not only far more powerful than a piston engine, which had many moving parts, but incomparably safer. Whittle was only twenty-two, and just as his height had done before, his age now acted against him. His idea was dismissed by the ministry in London. The rebuff hit him hard, and although he took out patents on his inventions, from 1929 to the mid-1930s, nothing happened. When the patents came up for renewal, he was still so poor he let them lapse.70
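The principle behind the cycle Whittle described can be reduced to a back-of-envelope momentum model: the engine’s net thrust is the mass of air it processes each second times the velocity it adds to that air. The figures below are invented for illustration, not Whittle’s own calculations.

```python
# Simplest momentum model of a jet engine: thrust = mass flow * velocity added.
# Fuel mass flow and nozzle pressure terms are neglected in this sketch.
def net_thrust(mass_flow_kg_s, exhaust_speed_m_s, flight_speed_m_s):
    return mass_flow_kg_s * (exhaust_speed_m_s - flight_speed_m_s)

# Illustrative figures: 20 kg of air per second, accelerated from a flight
# speed of 150 m/s to an exhaust speed of 550 m/s.
print(net_thrust(20.0, 550.0, 150.0), "N")  # prints: 8000.0 N
```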

In the early 1930s, Hans von Ohain, a student of physics and aerodynamics at Göttingen University, had had much the same idea as Whittle. Von Ohain could not have been more different from the Englishman. He was aristocratic, well off, and over six feet tall. He also had a different attitude to the uses of his jet.71 Spurning the government, he took his idea to the private planemaker Ernst Heinkel. Heinkel, who realised that high-speed air transport was much needed, took von Ohain seriously from the start. A meeting was called at his country residence, at Warnemünde on the Baltic coast, where the twenty-five-year-old Ohain was faced by some of Heinkel’s leading aeronautical brains. Despite his youth, Ohain was offered a contract, which featured a royalty on all engines that might be sold. This contract, which had nothing to do with the air ministry, or the Luftwaffe, was signed in April 1936, seven years after Whittle wrote his paper.

Meanwhile in Britain, Whittle’s brilliance was by now so self-evident that two friends, convinced of his promise, met for dinner and decided to raise backing for a jet engine as a purely business venture. Whittle was still only twenty-eight, and many more experienced aeronautical engineers thought his engine would never fly. Nonetheless, with the aid of O. T. Falk and Partners, city bankers, a company called Power Jets was formed, and £20,000 raised.72 Whittle was given shares in the company (no royalties), and the Air Ministry agreed to a 25 percent stake.

Power Jets was incorporated in March 1936. On the third of that month Britain’s defence budget was increased from £122 million to £158 million, partly to pay for 250 more aircraft for the Fleet Air Arm for home defence. Four days later, German troops occupied the demilitarised zone of the Rhineland, thus violating the Treaty of Versailles. War suddenly became much more likely, a war in which air superiority might well prove crucial. All doubts about the theory of the jet engine were now put aside. From then on, it was simply a question of who could produce the first operational jet.

The intellectual overlap between physics and mathematics has always been considerable. As we have seen in the case of Heisenberg’s matrices and Schrödinger’s calculations, the advances made in physics in the golden age often involved the development of new forms of mathematics. By the end of the 1920s, the twenty-three outstanding math problems identified by David Hilbert at the Paris conference in 1900 (see chapter 1) had for the most part been settled, and mathematicians looked out on the world with optimism. Their confidence was more than just a technical matter; mathematics involved logic and therefore had philosophical implications. If math was complete, and internally consistent, as it appeared to be, that said something fundamental about the world.

But then, in September 1930, philosophers and mathematicians convened in Königsberg for a conference on the ‘Theory of Knowledge in the Exact Sciences,’ attended by, among others, Ludwig Wittgenstein, Rudolf Carnap, and Moritz Schlick. All were overshadowed, however, by a paper from a young mathematician from Brünn, whose revolutionary arguments were later published in a German scientific journal, in an article entitled ‘On the Formally Undecidable Propositions of Principia Mathematica and Related Systems.’73 The author was Kurt Gödel, a twenty-five-year-old mathematician at the University of Vienna, and this paper is now regarded as a milestone in the history of logic and mathematics. Gödel was an intermittent member of Schlick’s Vienna Circle, which had stimulated his interest in the philosophical aspects of science. In his 1931 paper he demolished Hilbert’s aim of putting all mathematics on irrefutably sound foundations, with his theorem that tells us, no less firmly than Heisenberg’s uncertainty principle, that there are some things we cannot know. No less importantly, he demolished Bertrand Russell’s and Alfred North Whitehead’s aim of deriving all mathematics from a single system of logic.74

There is no hiding the fact that Gödel’s theorem is difficult. There are two elements that may be stated: one, that ‘within any consistent formal system, there will be a sentence that can neither be proved true nor proved false’; and two, ‘that the consistency of a formal system of arithmetic cannot be proved within that system’.75 The simplest way to explain his idea makes use of the so-called Richard paradox, first put forward by the French mathematician Jules Richard in 1905.76 In this system integers are assigned to a variety of mathematical definitions. For example, the definition ‘not divisible by any number except one and itself’ (i.e., a prime number), might be given one integer, say 17. Another definition might be ‘being equal to the product of an integer multiplied by that integer’ (i.e., a perfect square), and given the integer 20. Now assume that these definitions are laid out in a list with the two above inserted as 17th and 20th. Notice two things about these definitions: 17, attached to the first statement, is itself a prime number, but 20, attached to the second statement, is not a perfect square. In Richardian mathematics, the above statement about prime numbers is not Richardian, whereas the statement about perfect squares is. Formally, the property of being Richardian involves ‘not having the property designated by the defining expression with which an integer is correlated in the serially ordered set of definitions.’ But of course this last statement is itself a mathematical definition and therefore belongs to the series and has its own integer, n. The question may now be put: Is n itself Richardian? Immediately the crucial contradiction appears. ‘For n is Richardian if, and only if, it does not possess the property designated by the definition with which n is correlated; and it is easy to see that therefore n is Richardian if, and only if, n is not Richardian.’77
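The self-reference at the heart of the paradox can be made concrete in code. In this toy model (the list positions are arbitrary, not the 17 and 20 of the example above), each definition is a predicate, and an integer is ‘Richardian’ when it lacks the property named by the definition at its own position. Appending the Richardian test itself to the list produces the contradiction, which Python diagnoses as unbounded recursion rather than giving an answer.

```python
# Toy model of the Richard paradox: a serially ordered list of definitions.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_perfect_square(n):
    return round(n ** 0.5) ** 2 == n

definitions = [is_prime, is_perfect_square]  # positions 0 and 1

def is_richardian(k):
    # k is Richardian iff it LACKS the property its own definition names.
    return not definitions[k](k)

print(is_richardian(0))  # 0 is not prime, so 0 is Richardian: True
print(is_richardian(1))  # 1 is a perfect square, so 1 is not Richardian: False

# The paradox: the Richardian test is itself a definition, so it belongs in
# the list and gets its own position, n. Asking whether n is Richardian now
# means asking: is n Richardian if, and only if, n is not Richardian?
definitions.append(is_richardian)  # position n = 2
try:
    is_richardian(2)
except RecursionError:
    print("n is Richardian if, and only if, n is not Richardian")
```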

No analogy like this can do full justice to Gödel’s theorem, but it at least conveys the paradox adequately. It is for some a depressing conclusion (and Gödel himself battled bouts of chronic depression. After living an ascetic personal life, he died in 1978, aged seventy-one, of ‘malnutrition and inanition’ brought about by personality disturbance).78 Gödel had established that there were limits to math and to logic. The aim of Gottlob Frege, David Hilbert, and Russell to create a unitary deductive system in which all mathematical (and therefore all logical) truth could be deduced from a small number of axioms could not be realised. It was, in its way and as was hinted at above, a form of mathematical uncertainty principle – and it changed math for all time. Furthermore, as Roger Penrose has pointed out, Gödel’s ‘open-ended mathematical intuition is fundamentally incompatible with the existing structure of physics.’79

In some ways Gödel’s discovery was the most fundamental and mysterious of all. He certainly had what most people would call a mystical side, and he thought we should trust [mathematical] intuition as much as other forms of experience.80 Added to the uncertainty principle, his theory described limits to knowledge. Put alongside all the other advances and new avenues of thought, which were then exploding in all directions, it injected a layer of doubt and pessimism. Why should there be limits to our knowledge? And what did it mean to know that such limits existed?