“It has long been an axiom of mine that the little things are infinitely the most important.”– Sherlock Holmes [1].
8.1 The Storm
The Blue Morpho, or Morpho peleides, is an exquisite tropical butterfly inhabiting the rainforests of South America. Its wings gleam with a sapphire iridescence, a result of the diffraction of the forest light by the millions of tiny scales covering their surfaces. Nature has deemed that a creature so beautiful should not be doubly gifted, and to compensate for its splendour has cut its lifetime short, from egg to adult spanning a mere 115 days.
Beauty, however, has little impact on predators, such as the jacamar or the flycatcher, two of the insectivorous birds who make the forest their home. To them, the blue colour is a beacon advertising food, not unlike the neon-lit logo of a fast food chain.
And so it was that on a still afternoon in the year of 1880, in what is now known as the Common Era, a lone Blue Morpho, in the prime of its short existence, was forced into a fight to the death with a hungry marauder, many times its body weight. A brave little butterfly, it fought valiantly, flapping its blue wings in a frenzy of frustration and indignation. “Why now? What could possibly be the purpose of such a short life?” were its dying thoughts.
However, if it could have foreseen the consequences of that afternoon, the Morpho might have railed less at the injustice of fate. Some natural systems are so delicately poised that a minute perturbation to one small component of them can start a cycle of ever-increasing changes, until the consequences become devastating [2]. In this instance, the death throes of the beautiful insect triggered tiny air currents that, ever so marginally, perturbed the finely-balanced prevailing meteorological conditions. Winds changed their directions slightly, isobaric weather patterns around the earth were influenced, and inexorably the conditions began to form for the development of a prodigious tempest in an altogether different part of the world.
The effects extended as far as Foxhill, a hamlet south of Nelson in the north of New Zealand’s South Island, where a clap of thunder shattered the late evening tranquillity. Martha Rutherford raced to the linen cupboard to haul out an armful of bedsheets. She signalled to her daughters to help her, and before long all mirrors in the house had been safely shrouded against lightning, as was the custom of the time in a thunderstorm. The boys closed the curtains on the windows, and securely stowed silverware and other metallic objects in drawers and cupboards. There was little prospect of further sleep, as the violent electrical discharges gathered in fury outside.
Martha hated storms, but her husband, James, pooh-poohed her, and led his large family, or those who were not too afraid, out onto the verandah to watch the fiery display, and listen to the sounds of the tempest. James was a Scottish wheelwright who had immigrated with his family to New Zealand in 1842. He was a man with an interest in nature, and he demonstrated to Ernest, his nine-year-old son, how to estimate the distance to the lightning strikes by counting the seconds between each flash and the following thunderclap. He explained the difference between sheet lightning, fork lightning and ball lightning.
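James's counting trick works because the lightning's flash arrives essentially instantaneously, while its thunder travels only at the speed of sound. A minimal sketch of the estimate (the speed of sound in air, roughly 343 m/s, is an assumed standard value, not from the text):

```python
# Estimate the distance to a lightning strike from the flash-to-thunder delay.
# Light arrives almost instantly; sound travels at ~343 m/s (dry air at 20 C,
# an assumed standard value).
SPEED_OF_SOUND = 343.0  # m/s

def strike_distance_km(delay_seconds: float) -> float:
    """Distance to the strike in kilometres, given the counted delay."""
    return SPEED_OF_SOUND * delay_seconds / 1000.0

# A three-second delay corresponds to roughly one kilometre.
print(strike_distance_km(3.0))  # ~ 1.03 km
```

This is the familiar rule of thumb: about three seconds of delay per kilometre of distance.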
Before long, the flashes and thunderclaps became simultaneous. A fork of lightning struck a tree in a neighbouring field and drove the livestock into a panic. They careered madly around their yards, colliding with fences and gates. The sheer ferocity of nature’s display made a deep impression on the young Ernest.
Much damage was done that night. Stock lay dead from lightning strikes and many trees were uprooted. However, as is the lot of country folk the world over, disasters were put aside, the routine of daily life resumed, and memories faded.
For one small boy, however, the storm was a watershed, and the path of his life took a new direction. Ernest became introverted and thoughtful. Sometimes he would sit at the table with knife and fork in hand, and stare into space, contemplating some problem of a scientific nature. His siblings drew their father’s attention to him with their friendly banter: “Dad, look at Ern. He’s away again.”
In the following years, the young Rutherford developed into his country’s greatest scientist, and departed from the land of his birth, as so many of his generation did, to pursue his career overseas. There he achieved the Nobel Prize, the greatest of all accolades, and became recognised as “the father of nuclear physics.” His work motivated others, and in the dark period of the Second World War, it was turned to the construction of the ultimate weapon.
And so it came to be that on the 6th August, 1945, sixty-five years after the death of that lonely butterfly in an Amazonian rainforest, an American B-29 bomber took off from Tinian, one of the Northern Mariana Islands, and set a course for Hiroshima. This time the fiery tempest that was unleashed changed not just one man’s destiny, but the course of human history for all time.
***
In this Chapter, we see how Rutherford’s experimental work set the basis for a deep understanding of the interactions inside the atom, and led eventually to one of the most accurate theories in the whole of physics, a theory with a precision equivalent to predicting the distance from the earth to the moon with an accuracy of half a millimetre.1 And yet, hidden away in the very core of this masterwork is a caveat, an essential, rather dodgy, sleight-of-hand, to remind us that we are but human, and our knowledge is therefore imperfect.
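The earth-moon comparison can be checked with a one-line estimate: half a millimetre over the mean earth-moon distance (about 384,400 km, an assumed standard value) corresponds to roughly one part in a trillion, the level of precision we will meet again later in this chapter.

```python
# Relative precision: half a millimetre over the mean earth-moon distance.
moon_distance_m = 384_400e3   # mean earth-moon distance (assumed standard value)
uncertainty_m = 0.5e-3        # half a millimetre

relative_precision = uncertainty_m / moon_distance_m
print(relative_precision)     # ~ 1.3e-12, i.e. about one part in a trillion
```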
First, however, it is necessary to refresh and extend our understanding of two concepts that we have encountered already in previous chapters: particles and fields.
8.2 Particles
In Chap. 5, we have seen how the idea that matter in the universe is composed of elementary particles, called atoms, has been around at least since the time of the early Greeks. By the start of the 20th Century, it was recognised that electromagnetic radiation also exhibits corpuscular properties in some circumstances. Any doubt about the atomic structure of matter—and there had been many doubters because of the difficulty of observing atoms—had been dispelled by Einstein’s paper in 1905 explaining Brownian motion (see Chap. 5).
The 19th Century had also seen the discovery of the electron, the fundamental particle of electricity. In 1897, the British physicist J. J. Thomson reported in The Philosophical Magazine his experiments on the strange cathode rays emitted from a metal electrode in an evacuated glass tube. His ingeniously designed experiments used deflections of the cathode rays by electric and magnetic fields to verify that the rays were indeed composed of electrified particles, now known as electrons, and went on to determine the ratio of the electric charge of the particles to their mass.
A number of other rays were also discovered towards the end of the 19th Century. These were X-rays, produced by Wilhelm Röntgen in 1895, and the alpha, beta and gamma rays emitted during the radioactive decay of uranium: following Henri Becquerel's discovery of uranium radioactivity in 1896, Ernest Rutherford distinguished the alpha and beta rays in 1899, and Paul Villard identified the gamma rays in 1900. Further experiments to determine the electric charge-to-mass ratio of the alpha and beta particles found that the positively charged alpha particle was most probably a helium atom with its two electrons stripped away, and that the negatively charged beta particles were electrons. Gamma rays were a very energetic form of electromagnetic radiation. Both X-rays and gamma rays, and also visible light, heat rays (infra-red radiation) and chemical rays (ultra-violet radiation), are now known to be made of photons of different frequencies (see Fig. 5.1).
So at the end of the 19th Century there was a widespread belief that apart from a few wayward examples (electrons, and the particles emitted in radioactive decay), the fundamental building blocks of matter are atoms. Such a belief did not survive for very long. Indulging in a pursuit that is fairly common among curious children, physicists began smashing objects together to see what lay inside.

Trajectories of alpha particles scattered from a gold nucleus in the Rutherford Model of the atom
This model immediately leads to a novel question: what is the atomic nucleus made of? From a series of experiments, Rutherford inferred that the nucleus was constructed of two new particles: protons and neutrons. The proton is the positively-charged nucleus of the hydrogen atom; the neutron is a very similar particle to the proton, but is uncharged. Its existence was confirmed in 1932 by James Chadwick. In the same year, the positive electron, or positron, was discovered by Carl Anderson. This was another particle whose existence had been postulated, in this case by theorist Paul Dirac, before its actual discovery.
Dirac developed a theory that integrated the quantum mechanics of Schrödinger and Heisenberg with Einstein’s special relativity for a charged particle. He was struck by the beauty, which for a physicist usually means symmetry (and relative simplicity), of his equation. He noticed that the equation would work just as well for positively charged particles as for the negatively charged electrons. He then proposed that for every particle there exists a corresponding anti-particle, with nearly identical properties to the original particle, except that it has the opposite electric charge. This led to the concept of antimatter, which consists of anti-particles in the same way that matter is constructed from ordinary particles.
As a result, in the first few decades of the 20th Century physicists and chemists alike had arrived at a fairly consistent picture of the structure of matter. All substances were composed of chemical compounds, whose basic building blocks were molecules, which were themselves constructed from an arrangement of atoms. The simplest possibility was a material (known as an element) constructed from only one type of atom. The atoms were not indivisible, as had been assumed by Democritus and Dalton, but were composed of a nucleus surrounded by a cloud of electrons. The nucleus was made up of neutrons and protons. It appeared that protons, neutrons and electrons, together with photons, were all that was necessary to explain the world around us. Physicists at this time had reason to feel well-satisfied, even smug, with their achievements.
However, this happy state of self-congratulation lasted only for a short time before other particles began to be discovered, demanding an explanation for their existence. Also, one would expect the positively charged protons to repel each other electrostatically and fly apart, rather than suffer confinement within the bounds of the nucleus, unless there were some stronger, as yet unknown, nuclear forces binding them together.
Before addressing these problems, however, we must digress to explore how particles interact with each other. Besides the direct collision of one particle with another, it was clear that particles must also interact in a more indirect manner. This phenomenon was dubbed action at a distance, and it takes place when the like poles of two magnets repel each other, or two electrically charged objects deflect each other. It involves ideas gradually developed over many years, until coming to fruition in the 19th Century with the new concept of a field.2
We have already explored gravitational “action at a distance”, aka the gravitational field, in the last Chapter, and seen how Einstein explained this phenomenon as a consequence of the warping of the fabric of space–time by the large mass of heavy objects. An analogous explanation does not exist for electromagnetic fields, or the fields produced by nuclear forces. This is one reason why it has proved so difficult to unify Quantum Mechanics and the Theory of Relativity into a theory of everything (TOE), describing all the fundamental natural forces.
8.3 Fields and Relativistic QM
Action at a distance was never a concept embraced readily by physicists. Newton’s theory of gravity was criticised by Descartes and Leibniz on the grounds that he had not explained the nature of gravity, which appeared in his writings as almost supernatural. A quarter of a century after the initial publication of his ideas in Principia Mathematica, Newton addressed these criticisms. He admitted he did not understand how gravity exerted its pull on objects across empty space.
Similar conceptual difficulties arise when considering the mutual interaction of magnets and of static electric charges. Gradually, in the nineteenth century, these non-local influences were attributed to fields which surround the particles. For instance, an electric field emanating from a charged particle is capable of influencing another charged particle some distance removed. Conversely, the field produced by the second particle can influence the first, resulting in a chicken-and-egg dilemma.
Now let us suppose that the first electrically charged particle is moved. How does the electric field in the vicinity of the second particle behave? Does it change instantaneously as the first particle moves, or is there a time lag? These questions were addressed by Gauss and Faraday, before being resolved by James Clerk Maxwell, as we have seen in Chap. 5, in a seminal paper in the second half of the 19th Century.
Maxwell found that not only did the electric field propagate out from a moving electric charge with a definite speed, but the moving electric charge generated a magnetic field, which also propagated. These propagating intertwined fields became known as electromagnetic radiation and the study of their properties was called Electromagnetism.
Maxwell went even further: he was able to predict the speed of propagation of the electromagnetic radiation from two independent physical constants which could be measured in the laboratory. The value he obtained for this speed was close to another well-known physical constant, the speed of light. Not believing this to be just a coincidence, he then made the bold assertion that light was actually a form of electromagnetic radiation. His work united the three separate disciplines of electricity, magnetism and optics under the banner of electromagnetism, and became one of the bedrocks of what is now known as classical physics.
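Maxwell's prediction is easy to reproduce today from the two laboratory constants he used, the permeability and permittivity of free space. A quick numerical check (modern SI values are assumed as inputs):

```python
import math

# Speed of electromagnetic waves from two independent laboratory constants,
# as Maxwell found: c = 1 / sqrt(mu_0 * epsilon_0).
mu_0 = 4 * math.pi * 1e-7     # vacuum permeability, N/A^2 (pre-2019 SI value)
epsilon_0 = 8.8541878128e-12  # vacuum permittivity, F/m

c = 1.0 / math.sqrt(mu_0 * epsilon_0)
print(c)  # ~ 2.998e8 m/s -- the measured speed of light
```

Neither constant has anything obvious to do with light, which is what made the agreement so striking.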
We now turn our attention to a phenomenon known as the line spectra of atoms, whose explanation provided some of the most convincing evidence for the veracity of Quantum Mechanics. In the non-relativistic Quantum Mechanics of Schrödinger, the energies of the electrons in the cloud surrounding the nucleus of an atom may only have certain particular values, or energy levels. The value of the energy in these levels is obtained from the solution of the fundamental equation of QM, the Schrödinger Equation. Electromagnetic radiation is emitted at discrete frequencies when electrons transition from a higher to a lower energy state. This is quite different from the predictions of classical physics, where a spectrum continuous across all frequencies is expected.
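For hydrogen, the Schrödinger Equation gives the levels in a simple closed form, E_n = -13.6 eV / n², and the wavelength of each spectral line follows from the difference between two levels. A sketch of this bookkeeping (the Rydberg energy and the eV-to-nanometre conversion are standard values, not taken from the text):

```python
RYDBERG_EV = 13.605693  # hydrogen ground-state binding energy, eV (standard value)
HC_EV_NM = 1239.842     # photon energy-to-wavelength conversion, eV*nm

def level_ev(n: int) -> float:
    """Energy of the n-th hydrogen level from the Schrodinger solution."""
    return -RYDBERG_EV / n**2

def line_nm(n_upper: int, n_lower: int) -> float:
    """Wavelength of the photon emitted in a transition n_upper -> n_lower."""
    photon_ev = level_ev(n_upper) - level_ev(n_lower)
    return HC_EV_NM / photon_ev

print(line_nm(3, 2))  # ~ 656 nm: the red Balmer line of hydrogen
```

The discreteness of the levels is what produces sharp lines rather than the continuous spectrum expected classically.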

Common salt (sodium chloride) sprinkled into a gas flame produces a bright yellow flare, characteristic of the decay of excited sodium atoms

A schematic picture of electron transitions between energy levels in an atom. Note that not all transitions (e.g. the dashed ones) are possible, as some are forbidden by quantum-mechanical selection rules
Now, just as water runs down a hill to take up a position of low potential energy, so electrons in the electron cloud would be expected to cascade down into the lowest possible energy level. The reason this does not happen is embodied in an Exclusion Principle proposed by Wolfgang Pauli in 1925. This principle states that no two identical electrons may occupy the same quantum state simultaneously.
There are rules arising from the Pauli Exclusion Principle for the way that electrons may place themselves systematically in the atomic energy levels. We need not go into these here. However, suffice it to say that these rules explain the Periodic Table of the Elements, discovered empirically by Mendeleev in the 19th Century, which predicts the properties of the chemical elements, including those of some elements that had not been discovered at that time.
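The counting behind those rules is simple: the n-th shell of an atom offers n² spatial states, and the Pauli principle allows two electrons (of opposite spin) in each, giving a capacity of 2n² electrons per shell. A sketch of that bookkeeping (the 2n² rule is standard atomic physics, not spelled out in the text):

```python
def shell_capacity(n: int) -> int:
    """Maximum number of electrons in shell n: n^2 spatial states x 2 spins."""
    return 2 * n**2

# Capacities of the first four shells: 2, 8, 18, 32 -- the numbers that
# underlie the row structure of Mendeleev's Periodic Table.
print([shell_capacity(n) for n in range(1, 5)])  # [2, 8, 18, 32]
```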

An example of the fine structure produced in two spectral lines from sodium by the application of a magnetic field. The upper half of the photograph shows the two lines when no magnetic field is applied, and the lower half demonstrates the appearance of additional lines when it is applied. Image in Public Domain (due to age): https://commons.wikimedia.org/wiki/File:ZeemanEffect.GIF (accessed 2020/9/4)
When Dirac’s extension of Schrödinger’s equation to include relativistic effects was applied to this problem, it was startlingly successful. His equation explained the fine structure in the line spectra even when the atoms were placed in a magnetic field. As a bonus, it predicted the intrinsic angular momentum carried by every electron. This quantity was dubbed the electron’s spin, in analogy with the angular momentum of a spinning ball.
There is a danger, however, in using models based on our everyday experience in domains that are far removed (in terms of size and velocity) from our daily life. It soon became apparent that this classical picture was an inaccurate description of the electron. Attempts to measure the size of the electron revealed that if the electron were a spinning ball, its surface would have to be moving faster than the velocity of light to produce the observed intrinsic angular momentum. This would be in conflict with the Theory of Relativity. The electron is now believed to contain no internal structure, and have zero size: it is, in fact, the archetypal mathematical point, albeit carrying angular momentum, electric charge and mass. In this sense it is truly a fundamental particle.

The Cheshire cat fades away, leaving behind only his grin. Alice’s Adventures in Wonderland by Lewis Carroll, 1865. Image by Sir John Tenniel, in Public Domain (due to age): https://commons.wikimedia.org/wiki/File:Alice_par_John_Tenniel_24.png (accessed 2020/05/30)
In the preceding paragraphs, we have given names to three intangible quantities (i.e. angular momentum, electric charge and mass), as if by so doing we could gain an understanding of what they are. In Chap. 2 we have discussed the difficulties associated with a search for understanding, and what we, as physicists, mean by this term. Angular momentum, electric charge and mass are found in physical equations that accurately predict the results of experiments and observations spanning the full gamut of spatial dimensions, from the smallest fundamental particles to clusters of galaxies spread throughout the universe. This wide-ranging predictive capability is an example of what a physicist means by understanding.
So if the Schrödinger Equation has been so successful at explaining the electronic structure of atoms, how does the Dirac Equation fare, including as it does aspects of the Theory of Relativity? We have already noted its success in explaining the mysterious fine structure that occurs in energy levels in the presence of a magnetic field.
Another basic difference between the predictions of Dirac’s theory and Schrödinger’s non-relativistic QM became immediately apparent: the Dirac theory allows negative energy solutions. But what does this mean: an electron with negative energy? It must have been tempting at first to simply dismiss these solutions as non-physical, and sweep them under the carpet. Dirac, however, was of a different mind. He decided to assume that negative-energy solutions were real and explore the implications of this assumption.
Firstly, he investigated what would be the effect of the energy carried by a photon impinging on the electronic system described by his equation. Non-relativistic QM allows an electron to be knocked from a lower to a higher level, or even out of the atom altogether. The latter process, where atomic electrons are completely stripped from atoms, is known as ionization. These two phenomena are also possible in Dirac’s theory; it would not be a tenable physical theory if this were not the case.

Pair Production: An incoming photon (γ) excites an electron out of the negative energy (blue) “Dirac Sea”, leaving behind a hole (positron) and a positive-energy free electron
We have now come to the end of the road for the Dirac equation. It was designed for, and was brilliantly successful at, describing the behaviour of a single electron. However, to fully account for pair production we need a theory that can describe the electron–electron interaction, and the electron–photon interaction. Such a theory, called Quantum Electrodynamics (or QED for short), will be outlined in the next Section. It is very relevant, even in view of our simplified approach in this book, for three reasons: first, because it is arguably the most precise theory ever to be developed in any field of science; second, because it can be applied to all interactions in nature except gravity and radioactivity; and third, because it has been used as a template for another theory, the Standard Model of Fundamental Particles, which is currently the most widely-accepted explanation for the “zoo” of newly discovered particles that we shall encounter in Chap. 9.
8.4 Quantum Electrodynamics (QED)
QED owes its development largely to three physicists, who shared the 1965 Nobel Prize for their “fundamental work in quantum electrodynamics, with deep-ploughing consequences for the physics of elementary particles”. These scientists, who worked independently, were Sin-Itiro Tomonaga, Julian Schwinger and Richard P. Feynman, and their approaches to the problem were quite different.
It is not uncommon in physics that different ways of tackling a problem lead to the same answer. For instance, Newton and Lagrange approached classical mechanics very differently. Newton employed forces, velocities and acceleration, which are vectors,3 whereas Lagrange used a sophisticated mathematical technique, known as the Calculus of Variations. Almost two centuries later, Quantum Mechanics was formulated using differential equations by Schrödinger and matrices by Werner Heisenberg. Both of these apparently disparate methods have been shown to be mathematically equivalent. In the case of QED, the approach developed by Feynman is the most transparent and will be discussed in the following.
Let us begin with a simple illustrative example: the reflection of light from a mirror (see Fig. 8.7). In the classical representation of Maxwell, light is a wave and its reflection is no more difficult to explain than the reflection of ripples from the side of a pond. But, as we have seen in Chap. 5, in QM light is composed of a stream of particles (photons), whose distribution is controlled by a probability.

Upper figure: rays of light travelling from A to B, after reflection from a mirror. Lower figure: the path length of the various rays, depending on their contact point with the mirror
The lower half of the figure is a graph of the path length travelled by all rays travelling from A to B via reflection from the mirror, and holds the key to the reconciliation of the quantum and classical results. From this plot, it is clear that the ray reflected at point C on the mirror’s surface has the shortest path length. Let us call this ray the primary ray. Rays close to the primary ray have path lengths almost equal to that of the primary ray. As we move away from the primary ray, the path length increases rapidly.
As we have seen in Chap. 5, the distribution of photons in the quantum mechanical picture is derived from the solution of a wave equation. All possible reflected paths from A to B must be considered. The paths of photons close to that of the primary ray have similar path lengths, and so their probabilities add up; in other words, these rays reinforce each other. The path lengths of photons far from the primary ray increase rapidly. The contribution from one ray in this region is cancelled by that from a neighbour with a path length one half-wavelength different from its own. The net effect is that the contributions from rays in this region to the overall probability cancel each other. The rays close to the primary ray therefore form overwhelmingly the major contribution to the total probability distribution, and rays impinging on the mirror away from C can normally be disregarded.
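This cancellation argument can be made concrete numerically: give every candidate reflection point a unit complex amplitude exp(ikL), where L is its path length, and add the amplitudes. Points near the primary ray add nearly in phase; points far away oscillate in phase and largely cancel. A toy sketch (the geometry, wavelength and region boundaries are arbitrary illustrative choices, not from the text):

```python
import numpy as np

wavelength = 0.05
k = 2 * np.pi / wavelength
a = np.array([-1.0, 1.0])           # source A, above the mirror (the x-axis)
b = np.array([1.0, 1.0])            # detector B

xs = np.linspace(-3.0, 3.0, 20001)  # candidate reflection points on the mirror
# Path length A -> (x, 0) -> B for every candidate reflection point
path = np.hypot(xs - a[0], a[1]) + np.hypot(b[0] - xs, b[1])
amp = np.exp(1j * k * path)         # unit amplitude, phase set by path length

central = abs(amp[np.abs(xs) < 0.5].sum())   # near the primary ray at x = 0
outer = abs(amp[np.abs(xs) >= 0.5].sum())    # far from the primary ray
print(central > outer)  # True: the outer contributions largely cancel
```

Summing complex phases like this is exactly the "sum over paths" bookkeeping that Feynman's approach to QED systematises.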
Is there any way we can allow rays far away from the primary ray to make their presence felt? Yes, there is. We can scratch lines across the mirror with a spacing such that only those rays with a path length equal to a multiple of the wavelength of the incident light are reflected. (We are assuming here that the incident light is monochromatic.) In this case, the probabilities from these rays will add up, and all rays will make a contribution to the reflected light arriving at point B. Such a scratched mirror is called a diffraction grating.
We have discussed this example in some detail to make a point that is necessary in what follows: all possible paths contribute to the probability of finding a photon arriving at point B. However, contributions from particular paths often cancel each other, leaving the majority of the contribution to relatively few of the possible paths.
We now return to what is the core of QED, the interaction of electrons with each other, and with photons. In so doing we will encounter some strange effects that will stretch our credulity. Some readers may be unwilling to accept these effects because they are too “crazy”. However, Richard Feynman summed up his reaction to this objection: “It’s the way nature works. If you don’t like it, go somewhere else. Go to another universe where the rules are simpler, and philosophically more pleasing” [3]. The fact is that the accuracy of QED in describing natural phenomena, with the exceptions of gravity and nuclear phenomena, is mind-boggling, and we cannot just discard it because we find its principles contrary to our preconceptions.

Examples of Feynman diagrams
Figure 8.8a displays the classical situation, with the electron remaining at the same position in space as time progresses. (In these diagrams, time is plotted vertically and spatial distance horizontally, even when the axes are not explicitly drawn.) In Fig. 8.8b, a photon has been spontaneously produced by the electron and reabsorbed a short time later. (The photon’s path is represented by the wiggly line.) Figure 8.8c shows a more complex case where two photons have been produced, and reabsorbed. There are an infinite number of such possibilities. These diagrams are called Feynman diagrams, after their inventor.
As we saw earlier, if its energy is sufficiently high, a photon can knock an electron out of the Dirac Sea, and leave behind a hole, which is interpreted as a positron. The corresponding Feynman diagram is shown in Fig. 8.8d. The electron and positron may recombine (or annihilate) a short time later to produce another photon, as shown in Fig. 8.8e.
In Fig. 8.8e we have taken advantage of the symmetry of the laws of physics with respect to the reversal of time. If we take a video of the collision of two billiard balls on a table, it is not possible just by examining it to decide whether the video is running forwards or backwards. An analogous symmetry in the equations of QM means that we cannot differentiate between an electron moving backwards in time and a positron moving forwards. In Fig. 8.8e we have observed this convention by depicting a positron as a backwards-moving electron.
Figure 8.8 contains only a few samples of Feynman diagrams. Their number is limitless, and in analogy with the rays in Fig. 8.7, all of them must be considered when calculating the quantum probabilities associated with any event. This may seem an impossible task. However, as we shall see later, many of these diagrams make negligible contributions to the quantum probability, and can be safely neglected, in analogy with the neglect of outlying rays in Fig. 8.7.
To conclude, we arrive at a picture in which the space surrounding an electron is alive with photons potentially being created and absorbed all the time, and electron–positron pairs being continuously created and annihilated. The particles that are created and recombine around the original particle are known as virtual particles. They differ from real particles in that, as a consequence of the Heisenberg Uncertainty Principle that we encountered in Chap. 5, they do not necessarily obey conservation laws. If a particle is short-lived, the time when it is in existence can be specified quite precisely. The Uncertainty Principle then tells us that the particle’s energy cannot be specified accurately, so the Law of Conservation of Energy may be briefly violated; and QM tells us that all significant diagrams must be considered when taking the sum over all probabilities, not just those where energy is conserved.
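The energy-time trade-off can be put in rough numbers. To conjure a virtual electron–positron pair, nature must "borrow" at least twice the electron rest energy; the Uncertainty Principle then limits the loan's duration to roughly ħ/ΔE. An order-of-magnitude estimate (standard physical constants assumed; numerical factors of order one are ignored):

```python
HBAR = 1.054571817e-34      # reduced Planck constant, J*s (standard value)
ELECTRON_REST_EV = 0.511e6  # electron rest energy, eV (standard value)
EV_TO_J = 1.602176634e-19   # electron-volt to joule conversion

# Energy "borrowed" to create an electron-positron pair (two rest masses)
delta_e = 2 * ELECTRON_REST_EV * EV_TO_J

# Order-of-magnitude lifetime allowed by the Uncertainty Principle
lifetime = HBAR / delta_e
print(lifetime)  # ~ 6e-22 s: far too brief for the pair to be observed directly
```

Such fleeting lifetimes are why virtual particles never show up in a detector, yet still influence measurable quantities.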

Collision between two electrons with exchange of a virtual photon
It is now time to put the theory that we have outlined to the test of experiment. Charged particles, such as the electron, which also have angular momentum, or spin, behave as though they are tiny magnets when placed in a magnetic field. Physicists say that these particles have a magnetic moment. This behaviour is predicted by Dirac’s theory, and observed by experiment. The fact that the theoretical and experimental values for the magnetic moment agree closely was regarded as a triumph of Dirac’s relativistic quantum mechanics.
However, as experiments became more accurate, a small discrepancy of about one part in a thousand emerged between Dirac’s theoretical predictions and the experimental measurements. Such a difference, expressed as a fraction of the Dirac prediction, was called the anomalous magnetic moment, and given the symbol αe. The current experimental value of αe is 0.001 159 652 180 73(28). The number in parentheses is the estimated experimental error in this result. To obtain this value requires measuring the electron magnetic moment to a precision of one part in a trillion (1 in 1012).
The first improvement on Dirac’s theoretical value (which was 2) for the electron magnetic moment was made by Julian Schwinger, who obtained αe = 0.001 161 4 in 1948. This result effectively made use of only the most important Feynman diagram. As more complex Feynman diagrams were added, this theoretical prediction became more and more refined. Remember: in principle all of the infinite number of Feynman diagrams should be included. However, the contributions from Feynman diagrams become less and less important as the diagrams become more complex, which makes it possible to terminate the summation at some stage. Currently the best analytical result for αe is 0.001 159 652 181 643 (764). The number in parentheses is the possible error attributed to neglected Feynman diagrams.
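Schwinger's 1948 result came from the single simplest diagram and has a famous closed form, a = α/2π. Using the modern measured value of the fine structure constant as an input (an assumption here; α itself is introduced just below), one line of arithmetic reproduces his number:

```python
import math

ALPHA = 1 / 137.035999  # fine structure constant (measured input, not derived)

# Schwinger's one-loop correction to the electron magnetic moment: a = alpha / (2 pi)
a_schwinger = ALPHA / (2 * math.pi)
print(a_schwinger)      # ~ 0.0011614, matching Schwinger's 1948 value

# Compare with the measured anomalous moment quoted in the text
a_experiment = 0.00115965218073
print(abs(a_schwinger - a_experiment) / a_experiment)  # ~ 0.0015, i.e. 0.15% off
```

The remaining 0.15% discrepancy is what the more complex Feynman diagrams account for.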
The agreement between theory and experiment here is astonishing. However, in all honesty it must be admitted that a skilled piece of legerdemain has taken place. The theoretical values depend on a fundamental physical constant known as the Fine Structure Constant (see Chap. 3), which, to add to the confusion, is represented by the symbol α. Physicists have not found a way to compute the value of α from first principles, and must rely on obtaining it from experimental observations, by adjusting it until the QED predictions agree with experimental measurements.


obtained from the magnetic moment experiments. (As we mentioned already in Chap. 3, the fine structure constant is usually expressed as a reciprocal.) The consistency of these two results is a demonstration of the internal consistency of QED. In addition, there are other physical quantities that can also be used to determine α, but the above two provide the most accurate values.
The low value of α (approximately 1/137) is the reason that higher order Feynman diagrams can be neglected in the summation process described above. Each Feynman diagram contains the factor α a number of times equal to the order of the diagram. As a consequence, a third order diagram carries three factors of α, a suppression of 1/(137 × 137 × 137), or 1/2,571,353. Orders higher than this can be safely neglected.
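The suppression is easy to verify: each additional order costs another factor of roughly 1/137, so contributions shrink geometrically. A quick check of the figure quoted above:

```python
# Each order in the diagram expansion is suppressed by another factor of
# alpha ~ 1/137, so an order-n diagram contributes at the level of (1/137)^n.
assert 137**3 == 2_571_353  # the 1/2,571,353 suppression quoted in the text

for order in range(1, 5):
    print(order, (1 / 137) ** order)
# The third-order factor is already below 4e-7 relative to the leading term,
# which is why the infinite sum over diagrams can be truncated in practice.
```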
Before concluding this Chapter we must draw attention to a very large elephant that has been hulking around the offices and laboratories of physicists since the middle of the last century. So accustomed have they become to its presence that they pursue their studies amidst the elephant’s trampling limbs and swinging trunk with scarcely a moment of concern. They have even given it a name: renormalisation.

This procedure is known as renormalisation. The electron mass also requires a similar sleight-of-hand to obtain the experimentally observed value.
We have discussed in Chap. 3 the difference between physicists and mathematicians in their attitudes to truth. No mathematician would consider subtracting two infinities from each other as anything but nonsense. To be honest, many physicists agree with them, but would be unwilling to confess it in public. Nobel Laureates can afford to be more forthright:
“I must say that I am very dissatisfied with the situation, because this so-called ‘good theory’ does involve neglecting infinities which appear in its equations, neglecting them in an arbitrary way. This is just not sensible mathematics. Sensible mathematics involves neglecting a quantity when it turns out to be small—not neglecting it just because it is infinitely great and you do not want it!” – Paul Dirac, Nobel Laureate, 1933 [4].
“But no matter how clever the word (renormalisation), it is still what I would call a dippy process! Having to resort to such hocus-pocus has prevented us from proving that the theory of quantum electrodynamics is mathematically self-consistent. It's surprising that the theory still hasn't been proved self-consistent one way or the other by now; I suspect that renormalization is not mathematically legitimate.” – Richard Feynman, Nobel Laureate, 1965 [5].
However, once the renormalisation procedure is carried out, the agreement of experiment with the predictions of QED is so astonishingly good that there can be little doubt that QED is largely correct. The infinities seem to arise from virtual particles with very high energies. The suggestion has been made that QED breaks down at these energies, in the same way that Newtonian mechanics breaks down at speeds close to the speed of light. Time will eventually tell. In the meantime, as long as QED produces the highly accurate results that it does, it will continue to be applied unreservedly.
In the next Chapter, we will discuss the attempts to extend the methodology of QED to forces other than the electromagnetic interaction, which will lead us to the Standard Model of Fundamental Particles.