image

PART 3

THE UNIVERSE

Physics, mathematics, and the universe—these three words form the angles of a tangled and intimate set of relations. Einstein and Pauli, the physicists, and Gödel and Russell, the mathematicians—each worked within a science that attempts to describe the actual world. That, at least, was the purpose of mathematics at its inception (Euclid's geometry) and the effect of physics in the nineteenth and early twentieth centuries.

THE LOGIC OF PARADOX

BEFORE WE TURN TO RELATIVITY, quantum mechanics, and the search for a unified theory, we shall take a brief detour into another world altogether—that of mathematical logic. Like physics, the world of mathematics underwent revolutionary changes throughout the nineteenth century and into the twentieth. In so doing, it virtually merged with the doctrines of analytical philosophy and logicism. Few players in the twin worlds of mathematics and logic were more influential than Russell and Gödel.

There is good reason for starting with mathematics. True, physics began from observations of the visible world. Yet it evolved through mathematics. From the late nineteenth century on, mathematics became an essential tool of the physicist. Though mathematics was never absent from early modern physics—Newton invented calculus, after all—in the twentieth century, mathematics overtook empiricism as the primary method for generating physics. What Newton could observe (albeit through eyes made keen by the imagination) in a falling apple or a setting moon no longer mattered in twentieth-century physics.

Mathematics does not describe the physical world per se. It does, however, problem-solve in the realms of space, number, form, and change. Through mathematics, Einstein explored four-dimensional geometries never seen on land or sea. Today, the mathematics of string theory yields nine space dimensions. These are not observable phenomena. The nine space dimensions cannot even be explained properly in nonmathematical terms.1 We are led to these proposals not through observation, but through mathematics. Einstein, a born physicist schooled in nineteenth-century empiricism, approached mathematical formalism with trepidation: “As far as the laws of mathematics refer to reality, they are not certain; as far as they are certain, they do not refer to reality.”2

For pragmatic physicists, mathematical formalism either works or not. Mathematics is a tool. Why does it work? No one has a satisfactory answer. In his celebrated paper, “The Unreasonable Effectiveness of Mathematics in the Natural Sciences,” the physicist Eugene Wigner pondered the seeming miracle of the mathematics-physics connection:

The mathematical formulation of the physicist's often crude experience leads in an uncanny number of cases to an amazingly accurate description of a large class of phenomena. This shows that the mathematical language has more to commend it than being the only language which we can speak; it shows that it is, in a very real sense, the correct language.3

Mathematics, like experimentation, sometimes yields surprising or even unwanted results, as if it, too, were beyond human control. In 1928, the British physicist Paul Dirac formulated an equation only to find that it predicted a hitherto unknown and startling particle, the antielectron (or positron). One might even say that it was not Dirac, but his equation (via a minus sign), that discovered antimatter.
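For readers who want to see the minus sign at work, here is the equation in modern covariant notation (not Dirac's original 1928 form), together with the energy relation its free-particle solutions obey; the negative branch is the one Dirac eventually reinterpreted as antimatter.

```latex
% The Dirac equation, modern covariant notation:
(i\hbar\,\gamma^{\mu}\partial_{\mu} - mc)\,\psi = 0
% Its free-particle solutions obey the relativistic energy relation
E = \pm\sqrt{p^{2}c^{2} + m^{2}c^{4}}
% The stubborn negative root -- the "minus sign" -- is the
% negative-energy branch that pointed to the antielectron.
```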

Alongside the brief and elegant Dirac equation, general relativity, with its phalanx of equations, is positively epical. It describes not a particle, but the structure of space-time in the universe. It is, nevertheless, a theory tied to observable phenomena, though the observation took place by way of Einstein's “thought experiment” as he imagined himself flying on a beam of light. But, as with Dirac, Einstein's relativity equations were wiser than their maker. As he pondered general relativity, Einstein realized, to his dismay, that the equations described an expanding universe. That was not his intent. To remedy matters, he proposed an emergency fix, a “cosmological term” or constant to keep the universe static. Not only was this fix ill received; it was also wrong. Twelve years later, Edwin Hubble proved that, far from being static, the universe was expanding. The cosmological constant was, in Einstein's view, the “greatest blunder” of his life.4 It was a blunder born of his preference for the physical and observable over the mathematical. In time, Einstein grew more trusting of mathematical formalism, but only because there seemed no other way to pursue his unified field project.

For Russell and Gödel, no such “practical” matters intruded into mathematics. Still, their work would lead to very practical ends. Out of Russell's system of logical notation and, even more importantly, Gödel's incompleteness theorems emerged the foundations for the computer revolution.

By the time Russell came to Princeton in 1943, he had, by his own admission, left mathematics and mathematical logic far behind.5 Still, his Principia Mathematica expressed the sheer prowess of predicate logic as much through its comprehensiveness as through its innovations. These included an improved notational system and a comprehensive “type” theory based on a hierarchy of “classes.” The Principia Mathematica inspired successive generations of twentieth-century philosophers: Wittgenstein, Rudolf Carnap, A. J. Ayer, W. V. Quine, and, indeed, the whole of twentieth-century analytical philosophy.

Twenty years later, Gödel, fresh from his dissertation on the completeness of first-order logic, formulated two proofs about formal systems rich enough to express arithmetic. (First-order logic differs from second-order in its relative power: First-order logic quantifies only over individuals; second-order logic also quantifies over properties of, and propositions about, individuals.) Gödel's two proofs became known as his “incompleteness” theorems. They brought about a paradigm shift as radical as those of “relativity” and “uncertainty.” At first, the shift was scarcely noticed. When Gödel made his announcement at a conference on epistemology, only one participant, the brilliant polymath John von Neumann, had an inkling of what the proofs implied. Only very slowly did their depth and breadth sink in. Still, true to the paradigm of paradigm shifts, Gödel's theorems met with resistance from a whole generation of logicians. No wonder: In two proofs, he had demonstrated without any doubt that (1) any consistent formal system that includes the arithmetic of counting numbers (that is, arithmetic using simple cardinal numbers—1, 2, 3, etc.) is incomplete, as there are statements in the system that can be neither proved nor disproved, and (2) a formal system that includes the arithmetic of counting numbers cannot be proved consistent within the system itself.
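Stated schematically—in modern notation rather than Gödel's own, and glossing over the technical requirement of ω-consistency in his original version—the two theorems run as follows for a consistent, effectively axiomatized system F containing basic arithmetic:

```latex
% First incompleteness theorem: there is a sentence G_F of F's own
% language such that neither it nor its negation is provable in F:
F \nvdash G_F \qquad \text{and} \qquad F \nvdash \neg G_F
% Second incompleteness theorem: F cannot prove its own consistency:
F \nvdash \mathrm{Con}(F)
```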

Gödel's proofs rocked the mathematical world, but it is not useful to exaggerate their effect. They did not cast logic or mathematics onto the garbage heap. On the contrary, logical systems were useful before Gödel's proofs, and they remained useful afterwards. Mathematics continued to depend on axioms and systems that, although “incomplete,” worked quite well. But absolute consistency and completeness, much sought as measures of the strength of mathematical systems, could not be found. Like “uncertainty,” Gödel's “incompleteness” suggests the limits of what can be formalized. It is always possible, according to Gödel's proofs, to find a statement that is true but that cannot be proved within the arithmetic system—a statement that may then be added as a new axiom. “Incompleteness” has as its positive formulation “inexhaustibility,” argues the Swedish mathematician and computer scientist Torkel Franzén. Gödel himself recognized the philosophical implications, carefully italicizing what cannot be:

It is this theorem [the second incompleteness theorem] which makes the incompletability of mathematics particularly evident. For, it makes it impossible that someone should set up a certain well-defined system of axioms and rules and consistently make the following assertion about it: All of these axioms and rules I perceive (with mathematical certitude) to be correct, and moreover I believe that they contain all of mathematics. If somebody makes such a statement he contradicts himself. For if he perceives the axioms under consideration to be correct, he also perceives (with the same certainty) that they are consistent. Hence, he has a mathematical insight not derivable from his axioms.6

If a formal system of arithmetic cannot be complete, nor proven consistent within itself (the negative formulation), then it must be (in theory) always open to another axiom, ad infinitum (the positive formulation).7

To explain “incompleteness,” we must look (briefly) to its place in philosophical history and (more briefly still) at what the proofs achieved.8 Incompleteness takes its place in—or, more precisely, responds to—a line of philosophical thought that began with Gottfried Leibniz, a true polymath whose expertise ranged from Chinese history to library science. Leibniz was Newton's contemporary and greatest rival: They discovered calculus simultaneously and independently. Leibniz once postulated that space was relative; Newton won that one, and space remained absolute until Einstein. As a logician, though, Leibniz was a towering presence. Into logic, Leibniz injected mathematics. The result was a symbolic logic that would unite mathematics and philosophy.

Important though Leibniz seems to us today, his works fell into some obscurity after his death. He was rediscovered by Kant, the last great Enlightenment philosopher, and then again at the end of the nineteenth century. In 1900, Bertrand Russell published The Philosophy of Leibniz. It was, as Russell writes, a “reconstruction of the system which Leibniz should have written”9 had he not been writing philosophy primers for princes. Thus did Russell identify Leibniz as the progenitor manqué of symbolic logic.

Logic became a vibrant and fashionable field towards the end of the nineteenth century: Giuseppe Peano, Ernst Schröder, and Charles Peirce, along with Gottlob Frege, were instrumental in its development. Their project was to rid mathematics of its cobwebs and clutter—ambiguities that in the past had worried no one. Now, in the era of science, mathematics must be systematized and established within the realm of logic. It must be demonstrated, as Russell said, that “all pure mathematics follows from purely logical premises and uses only concepts definable in logical terms.”10 Logicism was born.

Much influenced by Peano, Frege, and nineteenth-century formalism, Russell and his fellow Englishman Alfred North Whitehead launched the ambitious, almost foolhardy project that Frege's system of symbols began. Their plan: to derive all mathematical truths from a set of axioms and inference rules written in symbolic language. The idea grew out of the International Congress of Mathematicians of 1900, held in Paris. At that same conference, though not noted by Russell in his autobiography, another germ was planted. It was the celebrated Hilbert “challenge” for young mathematicians: twenty-three mathematical problems that called out for solution. What Hilbert hoped for was a mathematics without paradox.

Paradox there was. By 1902, Gottlob Frege had published the first volume of his two-volume treatise Grundgesetze der Arithmetik (The Basic Laws of Arithmetic), and the second was in press. In it, he proved that mathematics could be reduced to logic—or so he thought. Russell would prove otherwise by coming up with what was soon dubbed Russell's paradox.

The paradox came to him while he was at work on what would become his monumental Principia. Suddenly, he experienced “an intellectual setback.”11 Thinking about sets, he began to wonder about the “sets which are not members of themselves.” That would seem a simple concept. The set of all red convertibles is not a member of itself. However, the set of sets with more than one member is a member of that very set. What, then, of the set of all sets that are not members of themselves? Is the set of such sets also a member of that set? If so, we face a contradiction: If a set of all sets that are not members of themselves is a member of itself, then it is not actually a set of nonmembers, and vice versa. He pondered the contradiction for months, hoping to find a way out. In the end, he broke the news of the paradox to Frege in a letter. Frege's response, though gracious, left no doubt that he felt his life's work had been cast into disarray.
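In the set-builder notation of a later era (not Russell's own), the paradox fits in two lines:

```latex
% Let R be the set of all sets that are not members of themselves:
R = \{\, x \mid x \notin x \,\}
% Asking whether R is a member of itself yields the contradiction:
R \in R \iff R \notin R
```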

Nothing bedevils mathematicians like paradox. Hilbert attempted desperately to stave off all paradox or, better yet, banish it from mathematical systems. Russell, too, looked in horror upon the paradox he had discovered. His answer was laid out in a work that consumed him and Whitehead for the better part of a decade. Principia Mathematica began as a projected one-year, one-volume work. It grew hydra-headed into three volumes (a fourth was planned, but never realized).12 In it, Russell and Whitehead devised a system of symbolic notation, demonstrated the power of logicism, introduced notions of propositional function and logical construction, and set the stage for Gödel's metamathematics. To get around his own paradox, Russell proposed a hierarchy of sets: from simple members, to sets, to sets of sets, and so on. No intermingling was permitted in the hierarchy, thus preventing a “set-of-all-sets not-members-of-themselves” logical catastrophe.

Logicism, thus bolstered, spread throughout the philosophical world. In Vienna, especially, there developed a philosophical school of mathematical logic called the Vienna Circle. Its purpose was to found a modern approach to logic that would rid philosophy of metaphysics. Its founding members were Moritz Schlick (who would later be assassinated by a former student on the steps of the University of Vienna), Hans Hahn, Herbert Feigl, and Rudolf Carnap. As the circle grew in number and stature, it moved from smoke-filled Vienna cafés to the university. Meetings were held regularly on Thursday evenings.

In the midst of these antimetaphysicians came a Platonist in sheep's clothing: the young Kurt Gödel, invited as a matter of course by his dissertation adviser, Hans Hahn. He attended regularly for two years, beginning in 1926. As he sat in the shabby classroom observing the birth of logical positivism, Gödel kept his own counsel. His contributions were brilliant, but few. In the end, he let his proofs speak for him. It is ironic, even paradoxical, that the man who undermined the logic of mathematics was schooled by such devout logicists.13

Sometime in 1928, Gödel's attendance tapered off. He was at work on his dissertation, which proved the completeness of first-order (i.e., limited) logic. Then, he seems to have turned to what would become the first incompleteness theorem. On September 7, 1930, at a conference in Königsberg, Gödel delivered his seminal paper. It was the third and final day—an inauspicious time at any conference, usually reserved for low-impact papers on obscure topics or for organizational housekeeping. Always a man of few words, Gödel whittled his announcement down to a mere sentence. His “shining hour,” notes Rebecca Goldstein, was more like “30 seconds tops.” Subdued, uncharismatic, and, at the time, unknown, Gödel was unlikely to have made an impression. No mention of his sentence made it into the conference proceedings. Great men were at the conference: Rudolf Carnap, the father of logical positivism; Friedrich Waismann, Hans Reichenbach, and Hans Hahn, all members of the Vienna Circle; and John von Neumann, a student of David Hilbert, the grand old man of mathematics and head of mathematics at the University of Göttingen.

The Conference on Epistemology of the Exact Sciences, organized by a group of Berlin positivists, was too modest in scope to have drawn the great Hilbert, widely thought to be the greatest mathematician of his time. Hilbert did attend the umbrella conference of the Society of German Scientists and Physicians, held concurrently in Königsberg. Still, on that third day, as Gödel spoke, Hilbert's presence loomed large. Twice in the previous decades, Hilbert had issued formal challenges to fellow mathematicians and logicians. First, in 1900, speaking at the International Congress of Mathematicians in Paris, he had listed twenty-three critical “problems” that remained “unsettled” and exhorted true mathematicians to find the solutions. The most critical of these problems for our purposes was number two: Prove that the axioms of arithmetic are consistent and that, therefore, arithmetic is a formal system without contradiction. Then, at the 1928 International Congress of Mathematicians, held in Bologna, he lectured on “Problems in laying the foundations of mathematics.” Now, in addition to the question of “consistency,” Hilbert raised another fundamental question: Is it possible, using the axioms of a system, to prove or refute any proposition that is made in the language of that system? Without such proof, Hilbert acknowledged, mathematical logic was without bedrock.

At the heart of Gödel's proof is the Liar's paradox: “This statement is false”—a version of the Cretan Epimenides paradox: “All Cretans are liars.” The important thing about the paradox, for Gödel, is its circular, self-referential structure. By definition and design, the paradox references itself. If Epimenides the Cretan is a liar, then the statement “All Cretans are liars” must be false. So Cretans must be truthful—but if so, Epimenides’ proposition, that Cretans are liars, must be true. Likewise for the Liar's paradox: If the proposition “This statement is false” is true, then the statement must be false. In logic, a “well-formed proposition” is either true or false. The paradox does an end run around this either-or structure. The hermetic seal that keeps a logical system airtight is threatened by paradox. Nearly thirty years earlier, Russell had stopped Frege's philosophical program in its tracks with the paradoxical “set of all sets that are not members of themselves”—either it is or it is not; if it is, then it is not, and vice versa. In the context of “true or false,” then, the paradox always contradicts logic. But what if, instead of “true or false,” we substitute “provable or not provable”? The paradox is then stripped of its “content,” as it were, and made analytical. That is what Gödel did in his proofs.
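Schematically—and again in modern notation rather than Gödel's—the substitution looks like this: where the Liar asserts its own falsity, Gödel constructed a sentence that asserts its own unprovability.

```latex
% The Liar: a sentence asserting its own falsehood -- paradox.
L \iff \neg\,\mathrm{True}(L)
% Goedel's analytical twin: a sentence asserting its own
% unprovability within a formal system F:
G \iff \neg\,\mathrm{Prov}_{F}(\ulcorner G \urcorner)
% No paradox now: if F is consistent (in Goedel's suitably strong
% sense), G is neither provable nor refutable in F -- a true but
% unprovable statement.
```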

Gödel thus answered Hilbert's “completeness” question with a resounding “No.” Arithmetic may be consistent, but it is not possible to prove that consistency using the tools of arithmetic. In a way, Gödel's discovery seems to work well in the world of common sense: It is not possible to see the whole if one is part of the whole. Only from without can we discern the trees as a forest.

Unlike Heisenberg's “uncertainty,” word of which had sped around the world of physics, “incompleteness” took several years to find its audience. This may have been because the proofs were very difficult to understand, even for mathematicians. But resistance, as we have seen, played a part. When Gottlob Frege received Bertrand Russell's bombshell letter regarding the paradox that upended Frege's arithmetic program, the hapless Frege responded within a day: “Your discovery… left me thunderstruck,” he confessed. Yet he called it “a very remarkable one… [that] may perhaps lead to a great advance in logic, undesirable as it may seem at first sight.”14 It was, as Russell later said, a testament to Frege's “dedication to truth. His entire life's work was on the verge of completion… and upon finding that his fundamental assumption was in error, he responded with intellectual pleasure clearly submerging any feelings of personal disappointment.”15 By contrast, Hilbert's initial response to Gödel was anger, according to Paul Bernays.16 He made no attempt to contact Gödel or respond to the proof. Yet he must have understood how deleterious the impact was on his hope of setting mathematical logic on bedrock.

When, more than a decade later, Gödel and Russell met (however briefly) and corresponded (however obliquely), it seemed that Russell did not understand the proofs—indeed, in a letter written in 1963, he confessed to their having “puzzled” him.17 By mischance—or, more accurately, by means of Gödel's nearly pathological perfectionism—we will never know how Russell might have responded to Gödel's critique of him. In November 1942, Paul Arthur Schilpp, editor of The Library of Living Philosophers, asked Gödel to contribute to a volume dedicated to Russell's philosophy. Gödel accepted, to Schilpp's (initial) delight, then proceeded to tinker with his draft until the end of the following September, by which time Russell had finished his responses to the other essays and could spare “no leisure” for a reply to this latecomer. Gödel's effort elicited no more than a brief note of general praise for Gödel and an offhanded acknowledgement that the “Principia Mathematica was completed thirty-three years ago, and obviously, in view of subsequent advances in the subject, it needs amending in various ways.” Gödel's hopes for a “discussion” were dashed.

THE MECHANICAL WORLD

In 1912, Bertrand Russell asked the question: “Is there any knowledge in the world which is so certain that no reasonable man could doubt it?”18

This was, of course, the same question Descartes had asked three centuries earlier. To escape from the deadening authority of traditional theology, metaphysics, and morality, Descartes began by doubting everything: the existence of his body, of other people, of the world, of his own sanity. His “methodological skepticism” left him with but one certainty: He was thinking. Hence, “I think, therefore I am.” The operations of his mind, distinct from all other matter and perception, were his starting point. From there, Descartes meant to rebuild philosophy afresh by asserting substance dualism. He split all reality into mind on one side and matter on the other, with no bridge between them (although Descartes provisionally invoked God to fill the gap). During the philosophical wars that followed, Cartesian rationalists and empiricists like John Locke waged their battles against traditional authority with a flaming, righteous sword. Matter was to be explained only by a deterministic physics, without appeal to morality, religion, aesthetics, or other such “mental” qualities (“value-free,” as is now said).

In these wars for truth, modern science stood for doubt. Descartes urged us to doubt all—and doubt became the modern mode. Physics is a highly organized method for doubting appearance. Its purpose is to question what it encounters until what remains is what must be. A microscope strips away what we see to reveal the unseen. The seemingly “real” world is replaced by the scientifically “real” one. That scientifically “real” world, so different from that of the spirit, seems to work with regularity, according to invariable rules. Thus were born the underpinnings of the mechanical worldview—the “classical physics” which held sway until the twentieth century.

In 1874, Max Planck, the young son of a theology professor, entered the University of Munich. He had just graduated from the Königliches Maximilians Gymnasium, where he had excelled at mathematics and astronomy, and where he learned the fundamentals of physics, particularly the law of conservation of energy.

It is not unusual for students to be guided by the biases and visions of their advisers. Planck was the exception. In no uncertain terms did his University of Munich physics professor, Philipp von Jolly, warn him away from the field. All had been discovered, said Jolly. All that remained in physics was the mopping up of a few dusty corners. Planck's stubbornness was later vindicated when he became the default founder of quantum theory and a Nobel laureate.19

Jolly was hardly alone. Most physicists agreed—physics was a done deal. After all, Newton's equations had taken care of gravity and motion. As for the remaining forces, in 1864, James Clerk Maxwell presented his famous set of eponymous equations to the Royal Society. They identified light as an “electromagnetic wave,” and they laid down the basic laws for the forces of electricity and magnetism. The primary forces of nature were thus accounted for. What more could physics do?

RELATIVITY OF TIME AND SPACE

Mathematics is curiously intimate with, and revealing of, physical reality. In modern physics, we have discovered the universe's geometric structure and its knotted energies more by way of skeletal equations than by giant telescopes. The more physics uses mathematics, the more physical reality seems to oblige by offering its deepest secrets.

Common sense tells us that physics depends upon empirical data. Galileo's Tower of Pisa, Newton's prism, Young's double-slit box, Foucault's pendulum: Classical physics looked to the physical world for inspiration and confirmation. After Einstein, whose special and general relativity theories were, as he said, “wrested from Nature,” all of this would change. Theoretical physics is subject to the “lure of numbers,” argues David Lindley. Although physical theories require experimental verification, mathematical structure can make or break a hypothesis. For Lindley, Einstein's “general theory of relativity is the prime example of an idea that convinces by its mathematical structure and power.” The theory has been tested, most notably by Sir Arthur Eddington's measurements of starlight bending around the sun during the 1919 solar eclipse, observed from the island of Principe off Equatorial Guinea. Yet its authority, Lindley believes, comes primarily from its “beautiful theoretical framework.”20

How did mathematics gain the upper hand in physics? Perhaps it always had an advantage. Plato's Republic disdains the empirical: “Geometry is knowledge of the eternally existent.”21 Aristotle fixed his gaze scrupulously on the physical world. From Plato we inherit the suspicion that numbers are magical. Only when Descartes made the connection between “mathematics and the sensory world,” as Arthur I. Miller says, did mathematics (and especially geometry) suddenly emerge as a tool for deciphering the physical world. Pythagoras found harmony in the integers in music and in the spheres. Thus it seemed, too, for quantum mechanics. As Miller points out, Max Born called Bohr's analogy of the solar system to the atom a kind of “magic.”22

Modern science has long been rightly seen as a dissolvent of all certainties—especially physics, which posited uncertainty and wave-particle dualities, split the once-solid atom, and discovered the esoteric geometries of space-time. Yet physics remains a citadel of eternity in its ever-unchanging numbers. As we shall see, physicists have, in their theories of relativity and uncertainty, discovered mathematical formalisms called “universal constants”: the speed of light and Planck's constant, for instance. That is some consolation of an “eternal” kind, though ordinary humanity may not be much moved.

The word “relativity” eventually weighed upon Einstein like an albatross of imprecision. It implies opposition to “absolute”—yet Einstein's relativity theories are anchored to the absolute of absolutes: the speed of light. That speed alone remains absolute; time and space are relative to it. In later years, when Einstein fought his rearguard action against the relativism of quantum mechanics, his early discoveries came back to haunt him. “God does not play dice with the universe,” he said. But a devil's advocate might say, with Banesh Hoffmann, that Einstein had loosed the demons himself. His quantification of light via Planck is often deemed the inaugural leap into quantum theory. His special theory, with its equivalences of mass and energy, has as its legacy particle physics, the arena for quantum mechanics. Einstein died believing quantum physics to be incomplete in its description.

It is the blessing of youth that its energy is greater than its foresight. Einstein's miracle year produced four extraordinary papers: the first on light quanta, the second on the size of molecules, the third on Brownian motion and the existence of the atom, and the fourth on moving bodies—“special relativity.”

In 1896, the discovery of radioactivity inaugurated a search for the nucleus. In 1898, Marie Curie found two radioactive elements, and Ernest Rutherford started sorting out the alpha, beta, and gamma rays from radiation. In 1903, with Frederick Soddy, Rutherford explained radioactive decay, and, in 1911, he finally discovered the atomic nucleus. This set off the next wave of discoveries: Bohr's quantum theory of the atom in 1913; Chadwick's discovery of the neutron in 1932; artificial radioactivity, induced by the Joliot-Curies in 1934; and nuclear fission, discovered by Otto Hahn and Fritz Strassmann and explained by Lise Meitner and Otto Frisch in 1939.

In 1905, the young Einstein had launched both relativity and quantum physics. (Planck had discovered the quantum phenomenon, but Einstein started quantum physics by applying the concept to light quanta.) Relativity, special and general, was Einstein's singlehanded achievement. But few physicists specialized in it until after Einstein's death. Einstein worked on it by himself.

Quantum physics, however, attracted a crowd and needed them: The implications went in every direction. Einstein himself remained a most important contributor, continuing to publish important work on quantum problems even while laboring away at general relativity. In March 1916, he finally published the complete gravitational theory; in July, he began to publish three papers on quantum theory. Late in life, he told a friend that he had thought a hundred times more about quantum physics than about relativity. As usual, his thinking took a quite individual turn.

Accounts of Einstein's work usually pass quickly over this longest and, in some ways, most ambitious part of his career. For one thing, the period is easily dismissed as evidence of declining genius. Indeed, this last effort turned out to be a failure, having added little to the progress of physics. It opened no paths for the future. Recent unifying attempts go in an entirely different direction.

But the question of what happened in Einstein's search for unity may cast light on a neglected side of science. Science is collective and cumulative. Its processes ensure that even its surpassing contributions will ultimately “fail.” We rarely see this side of science. Instead, science is presented as a series of dramatic breakthroughs, new pathways, inventions, new frontiers. True, the biologist Robert Hooke and the chemist Robert Boyle were once in the vanguard of discovery; now they have moved back into the fabric of the grand design. Historians of science know that Boyle discovered the relationship between pressure and volume (Boyle's law), but how many working physicists could fairly describe the achievements of the Swedish physicist Svante Arrhenius, whose work on ions predicted the greenhouse effect? Does it matter? Physics is in many ways a self-erasing discipline, concerned only with the latest leading edge of research.

In later life, Einstein was overtaken by history twice. First, by his personal history: In his forties and fifties, his gifts, quickness, and prowess inevitably faded. This happens to everyone. His extraordinary discovery of general relativity may well have made him too confident that he could then master the intricacies of a unified theory. But scientists are overtaken by history in the special way just noted: Sooner or later, the most surpassing achievements will be modified, supplanted, or rebuilt. Newton's gravitational theory eventually became a special case of Einstein's general relativity. If that could happen to Newton, it could happen to Einstein—and indeed, Einstein predicted that it would.

In the world of drama, only Shakespeare can be said to rival Sophocles. Literary works are unique and their truths timeless. But the same cannot be said of science. If Einstein had not modified Newton, someone else would have sooner or later, with whatever variations. Science guarantees that all its members will be challenged and essentially usurped—though it might be more accurate to say superseded, displaced, corrected, or improved. Einstein was challenged in just this way when quantum mechanics emerged in 1925, for its view of the universe contradicted Einstein's most fervent beliefs. If gravity is ever joined successfully to quantum mechanics, even the theory of relativity may well be modified.

Are we to imagine that if Einstein at fifty had retained his youthful powers of imagination, he would have been able to find a unifying theory? That does not seem realistic. No matter how much genius was applied, the time was not ripe: Too little was known about electromagnetism and the fundamental forces of the atom. Strong and weak nuclear forces had yet to be discovered. Yet Einstein, trusting to his formalisms and intuition, dismissed the new evidence of quantum mechanics.

ON THE QUANTUM PATH

In 1905, Einstein, working on a problem called the “photoelectric effect,” wrote a paper that some say gave birth to the quantum revolution.23 This paper, modestly titled “On a Heuristic Viewpoint Concerning the Production and Transformation of Light,” offered to solve problems rather than build theory. Still, its language portended radical change:

It seems to me that the observations associated with black-body radiation, fluorescence, the production of cathode rays by ultraviolet light, and other related phenomena connected with the emission or transformation of light are more readily understood if one assumes that the energy of light is discontinuously distributed in space. In accordance with the assumption to be considered here, the energy of a light ray spreading out from a point source is not continuously distributed over an increasing space but consists of a finite number of energy quanta which are localized at points in space, which move without dividing, and which can only be produced and absorbed as complete units.24 [emphasis added]

“Discontinuously… not continuously distributed… quanta”—these are words that fly in the face of Newton and his classical world. It was as if, in his most productive year, Einstein spoke what much later he would, like Shelley's Prometheus, “repent me.”

Like all science, quantum physics was built on the shoulders of history. As we have seen, by 1900, the world of physics had split into warring camps or worldviews, each still under the sway of classical Newtonian physics. On one side was the Enlightenment faction, which believed the world a clockwork mechanism. On the other side were the converts to electromagnetic theory, which under Maxwell had wedded electricity and magnetism into a unified theory. Yet problems remained that could not be explained by either side. Among them were three bedeviling conundrums: black-body radiation, the photoelectric effect, and bright-line spectra, which could neither be explained by classical physics nor ignored (certainly not by young physicists out to make their name). Solving them led inexorably to the quantum revolution.25

The first inkling of quantum theory came from a lab in Berlin. In 1900, Max Planck was a forty-year-old physicist with expertise in chemistry and thermodynamics. He was also a fervent believer in the second law of thermodynamics, which states that in a closed system, entropy (loosely translated as “disorder,” but also meaning heat loss) increases, and the increase cannot be reversed. It was Planck's appreciation of this law and his refusal to give it up that led him to the black-body solution.

Planck was one of the few theoretical physicists amid the cadres of experimental scientists populating German universities. He was to some extent off the academic radar, and thus had the freedom to contemplate problems that spanned disciplines. Although focused on thermodynamics, he knew of electromagnetism. Maxwell, remember, had demonstrated that light is an electromagnetic wave. Planck believed in Maxwell's findings. More obviously, he noted in the black-body question the intersection of heat (his field), light, and electromagnetism.

Planck set out to examine black-body radiation in the context of the second law. Again, the black-body problem had resisted explanation by classical physics, but held out much practical promise. The reason is that radiation is emitted from the black body in the form of light—specifically, color. For centuries, potters had observed that the glow within their kilns changed color as the heat rose, like a spectrum: from red through orange and yellow to white, each successive color indicating a hotter temperature. What can black-body radiation tell us about the behavior of radiation?

As was their wont, German physicists tackled the problem experimentally. They created a black body—an enclosure that would absorb all the electromagnetic radiation it could—with a small hole through which electromagnetic waves could escape. Then they observed the color distribution of radiation coming through the hole. They hoped in this way to study the electromagnetic waves within, just as Maxwell had studied heated gases. To be sure, the question was of more than theoretical interest. Electricity was big business at the turn of the century. If a means for measuring emitted energy could be discovered, electrical companies would be able to quantify their product and provide the greatest amount of power using the least energy.

Two formulas emerged. Unfortunately, one worked well for high frequencies, but not for low frequencies; the other worked only for low frequencies. In fact, at higher frequencies, the second formula produced an impossible result. Light, as had been established, comes in waves, and waves, unlike particles, can multiply infinitely—they just get closer and closer together. As the waves crowded ever closer at higher frequencies, the energy emitted would, theoretically, grow without limit as the spectrum entered the ultraviolet zone and beyond. It would become an “ultraviolet catastrophe,” emitting radiation with infinite power! Fortunately, nothing like that happens in real life. The black-body heat finds equilibrium, just as Maxwell's gas had. The problem was how to formulate an equation that would explain what was happening.
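The low-frequency formula in question is now known as the Rayleigh-Jeans law, and the catastrophe is visible in its algebra: the energy density climbs with the square of the frequency, without limit.

```latex
% Rayleigh-Jeans law: classical energy density of black-body
% radiation at frequency nu and temperature T (k is Boltzmann's
% constant, c the speed of light):
u(\nu, T) = \frac{8\pi \nu^{2}}{c^{3}}\, kT
% Summed over all frequencies, the total energy diverges:
\int_{0}^{\infty} u(\nu, T)\, d\nu = \infty
% -- the "ultraviolet catastrophe."
```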

Planck tried formulae that were tied to the second law of thermodynamics, using standard theories of radiation. Nothing worked. At last, he tried a thought experiment. What if, instead of waves, the black-body chamber were full of oscillating, discrete charges? As the interior heated up, the charges would oscillate at all of the possible frequencies. Planck reworked his formula to fit the experimental results, using a constant to make the equation work. With great consternation, he pondered the result. Only by imagining the electromagnetic waves as discrete elements, using statistics, as Maxwell had with heated gas, and ascribing to the resulting discontinuity a constant (h), could he fit the formula to reality. Planck, forever the enemy of what would become quantum physics, had “quantized” radiation, at least within the black body. At the time, however, he preferred to think of his “constant” as a useful trick rather than a key to atomic architecture.
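Planck's repaired formula, in its modern form, shows how the constant h tames the catastrophe: at high frequencies the exponential in the denominator overwhelms the ν³ above it, and the energy falls back toward zero.

```latex
% Planck's law of black-body radiation (1900):
u(\nu, T) = \frac{8\pi h \nu^{3}}{c^{3}} \cdot
            \frac{1}{e^{h\nu / kT} - 1}
% The quantization hidden inside: each oscillator's energy comes
% only in discrete packets,
E = n h \nu, \qquad n = 0, 1, 2, \ldots
% where h, Planck's constant, is about 6.626 x 10^-34 joule-seconds.
```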

Ironically, it was Einstein, implacable foe of later quantum theory, who established a theoretical basis for Planck's constant. In essence, he quantized light. He also solved a second conundrum: the photoelectric effect. As Planck was pondering the black-body problem, another German, Philipp Lenard, was experimenting with cathode rays and light beams. He tried shining light of a single frequency onto a thin metal foil. The result was startling, to say the least. Out of the foil came electrons. The light had somehow ejected electrons. If light were a wave, ejecting electrons by jostling them with its energy, then a brighter, more intense beam should eject electrons of greater energy. But Lenard found otherwise: Below a certain threshold frequency, even a very bright beam ejected no electrons; above that threshold, brightening the beam ejected more electrons, but the energy of each remained the same. Only raising the light's frequency raised the electrons' energy. Nothing in classical physics explained Lenard's findings.

The explanation came in Einstein's second paper published in 1905. If, Einstein argued, we consider light not as waves but as photons, we can then explain the photoelectric effect—the emission of electrons that occurs when light is shined on metal. If light acts not as a continuous wave, but as a collection of particles, then the photoelectric effect is nothing more than photons colliding with electrons—tiny particle colliding with tiny particle. His idea of photons explained another problem: that of cooling bodies, which gave off heat not in a neat, continuous way, but discontinuously, “jumping” from temperature to temperature. Newtonian physics could not account for this phenomenon. Quantum physics did. For his discovery of photons—not for relativity—Einstein won the Nobel Prize.
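Einstein's account reduces to a single line of bookkeeping: one photon in, one electron out, with the metal taking a fixed toll. (The symbol φ for that toll, the “work function,” is the modern convention, not Einstein's notation.)

```latex
% Einstein's photoelectric equation (1905):
E_{\max} = h\nu - \phi
% E_max: maximum kinetic energy of an ejected electron
% h nu:  energy of one photon of frequency nu
% phi:   the "work function," the minimum energy needed to free
%        an electron from the metal
% Below the threshold frequency nu_0 = phi / h, no electrons emerge,
% however bright the beam; above it, brightness changes only how many.
```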

It was not long before someone—it turned out to be the French aristocrat turned physicist Louis de Broglie—asked the obvious question: If light waves behaved like particles, could particles of matter behave like waves?

Before we hear de Broglie's answer, we must take a detour into the atom itself. As Einstein was investigating the large story of gravity and the universe, others searched in the opposite direction, trying to understand the architecture and behavior of invisible particles.

The idea of the atom was first proposed in Greece in the fifth century B.C. by the philosopher Democritus. If one chisels away at a rock, he reasoned, one is left, eventually, with a fragment so small that it cannot be divided again. These are atomos—the Greek word for “indivisible.” In the battle over scientific theory, Democritus lost out when Aristotle sided with Empedocles, who defined matter in terms of the four basic elements of fire, water, air, and earth. The atom was lost for more than a millennium. When it resurfaced, in the seventeenth and eighteenth centuries, science found its way back to the atom through the successive findings of Nicolaus Copernicus, Isaac Newton, Christiaan Huygens, Robert Boyle, Daniel Bernoulli, Joseph Priestley, and Antoine Lavoisier. In 1778, Lavoisier renamed the gas Priestley had isolated “oxygen.” It was among the first elements of the new chemistry to be isolated and named.

Then came John Dalton, a teacher and scientist in Manchester, a city at the heart of the English Industrial Revolution. Blessed with typical British weather, Manchester was an ideal location for Dalton, a keen observer who kept meticulous notes, to study fog. He knew from Lavoisier that oxygen combined with hydrogen to make water. In fog he found clarity: Water could take the form of air, just as it could of ice. What made this possible? The answer was—atoms. In air, the atoms were spaced far apart; in solids, atoms bunched together. For the next century, scientists discovered, analyzed, and classified elements. Still, the atomic structure, by definition invisible, remained a mystery.

Toward the end of the century, the veil began to lift. At Cambridge, a young mathematician, J. J. Thomson, was put in charge of the Cavendish Laboratory. Under Thomson, the Cavendish flourished, attracting first-rate students and researchers. Its fame was solidified in 1897, when Thomson discovered the electron (which he called the “corpuscle”) by isolating the particles that make up cathode rays. With the venerable Lord Kelvin, Thomson proposed a rather chunky atomic structure, a souplike concoction with floating electrons, dubbed the “plum pudding model.”

As one might expect, the plum pudding model found few backers besides Thomson and Kelvin. Fortunately, the Cavendish nurtured great students. In 1895, when applicants from abroad were first admitted, Ernest Rutherford, fresh off the boat from New Zealand, appeared at the door.26 The experience of working with Thomson changed Rutherford's life. He became an atomic specialist, landed at the University of Manchester, and, in 1909, conducted a “most incredible” experiment. With his students Hans Geiger and Ernest Marsden, he shot alpha particles (bundles of neutrons and protons emitted by radium) through a thin sheet of gold foil. Most of the particles passed through. A few, though, bounced back. The plum pudding model had no hard centers to stop the alpha particles. How to model this phenomenon? Rutherford borrowed the image of the solar system, with electrons circling an interior nucleus. The Rutherford model was not without problems. Still, it “worked,” just as Newton's gravity had. By envisioning the solar system model, Rutherford and his students measured the nuclei of different elements. They could now explain atomic number and nuclear weight with much greater clarity. Over the next few years, Rutherford looked deep into the atom, and in 1917 he became the first scientist to “split” the atom by bombarding a nitrogen nucleus, transforming it into oxygen and emitting hydrogen. The “solar system” model is still with us. It is useful and easy to visualize.

As Rutherford had revised Thomson's plum pudding model, so would a Rutherford student rethink the atom as solar system. Niels Bohr came to Manchester in 1912, armed with a complete set of Dickens from which to learn English. He had a doctorate from Copenhagen University and an impressive background in electron theory. Little wonder that he had sought out Rutherford's laboratory. Rutherford was an ideal teacher—cheerful, avuncular, and inspirational. His laboratory, if a bit rollicking, teemed with ideas and energy. He was known to sing “Onward Christian Soldiers” to his student-troops, his booming voice preceding him as he swept from room to room.

At Manchester, Bohr tackled the inherent problem of the solar system model with typical Continental audacity. He knew that Rutherford's model was wrong according to classical physics. An electron circling the nucleus would emit energy (an accelerating charge must radiate) and thus spiral into the nucleus. The atom would collapse, and matter would not exist. Bohr stabilized the model by abandoning classical physics. His electrons would move in fixed orbits around the nucleus. Each orbit corresponded to an energy level. The lowest energy level was closest to the nucleus.

To reach these conclusions, Bohr himself made a quantum leap. If the energy loss, like Planck's quanta, is discrete and particle-like rather than continuous and wavelike, then electrons would emit fixed amounts of radiation when they move from one orbit to another. This “jump,” Bohr reasoned, is as discontinuous as Planck's black-body charges and Einstein's photons. The energy of a particle changes (rises or falls) in discrete quantities. In other words, like Isaac Asimov's spaceships, electrons “jump” instantaneously through space, from one orbit to another. When an electron jumps from a higher energy orbit to a lower one, it emits light. When an electron jumps from a lower energy orbit to a higher one, farther from the nucleus, it does so because it has absorbed energy from some other source. This happens, for instance, when a chlorophyll molecule in a maple leaf or the metal hood of a black SUV absorbs light. The chlorophyll molecule converts the absorbed light into food for the tree; the black SUV's atoms radiate the energy back as heat, electron by electron, sufficient to fry an egg. They do so not by emitting heat continuously, but discontinuously, by emitting “quantum” amounts of heat generated when an electron “jumps” from a higher to a lower energy state.
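Bohr's rule fits in one line: the light emitted or absorbed in a jump carries exactly the energy difference between the two orbits. For hydrogen, his model also yields the now-famous ladder of energy levels.

```latex
% Bohr's frequency condition: a jump between orbits with energies
% E_i (initial) and E_f (final) emits or absorbs a photon of
% frequency nu such that
h\nu = E_i - E_f
% For hydrogen, the n-th allowed orbit has energy
E_n = -\frac{13.6\ \mathrm{eV}}{n^{2}}, \qquad n = 1, 2, 3, \ldots
% The integer n is the discontinuity: no orbits exist in between.
```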

Discontinuity is the key concept here. No longer was physics solely within the classical realm. The quantum moment had arrived. Into the fray stepped a new generation of young theorists unattached to classical physics and chafing at its inadequacies. Of these, Pauli, Heisenberg, Paul Dirac, Louis de Broglie, and Max Born stood out. In rapid succession, from 1914 to 1927, came the building blocks of quantum physics: confirmation of stationary states within the atom (James Franck and Gustav Hertz); confirmation that matter was both particle and wave (Arthur Compton and de Broglie); Pauli's exclusion principle; matrix mechanics; and two sets of statistics for counting particles (Bose-Einstein and Fermi-Dirac).

Far from settling matters, though, these discoveries demolished Bohr's atomic model. Its death knell sounded in 1924, when de Broglie's doctoral thesis argued that matter was not just particles, but also waves. He did so in part by applying to all matter the lessons of Einstein's photon. The “pilot” waves that follow matter through space are not incidental, but have frequencies directly related to the particle's motion.
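De Broglie's proposal can be written in one equation: every particle of momentum p trails a wave of wavelength λ, with Planck's constant once again serving as the bridge between the two natures.

```latex
% The de Broglie relation (1924):
\lambda = \frac{h}{p}
% lambda: wavelength of the associated matter wave
% p:      the particle's momentum (p = mv, nonrelativistically)
% For everyday objects p is huge and lambda immeasurably small;
% for an electron, lambda is comparable to atomic spacings -- which
% is why crystals can diffract electron beams.
```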

Matter, like radiation and light, now possessed this dual nature. No longer was physics divided into two camps, as de Broglie remarked in his Nobel Prize speech. The two conceptions of physics—matter, governed by Newtonian mechanics, and radiation, envisioned as traveling waves—were now “united.”27 Bohr's model, shackled to the image of electrons as particles, no longer stood. Its demise plunged the subatomic world into the same state of disarray that had befallen macrophysics with Einstein's theories of relativity. Suddenly, our intuitive sense of the world no longer held true. Beneath (and above) our world of appearances, there exist wholly different worlds. In one, all motion is relative except for the speed of light. In the other, particles are waves and the reverse, obeying laws that contradict even Einstein's revolutionary laws.

THE COPENHAGEN INTERPRETATION

Into the breach of Bohr's atomic theory, now in tatters, stepped Werner Heisenberg. Fresh from a year of apprenticeship with Max Born, Heisenberg was acknowledged to be a brilliant theorist with an aversion to experimental physics.28 He and Pauli met Bohr at the Göttingen lectures of 1922. So taken was Bohr with Heisenberg's questions that he proposed a walk up Hain Mountain. During that afternoon, wrote Heisenberg, “my real scientific career… began.”29 It was the first of many conversations, often heated, between the father of quantum physics and the daring and inspired Heisenberg. Pauli often served as intercessor when disputes between the two threatened progress. It worked. Together, Heisenberg and Bohr forged a complete theory that would become known as the “Copenhagen interpretation.”

Still, it was a tangled relationship—one that became more tangled in 1941 when Heisenberg and Bohr took their famous evening stroll through a park in German-occupied Copenhagen. At that meeting between the German patriot and the Danish Jew, Heisenberg did or did not try to extract from Bohr atomic secrets; did or did not hope to discover the extent of the Allies' atomic program; did or did not suggest the immorality of atomic weapons. What happened during that evening stroll has been a matter of dispute ever since and fodder for Michael Frayn's play “Copenhagen.” (That their meeting took place in a woodland is in itself interesting. Nature was the backdrop for several “leaps” in quantum theory, most famously when Heisenberg's hay fever forced him into seclusion on a North Sea island, where he pondered atomic structure and thought up matrix mechanics, the first formulation of quantum mechanics.)

In the happy years of the 1920s, Bohr played the diffident, sometimes disapproving father, Heisenberg the rebellious and brilliant prodigy. In his three years at Copenhagen, from 1924 through 1927, Heisenberg proved his worth. His first major contribution was a formula that figured the energy states within an atom. Max Born and Pascual Jordan extended the formula into a true matrix mechanics with which all frequencies of the spectrum could be figured. Heisenberg's second contribution was less formalistic and much more incendiary. The uncertainty principle by its very nature contradicted classical physics and, in a way, challenged the very essence of modern science. Throughout the development of classical physics, it was assumed that perfectly accurate measurement was possible, in ideal conditions. Only the crudeness of our measuring apparatus stood between our results and the object's true dimensions. Heisenberg said no, it is impossible to “see” sufficiently into the atom to know for certain what processes—wave or particle—one is measuring. Further, whatever means we use to measure will inevitably disturb the element under scrutiny. Thus, the “observer” will affect and distort the “observed.”
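In its later standard form (Heisenberg's 1927 argument, subsequently sharpened by E. H. Kennard), the principle is a precise inequality rather than a slogan:

```latex
% Heisenberg's uncertainty principle, position-momentum form:
\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}
% Delta x: uncertainty in position
% Delta p: uncertainty in momentum
% hbar = h / (2 pi), the reduced Planck constant
% The product has a floor: sharpening one measurement necessarily
% blurs the other, no matter how refined the apparatus.
```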

For Bohr, Heisenberg's uncertainty principle was too limited. It focused only on conditions that occurred during observation, and in effect ignored the “wave” state. Bohr insisted (vehemently) on much more. Little was gained, he argued, from ignoring empirical evidence.

The wave-particle duality that so vexed quantum physics had a very solid empirical history. It began with an experiment, now replicated in physics classrooms throughout the world, called the “double-slit experiment.”

In 1801, Thomas Young (a physicist, physician, and Egyptologist who in his spare time deciphered the Rosetta Stone) devised a mechanism to analyze light: “I made a small hole in a window shutter, and covered it with a thick piece of paper, which I perforated with a fine needle,” he told the Royal Society in 1803. Thus began what has been called the most beautiful experiment in physics.30 When Young “split” the sunlight by dividing it with a thin card, he observed, projected onto the wall, “fringes of colour on either side of the shadow”—clear evidence of interference or diffraction. He was astonished. Light, according to Newton, was made of particles. Yet as it traveled past Young's thin card, it diffracted, just like a wave breaking on a jetty. Startling as this was for Young, a 1927 variation on the double-slit experiment came up with an even more astounding result: Electron beams from a nickel crystal produced diffraction. Matter, like light, was shown to behave like waves.
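The geometry of Young's fringes is elementary and worth seeing: bright bands appear wherever the paths from the two openings differ by a whole number of wavelengths.

```latex
% Double-slit interference: for slit separation d and viewing
% angle theta, bright fringes (constructive interference) occur
% where the path difference equals a whole number of wavelengths:
d \sin\theta = m\lambda, \qquad m = 0, \pm 1, \pm 2, \ldots
% Waves produce these alternating bands; streams of classical
% particles would not.
```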

Young's experiment was the basis for much resistance to Einstein's theory of photons. After all, Young had proved that light was wave, not particle. Einstein countered with proof of light's particle nature. In 1915, Robert Millikan, after ten years of experimentation, reluctantly concluded that Einstein's equations were correct. In 1923, Arthur Compton confirmed the particle nature of electromagnetic quanta by observing the scattering of X-rays from electrons.

Thus, the paradox: Light and, as de Broglie proved, matter are both wave and particle. If physics had challenged our intuitive sense of the world with relativity, it now seemed to have done away with intuition altogether. After all, these states—wave and particle—are quite different. A wave is continuous and nonlocal, spread out over a large area. Particles are discrete, indivisible, and local. Richard Feynman calls the uncertainty principle a “logical tightrope on which we must walk if we wish to describe nature successfully.”31 We must think anew, says Feynman, “in a certain special way” to avoid “inconsistencies.” The awkward phrasing is unusually revealing. Quantum physics challenges not only our intuitive perception, but the limits of language. When Bohr and Heisenberg argued over the uncertainty principle, more was at stake than the utility of the principle itself. In order for a theory to “work,” it must not only explain evidence; it must also gain acceptance among practitioners. Bohr was older than Heisenberg and certainly more sympathetic to the general state of alarm over quantum mechanics. Perhaps, too, he was more understanding of our psychological need to visualize (that is, imagine, in its etymological sense) our world. Something in addition to de Broglie's elegant proofs was needed to bridge the gap between particle/discontinuity (the side favored by Heisenberg) and wave/continuity.

The answer, Bohr came to believe, was “complementarity.” Rather than see particle and wave in opposition, as mutually exclusive, might we not accept both as true? Particle and wave are interdependent; so, too, are classical physics and quantum theory. Here, seemingly, was an attempt to forge connections. Yet beneath the gentle-sounding word, complementarity posed a radical idea. Not only was exact measurement impossible, but the ambiguity lay in the properties themselves. Even before measurement, the atomic system is uncertain. All we can hope to attain by way of information lies in the realm of probability.

We have seen that quantum physics developed not through the genius of a single thinker, but through a series of conversations among both believers and nonbelievers. The most intense of these took place between Einstein and Bohr. It began well before Bohr's landmark presentation of the Copenhagen interpretation in 1927. Indeed, Einstein had contributed to the theory himself, not only by “discovering” the quanta, but also in collaborating with S. N. Bose to develop Bose-Einstein statistics (a set of formulae defining the statistical distribution of particles called bosons).

Einstein, unwilling to set foot in Fascist Italy, did not attend Bohr's lecture in Como. Within a few weeks, however, Bohr was scheduled to speak at the 1927 Solvay Conference in Brussels. Einstein came to the conference full of misgivings. What he heard was Bohr's paper, “The Quantum Postulate and the Recent Development of Atomic Theory.” He was horrified. In Bohr's words, “Einstein… expressed a deep concern over the extent to which causal account in space and time was abandoned in quantum mechanics.”32 Thus began the famous dialogue between Bohr and Einstein, one that would help Bohr refine his theory of complementarity and spur Einstein ever forward in his search for a theory that would subsume quantum mechanics and encompass both atomic physics and astrophysics.

EINSTEIN AND UNIFIED THEORY: CHASING THE RAINBOW

By 1927, Einstein had begun his assault on quantum mechanics. He never let up. It led to the most famous dispute of twentieth-century physics. Two eminent physicists, Albert Einstein and Niels Bohr, remained locked in battle for years, raising questions that remain unsettled even today.

Einstein's stubborn criticism struck many, particularly those of the younger generation, such as Pauli and Heisenberg, as folly, waste, even provocation. In their view, the greatest physicist and boldest pioneer of the age had become a reactionary. The young Robert Oppenheimer visited Princeton in 1935 and described Einstein as “completely cuckoo.”33

Yet the old Einstein was only acting like the young Einstein. In 1905, he had opposed the orthodoxy of those who thought that light flowed through the invisible material called “aether.” Now, he was a “heretic,” cast out by the new quantum orthodoxy. He became a prophet in the wilderness, preaching the need to refound all of physics on a revivified “classical” basis, closer to Newton than to Heisenberg. This time, though, he would not succeed.

He failed in part because subatomic physics was in its infancy, unable to yield sufficient data. Physics had to wait years before “powerful atom smashers would clarify the nature of subatomic matter,” notes Michio Kaku in his reverential Einstein's Cosmos.34 Had Einstein had our wealth of data, Kaku hazards, he might have succeeded. As it was, knowing nothing of the bosons, gluons, muon neutrinos, partons, leptons, and quarks—the whole atomic stew, as it were—he could not form a “picture” to guide his mathematics.35

Yet Einstein's grand project had an agenda, one that skewed his method. He not only disagreed with the new quantum theory; he detested it. Not only did quantum mechanics turn the universe into a game of dice by replacing causality with probability, but it also seemed to him inelegant and ungainly. He was especially repelled by Heisenberg's uncertainty principle and its implied threshold beyond which we must remain ignorant. In his effort to unify electromagnetism and gravity, Einstein remained within the fold of classical physics. However revolutionary his own notion of the relativity of space and time had been, his unified field theory would succeed not by accepting the newest revolution, but by subsuming it.

With anyone less sane and generous than Einstein—and Bohr—the dispute over quantum mechanics could have turned acrimonious. The issues were not technical, but philosophic. Einstein on the one hand, and Bohr, Pauli, and Einstein's close friend Max Born on the other, were arguing from their deepest convictions about what physics should be. When Paul Ehrenfest had to choose Bohr over Einstein, he began to sob. Yet no real personal bitterness or resentment surfaced in this dispute, which, while intense, was respectful, though it lasted for over thirty years. Unable to change each other's mind, Einstein and Bohr finally talked past each other. By the late 1930s, they were exhausted. Afterwards, when they met, they exchanged pleasantries, but no more. Einstein and Born continued to argue for decades. In 1926, Born wrote about the new quantum mechanics to Einstein, and got this reply:

Quantum mechanics is certainly imposing. But an inner voice tells me that it is not yet the real thing. The theory says a lot, but does not really bring us closer to the secret of the “old one.”36

Einstein's confident appeal to an “inner voice” upset the rigorous Born, but their friendship never wavered. In 1944, Einstein wrote Born that they had become “antipodean” in their scientific beliefs: “You believe in a God who plays dice, and I in complete law and order in a world which objectively exists.”37 Born answered a month later, suggesting that Einstein was not really up to date with the arguments. “[G]ive Pauli a cue,” wrote Born, “and he will trot them out.” On one of Born's later manuscripts on the topic, Einstein wrote in the margins such remarks as “Blush, Born, blush!” and “Ugh.” To which the astonished Born replied: “Do you really believe that the whole of quantum mechanics is just a phantom?”38

In their letters, mutual affection mingles with frustration, especially from Born. In 1948, Einstein and Born exchanged what appear to be plaintive cries in the night:

(Einstein): I am sending you a short essay which, at Pauli's suggestion, I have sent to Switzerland to be printed. I beg you please to overcome your aversion long enough in this instance to read this brief piece as if you had not yet formed any opinion of your own, but had only just arrived as a visitor from Mars…. I am… inclined to believe that the description of quantum mechanics… has to be regarded as an incomplete and indirect description of reality, to be replaced at some later date by a more complete and direct one….

(Born): …. Your example is too abstract for me and insufficiently precise to be useful as a beginning…. I am, of course, of a completely different opinion from you. For progress in physics has always moved from the intuitive towards the abstract…. Quantum mechanics and the quantum field theory both fail in important respects. But it seems to me that all the signs indicate that one has to be prepared for things which we older people will not like.39

They kept sparring, and in March 1954, Pauli finally intervened. He was visiting Princeton again, and Einstein had given him an article Born had written. Pauli, as usual, was blunt in his letter to Born:

[Einstein] is not at all annoyed with you, but only said you were a person who will not listen. This agrees with the impression I have formed insofar as I was unable to recognize Einstein whenever you talked about him in either your letter or your manuscript. It seemed to me as if you had erected some dummy Einstein for yourself, which you then knocked down with great pomp.40

Pauli's explanation of Einstein's “classical” view was “simple and striking,” Born remarked, but as usual, no mind was changed. As quantum theory developed from 1925 until his death in 1955, Einstein went his own way, ignoring the new findings and searching for a unified theory according to his original plan.

Only in Newton do we find a parallel for the first half of Einstein's life: his discovery at age twenty-six of relativity, the identity of mass and energy, and the quantum nature of light, and at age thirty-six of gravitational theory. But no one, not even Newton, rivals the later Einstein. Newton spent the last half of his life pondering theology and running the Royal Mint. Einstein was in his mid-forties when he began his search for the unified field. At that age, Bohr, Planck, and Rutherford, their great discoveries behind them, were slipping away from science into administration or teaching. To the day he died, Einstein toiled away at a problem more ambitious than any he had faced before, as if he still had his greatest discovery in front of him.

Today, Bohr's Copenhagen interpretation, as is the common fate of orthodoxies, has undergone dramatic revision. In a sense, quantum mechanics has been reworked into a near-unified theory called the “standard model,” a quantum field theory that reconciles special relativity with quantum mechanics—that is, a theory whose equations hold at every point in space. Although early proponents of quantum mechanics tried their hand at field theories, nothing succeeded fully until Richard Feynman and others tamed the necessary mathematics. The result was quantum electrodynamics, or QED, which describes the electromagnetic field and its interactions with charged particles to an accuracy of some ten decimal places.

When Einstein started his quest, he meant to unify gravitation and electromagnetism. Since then, the additional forces, strong and weak, have been joined with electromagnetism in the standard model. Gravity is the odd force out. It has yet to be folded in with the other three. Any “grand unified theory” (dubbed GUT) would unite the strong, weak, and electromagnetic forces; a “theory of everything” must go further and connect all four: gravitation, electromagnetism, and the two forces, strong and weak, that stabilize the nucleus. These forces have rightly been called primordial. They have existed from the beginning, or as near to it as can be imagined. When matter came to be, it found those primordial forces waiting to turn elemental matter into the universe we know. We do not know why gravity or electromagnetism or the strong or weak forces exist; like the universe, they simply do.

The holy grail of unified theory continues to attract believers. String theories, attempts to tweak the standard model into a form compatible with gravity, have long been contenders. Some hope that an ultimate “superstring” theory will be a “theory of everything.” Not yet. Among other problems, no variation of string theory can be verified experimentally. Michio Kaku dismisses the need for experiments:

The real problem is purely theoretical: if we are smart enough to completely solve the theory, we should find all of its solutions…. So far, no one on Earth is smart enough.41

In a way, modern physics has always been driven by the ambition to unify. In the view of one recent account, “modern physics is best summed up as a series of unifications.”42 In the late Middle Ages, it was assumed that matter on the earth, below the moon, differed from that of the celestial bodies, that each domain was governed by different laws, that each element of the “universe,” from the earth to the farthest quasar, was sui generis. Modern physics was born with Copernicus, Kepler, Galileo, and Newton. Each searched for a single system that would apply to all space, time, and matter.

For a time, the solution was at hand. Newton's mechanical system, complete and quite successful, prevailed for more than two centuries. Then, in 1820, Hans Christian Oersted, a Danish professor, noticed that a compass needle deflected whenever a nearby wire carried an electric current. Magnetism and electricity were, somehow, one and the same. Everything changed. Electromagnetism, theorized in its classical form by James Clerk Maxwell, involved phenomena quite different from Newton's solid bodies. Maxwell even demonstrated that light is an electromagnetic wave. Newton's mechanical model began to crumble around the edges. The struggle to reconcile electromagnetism with the forces of gravity, space, and time gave rise to relativity on the one hand and new atomic theories on the other.
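
Maxwell's demonstration can be compressed into a single relation (stated here in modern notation as orientation, not as Maxwell wrote it). His equations imply that electric and magnetic fields propagate together as a wave whose speed is fixed by two measured constants of electricity and magnetism:

\[
c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}} \approx 3 \times 10^8 \ \mathrm{m/s}.
\]

The number matched the measured speed of light, and so light itself had to be an electromagnetic wave.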

Pass a magnet over a heap of iron filings, and the filings line up in a certain direction, as if an enveloping force has moved them all—which is exactly what has happened. The field is the magnet's sphere of influence. Gravity is another such field; its power extends to all objects in its region, pressing us to the earth and preventing us (and everything in our world) from levitating. The clustering of the filings around the magnet's poles illustrates the anatomy of a field. The lines of force tell the filings where to move and how quickly. The field itself generates the power that moves the filings. In Newton, particular forces “told” each body how to move and at what speed and strength—like billiard balls struck by a cue. Maxwell's field, however, simplifies things greatly: It is at once force and medium—the vehicle and the source of energy.

Einstein sided with Maxwell and against Newton. Indeed, relativity was born from Faraday and Maxwell's “new type of law.” In Newton, one body was assumed to affect another instantaneously, no matter how vast the distance. But how could a force be transmitted “instantly” when nothing can go faster than the speed of light? And besides, what “medium” carried forces from the sun to the earth or from the earth to the moon? Until Einstein, physicists postulated an ether through which electromagnetic waves and light traveled. From Maxwell, Einstein began to see that there were myriad “local” fields everywhere in space, transmitting energy—a crucial step in the journey toward general relativity.

Einstein's quest for a unified theory was driven by his belief in the harmony of nature: Gravity and the atomic dimension had to be the same in some deep underlying way. That same conviction had sustained Einstein during his ten-year search for general relativity. No less a physicist than Max Planck told Einstein that his attempt to generalize relativity was futile. As Einstein biographer Abraham Pais remarks:

There is no evidence that anyone shared Einstein's views concerning the limitations imposed by gravitation on special relativity, nor that anyone was ready to follow his program for a tensor theory of gravitation. Only Lorentz had given him some encouragement.43

Despite the doubts of others, Einstein constructed his theory of general relativity. The self-confidence it gave him can hardly be overestimated. He expected to prevail again in his quest for a unified theory. In 1949, though stymied for a quarter century, he still evoked the heady experience that culminated in general relativity. After cataloguing the “impossibles” invoked by the quantum partisans—that is, the numerous ways in which Einstein's search for unity could not be reconciled with the evidence of quantum mechanics—Einstein stubbornly returned to his thesis:

All these remarks [i.e., against a unified field theory built on relativity] seem to me to be quite impressive. However, the question which is really determinative appears to me to be as follows: What can be attempted with some hope of success in view of the present situation of physical theory? At this point it is the experiences with the theory of gravitation which determine my expectations. These equations give, from my point of view, more warrant for the expectation to assert something precise than all the other equations of physics.44

Yet it was a false hope, or so it now seems. Einstein wanted to ground his unified theory upon general relativity—that is, his explanation of the force of gravity. Yet in recent history, it has become clear that the ground floor must be quantum physics, not gravity.

As he kept trying, he reversed a bedrock assumption that governed the first half of his career. Before he took up unified theory, Einstein was an empiricist who trusted only his intuition about physical reality, not the beauty or inner consistency of equations. His 1905 paper on special relativity contained very little mathematics—and simple mathematics, at that. Indeed, when a small academic industry soon began to formalize and refine the mathematics of special relativity, the young Einstein mocked the efforts as “superfluous learnedness.”45 In 1918, his mathematical friend Besso apparently suggested that, in the discovery of general relativity, mathematics had been more important than empiric knowledge. Einstein was irked and flatly disagreed:

You allude to the development of relativity theory. But I think that this development teaches us nearly the opposite: if a theory is to inspire confidence, it must be founded on facts susceptible of being generalized…. Never has a useful and fertile theory been found by purely speculative means.46

In 1918, the mathematician Hermann Weyl tried to unify gravity and electromagnetism and, amazingly, arrived at apparently successful equations. Einstein crushingly replied: “Your argument has a wonderful homogeneity. Except for not agreeing with reality, it is certainly a magnificent achievement of pure thought.”47 Again and again, Einstein championed “reality,” “facts,” and “experience.”

But as his search for a unified theory dragged on, his views changed. He began to speak of the “pure thought” of mathematical formalism as a uniquely privileged approach to the reality of physics. Here is the former empiricist preaching the new message in a 1933 lecture at Oxford:

I am convinced that we can discover by means of purely mathematical constructions the concepts and laws connecting them with each other, which furnish the key to the understanding of natural phenomena…. Experience remains, of course, the sole criterion of the physical utility of a mathematical construction. But the creative principle resides in mathematics. In a certain sense, therefore, I hold it true that pure thought can grasp reality, as the ancients dreamed.48

The phrase “purely mathematical constructions” comes as a shock. In 1921, still skeptical, he had put the matter with his customary lucidity: “[A]s far as the propositions of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.”49 Now, twelve years later, he had come to say the opposite: The more certain the equations, the more they refer to reality. Reversing the view he had expressed to Besso in 1918, he claimed that general relativity did spring mainly from its formalisms: “Coming as I did from skeptical empiricism of [Mach's] type… the gravity problem turned me into a believing rationalist, i.e., into a person who seeks the only reliable source of truth in mathematical simplicity.”50

Einstein had not turned into a mathematician. He had changed his method, not his goal. Mathematics for him was never the point, only a tool for doing physics. When the French mathematician Elie Cartan suggested a promising but complex theory, Einstein replied with his inimitable humor:

For the moment, the theory seems to me to be like a starved ape who, after a long search, has found an amazing coconut, but cannot open it; so he doesn't know whether there is anything inside.51

Still, the popular cartoon image of Einstein as a wild-eyed scientist standing before a blackboard crammed with incomprehensible equations springs from his work on general relativity. Such an image was not possible before Einstein. General relativity required such new and rarefied mathematics that in 1914, the great German mathematician David Hilbert said half-jokingly that “physics has become too difficult for the physicists.” After general relativity, theoretical physics and abstruse mathematics were ever more closely wedded.

In 1919, when proof came that the sun's field curved light, Einstein entered the pantheon alongside Newton. Satisfying enough for anyone else, but not for Einstein. He was unhappy that general relativity had produced another dualism at the center of physics—this time, matter versus field. Einstein found such incoherence in the structure of physics “intolerable.”

Could we not reject the concept of matter and build a pure field physics? What impresses our senses as matter is really a great concentration of energy into a comparatively small space. We could regard matter as the regions in space where the field is extremely strong.52

He thus sought an even more heroic theory, one able to dissolve matter into pure field laws. To do this, he had to enlarge general relativity, just as he had generalized special relativity.

Einstein's effort to find a unified “theory of everything” so clearly mirrors his earlier work that a brief look back at a few points is indispensable:

The famous perplexities of special relativity (1905)—peculiar clocks, shrinking distances—arise from a situation we do not encounter in our commonsense world. If we drive our car from New York, we know how many miles we are from that city, where it is, and how long it will take to drive back. We move about; New York is fixed in place.

But what if New York were also moving, as well as all the landmarks in between? How would we know exactly where we are and when something happens? Or, put otherwise: How can we do the physics? Special relativity accepts that all matter is in constant motion: Galaxies, planets, and all observers thereon are moving relative to what they observe. In this universal flying circus, there is no privileged space from which to measure, and no “absolute” time from which to count. Since no one can freeze all motion to get things utterly straight, each observer inevitably sees “simultaneous” events differently. Einstein's genius was to understand how we can nonetheless get an accurate measurement of time and distance, without which physics is stymied. First, he postulated that the laws of physics take the same form for every observer in uniform motion.53 Second, he postulated that the speed of light is the same for every such observer, no matter how the source or the observer moves, allowing us to measure the intervals between events reliably. From these postulates came an epochal redefining of time, space, and measurement, along with famously surprising insights into strangely behaving clocks, slowed time, twins who age faster or slower, and the equivalence of mass and energy: E=mc².
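
The strangely behaving clocks follow directly from the two postulates (a standard result, compressed here rather than quoted from Einstein's 1905 paper). A clock moving at speed $v$ relative to an observer is measured by that observer to tick more slowly, by the Lorentz factor:

\[
\Delta t' = \gamma\, \Delta t, \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}.
\]

At everyday speeds $\gamma$ is indistinguishable from 1; as $v$ approaches $c$, the slowing grows without bound. The same factor governs the shrinking distances and the twins who age at different rates.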

Special relativity, however, is limited to uniform motion, which is partly why railroad trains or spaceships are handy examples: They are man-made objects whose speed can be precisely controlled. But Einstein worried: “What has nature to do with our coordinate systems and their state of motion?”54 Indeed, as objects move through the universe, they “fall” and thus accelerate—and vice versa: The two motions are really equivalent. Once acceleration appears, so does gravity. Everything that falls gains thirty-two feet per second of speed with each second of falling. But since special relativity says nothing about acceleration, it also says nothing about gravity, and thus applies only when gravity is absent or negligible (as in subatomic dimensions, which is why atomic physics such as Dirac's equation needs only special relativity).

It is important to note that special relativity remains within the bounds of flat geometry—as did all physics before Einstein. General relativity changed all that in 1915. Euclidean geometry is not only flat (recall high school math), it is also empty; it can hardly describe how monstrous caldrons like our sun pour such fiery energy around them that their gravitational fields skew nearby space. Einstein needed a different geometry to describe a universe of energetic intensities and distended curvatures that seem positively surreal next to Newton's clockwork universe. Unlike Euclidean geometry's rigid structures, Einstein's geometry had to bend, flex, or “dimple” according to the energy or mass within it, a geometry not a backdrop for events, but actively part of the events it measures.

In Riemann's non-Euclidean geometry, Einstein found just what he needed. He adapted it to measure how huge masses mold curvatures in space through which lesser bodies “fall.” Here, bodies are not pushed or pulled by an outside force, as in Newtonian gravity; they simply “fall” in as straight a line as curved space-time allows—a geodesic, akin to a great circle drawn on the earth's globe.
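
The machinery Einstein built from Riemann's geometry can be summed up in one line, his 1915 field equations (given here in modern form as orientation; the source does not display them):

\[
G_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu},
\]

where the left side encodes the curvature of space-time and the right side the energy and matter within it. Matter tells space-time how to curve; curved space-time tells matter how to “fall.”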

But general relativity in turn was limited to bodies large enough to feel the power of gravitation. What of the subatomic world? Einstein's inevitable next move was to try joining gravity to the electromagnetic force, which reigns in the subatomic dimension. Soon, he took his first stab at a unified field theory.

On July 15, 1925, the German quantum physicist Max Born wrote to Einstein: “I am tremendously pleased with your view that the unification of gravitation with electrodynamics has at long last been successful.”55 Decades later, Born mused: “In those days we all thought that his objective… was attainable.” Born soon came to believe that Einstein's search was “a tragic error.”56

Einstein's early attempt assumed that there were two fundamental forces: gravitation, which assembles the planets and galaxies, and electromagnetism, from which all matter is built. (Remember that we now know there are two additional forces.) How do these forces compare? One aspect is their relative strength. Gravity is a very weak force, but gathers strength as it deploys across the vastness of space.57 The more matter it attracts, the more cumulatively powerful it becomes—until finally it can gather together and swing around the very galaxies. The electromagnetic force is enormously stronger than gravity, by a factor of ten followed by forty-two zeros—luckily so, since the electromagnetic force binds electrons to the nucleus.58 If gravitational force were stronger in relation to electromagnetic force, matter might dissolve away, and us with it.59 Further, the electromagnetic attraction of the nucleus keeps the negatively charged electrons, which repel one another violently, from tearing atoms and all matter to bits. The electromagnetic force subdues the anarchic tendencies of matter and brings stability to the atomic dimension. Thus can we bathe in waves and particles, light and radioactivity.
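
That factor of ten followed by forty-two zeros is easy to check. The sketch below is my own back-of-the-envelope illustration, not the author's; it uses standard values of the physical constants and compares the electric repulsion and gravitational attraction between two electrons. Because both forces weaken as the square of the distance, the separation cancels out of the ratio.

    # Ratio of electric to gravitational force between two electrons.
    # Both forces scale as 1/r^2, so the separation r cancels out.
    k   = 8.988e9    # Coulomb constant, N m^2 / C^2
    e   = 1.602e-19  # elementary charge, C
    G   = 6.674e-11  # gravitational constant, N m^2 / kg^2
    m_e = 9.109e-31  # electron mass, kg

    ratio = (k * e**2) / (G * m_e**2)
    print(f"F_electric / F_gravity ~ {ratio:.2e}")  # prints ~ 4.17e+42

The result, roughly 4 followed by forty-two zeros, is the figure the text cites.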

How these fields interact was a puzzle, but they must be related; it would be a strange universe otherwise, thought Einstein. If they differ in strength, they are similar in how they work. Both fields are generated by an excitation of matter: gravitation from excited mass (or energy), the electromagnetic field from excited electric charges. It would make sense in the process of unifying if we were to fit such hints and patterns together, as when looking for family resemblances by a common bulk of chin or curve of lip. Einstein had to match mathematical shapes or quirks of matter suggesting kinship: The bones of an equation about gravity might resemble one about electromagnetism; a frequency in an equation about electromagnetism might seem like an oscillation in one about gravity.

Yet the clues led to more puzzles. Thus, the electron's charge might play the same role in electromagnetism that mass does in gravity. However, relativity proved that mass varies with velocity, whereas electromagnetic charge never changes with velocity (charge is invariant). Even Einstein's success with gravity hindered as much as it helped. Having geometrized gravitation, he now sought an even more general geometry that, while fitting gravity, would include electromagnetism as well. Riemann's geometry worked beautifully for enormous bodies, but could not be applied to atomic phenomena. The charged electron seemed to need a very different geometry than gravity's mutable, swaying geometry—but what sort? Electrons do not really “orbit” the nucleus, as in popular imagination: They move in no usual spatial sense, but rather “up” and “down” in energy levels (quantum “leaps”). They are at once particle and wave; they spin (but only at fixed, quantized levels) and have angular momentum—and this only begins to broach the difficulties.

Einstein had two choices. He could keep the Riemannian framework, but expand the number of dimensions to five or more. Or he could keep the four dimensions, but find a substitute for Riemann's geometry. At one time or another, decade after decade, he pursued both of these possibilities. He explored four- and five-dimensional continuums, differential geometries, gauge transformations, absolute parallelism. He took apart his final field equations for general relativity, assigning the symmetric part to gravitation and the antisymmetric part to the electromagnetic field. He spent the years puzzling at chalk marks on the blackboard.

In 1925, when Einstein set aside his other work to find a unified theory, all seemed in place for success. General relativity provided the geometrizing approach he meant to extend to electromagnetism; he had been mulling the problem for seven years, since 1918; no physicist alive had a deeper intuition of what was physically possible or necessary, or the limits within which the new theory must work. He was still in his prime at forty-six. Max Born, no soft touch, predicted that “physics will be done in six months.”60

But unified theory was not to become another stroke of genius and insight. It was more like an aging engine fitfully turning over. Periodically, he would declare victory. After his first serious attempt in 1925, Einstein said: “I believe I have found the true solution.”61 He soon decided that he was wrong. He tried other approaches in 1927, and again in 1928—the latter broadcast as a victory by newspapers to eager readers. Einstein had to dampen the enthusiasm. In 1929, he again believed that he had prevailed, and even gave lectures in France and England. He retracted in 1931. In 1945, at age sixty-six, he published his final equations, but hardly with the overwhelming confidence he had expressed about general relativity. When queried by reporters, he said, “Come back and see me in twenty years.” He revised the equations in 1949 and 1954.62 Always, colleagues and friends bewailed his efforts. In 1932, Pauli was already complaining that Einstein's

never-failing inventiveness as well as his tenacious energy in the pursuit of [unification] guarantees us in recent years, on the average, one theory per annum.63

Einstein himself wondered how definitive even his final equations were. Perhaps, he said wryly, his critics were right, and the equations did not “correspond to nature”—the ultimate defeat.64 Pauli talked to him in 1954, and said that Einstein admitted

with his old directness and honesty, that he had not succeeded in proving the possibility of a pure field theory of matter. He regarded the problem as undecided.65

His final equations, appearing at last in 1945, had a muted reception. That year, the atom bombs exploded, the war ended, and physicists sought jobs in the booming field of particle physics. Einstein had become world famous for general relativity. As for unified field theory, it was beyond the horizon. He continued to work on unified field theory until the day he died.

Most physicists now see Einstein's theory as an intellectual feat, but one irrelevant to physics. A good theory should be able to predict important new insights and express earlier ones in some fertile new way, as general relativity did with Newton. Einstein's attempts do neither. Nor could he take into account the strong and weak nuclear forces. Today, the main contender for unifying all four forces is superstring theory. Most of its proponents pay homage to Einstein as a man “ahead of his time.” Still, his disdain for quantum mechanics might well have distanced him from today's unifiers. He tried to circumvent quantum physics by geometrizing electromagnetism in gravitation's image. String theorists have taken the inverse route by quantizing gravitation.

Meanwhile, Einstein's theory exists as a historical artifact in a scientific limbo. Physics does not linger over might-have-beens or maybes, unless they promise discoveries and insights. Careers are short, and the mainstream is where working theories are usually found. Einstein redirected the mainstream in his early work. But his attempts at a unified field theory banished him into the hinterland.

Postscript: The philosopher of science and theology Stanley Jaki, writing of current attempts to unify the forces, suggests that Gödel's theorem might apply. If so, says Jaki, such unification is impossible. No consistent formal system rich enough to contain arithmetic, Gödel reminds us, can be complete in its own terms.66 As Einstein strolled to his Princeton office with his friend Gödel, might they have mused about such a limitation on our knowledge of the world?

THE PERSISTENCE OF NATURE

What might Russell have gleaned from his Princeton afternoons at Einstein's house? Much more than he let on, perhaps. Pauli and Gödel were quite biased in favor of metaphysics—as we know, their interests extended to archetypal mythology (Pauli) and the paranormal (Gödel). In a 1946 letter to his colleague Markus Fierz, Pauli spoke of “the idea of the reality of the symbol.”67 Psychology was his link to the “real” world of symbols. It would be difficult to imagine “premises” more distant from Russell's.

Yet Russell's quarrel with Einstein must have been by far the richest and the most important. In a “Note on Non-Demonstrative Inference and Induction,” which Russell dictated to his wife in 1959, he offers a tantalizing clue: “My beliefs about induction underwent important modifications in the year 1944, chiefly owing to the discovery that induction used without common sense leads more often to false conclusions than to true ones.” He goes on to distinguish between pure induction and what he calls “scientific common sense.” Induction, indeed, does not figure in the “extralogical postulates” required by scientific inference. Here, Russell shows himself to be an empiricist (as ever) with a difference: Induction is “invalid as a logical principle” because it so easily falls into fallacy. Russell's examples of induction going wrong include the following: “No man alive has died, therefore probably all men alive are immortal.”68

Had Russell in fact given up empiricism, Einstein would have been delighted. In his contribution to the Library of Living Philosophers volume on Russell, published in 1944, Einstein bluntly objected to the Humean “fear of metaphysics” in Russell. Of course, Einstein was quite right. After abandoning Plato in his youth, Russell never let go of the empirical impulse. In a way, the gulf between Russell and Einstein was not enormous. Neither subscribed to what Einstein called the two illusions: “the unlimited penetrative power of thought” and “naïve realism, according to which things ‘are’ as they are perceived through our senses.”69 Einstein agreed that “thought acquires material content only through its relationship with… sensory material”—a statement that sounds suspiciously acquiescent to empiricism. But he rejected any attempt to base thought upon material reality, arguing that the “free creations of thought” are sufficiently valid if they are merely “connected with sensory experiences.” That is, thought is not created out of material things or the perceptions of material things. But thought can contribute to knowledge only if it coincides with the “sense experience” that comes to us from what is material. Einstein was, to borrow his own words, on the thought side of the “gulf—logically unbridgeable—which separates the world of sensory experiences from the world of concepts and propositions.”70

The format of each volume of The Library of Living Philosophers requires that the subject “reply” to each essay. Russell dutifully replied to Einstein's contribution. Russell's few words are respectful and pointed. He agreed with Einstein that the “fear of metaphysics is the contemporary malady”—lamentable especially for the tendency of contemporary philosophers to swallow empiricism wholesale, without “prob[ing] questions to the bottom.” Still, Russell approached the “gulf” between metaphysics and empiricism with a “bias… towards empiricism.” He is thus quick to refute Einstein's assertion that number is an example of the “free creations of thought.”71 As one contrary instance, Russell offered the obvious correlation between the decimal system and our ten fingers. For Einstein, desperately clinging to the hope of a mathematical solution to his unified field theory, Russell's empirical bent must have been an unpleasant reminder of those abandoned “generalizable facts” upon which his relativity theories were based.72

In 1949, Russell wrote “Einstein and the Theory of Relativity” for a BBC broadcast. In it, Russell praised modern physics for its “desire to avoid introducing into physics anything that, by its very nature, must be unobservable.” The consequence has been more abstraction in physics, as no longer are we permitted “to make pictures to ourselves of what goes on in atoms, or indeed of anything in the physical world.” The tongue-in-cheek of this quip aside, Russell put his finger on the paradox of evidence in physics. What Russell wanted was less of the unobservable to count as science: “[S]o long as the technique of science can survive, every diminution in the number of unobservables that are assumed is a gain. In this sense, Einstein took a long step forward.”73 Russell had recast Occam's razor* to fit modern physics, but how that law of parsimony can be reconciled with scientific creativity, much less a “theory of everything,” is hard to know.

If Einstein did waver in his commitment to experience, he never fully gave it up. Asked by Scientific American to explain his most recent unified field theory in nontechnical terms, Einstein obliged. The result, a difficult and abstract article published in 1950, conceded as much:

The skeptic will say, “It may well be true that this system of equations is reasonable from a logical standpoint. But this does not prove that it corresponds to nature.” You are right, dear skeptic. Experience alone can decide on truth.74

 

* William of Occam admonished, “Pluralitas non est ponenda sine necessitate,” which can be translated, loosely, as “Strip away unnecessary things.”