No human inquiry can be a science unless it pursues its path through mathematical exposition and demonstration.
—Leonardo da Vinci
To our utter frustration, Leonardo produced only a few paintings—although all are miraculous in quality. He may well have been the greatest painter, but painting was not his first love. It is likely, as many art historians have conjectured, that painting was too easy for him. His first love was science, fueled by an unquenchable passion to understand nature. Accordingly, it is in the science and technology developed in the past five hundred years that he would have found his deepest fascination. He frequently argued that painting was a science. At one level, at least, the converse of Leonardo’s claim also holds true—science is a form of art—and he especially would have appreciated the beauty it can reveal.
There is an elegant beauty in science, as well as in the scientist’s view of the universe, that few laymen experience, reality at this level often presenting itself as abstract solutions to arcane equations. Paralleling the development in art, the description of nature given by modern physics in the twentieth century is more abstract than in the paradigms that prevailed earlier. Leonard Shlain remarks, “the epiphany that inspired the book [Art and Physics] was a connection between the inscrutability of art and the impenetrability of new physics,” an event, he says, “that occurred at the same instant.”1 My own feeling is that both became more abstract, perhaps even counterintuitive, but that the mathematical “impenetrability” he ascribes to physics (even for the enlightened layman) was there from Newton’s time onward. The Lagrangian and Hamiltonian formulations of Newton’s classical mechanics in the eighteenth and nineteenth centuries, and the nineteenth-century mathematical descriptions of electricity and magnetism, were as hopelessly inaccessible to the layman as the mathematical machinations of modern physics are today. Although the mathematical beauty of the universe may remain esoteric and inaccessible, some aspects of its physical beauty—made visible by the complex instruments of the scientist, as well as by the largely qualitative descriptions of a number of uncommonly gifted scientist-expositors—are accessible to everyone.2 It is unfortunate that such elucidation was not available when D. H. Lawrence wrote in Pansies: Poems:
I like relativity and quantum theories
because I don’t understand them
and they make me feel as if space shifted about
like a swan that can’t settle,
refusing to sit still and be measured;
and as if atoms were an impulsive thing
always changing its mind.
Albert Einstein, with his faith in the mathematical underpinnings of natural law as well as in the internal consistency of his own mathematical derivation, displayed from the start a deep conviction of the validity of relativity. But in 1922, lamenting the realities of nationalism that pervaded even the world of science, Einstein pronounced: “If my theory of relativity is proven successful, Germany will claim me as a German and France will declare that I am a citizen of the world. Should the theory prove untrue, France will say that I am a German, and Germany will declare that I am a Jew.”3
The year 1905 proved to be Einstein’s annus mirabilis, witnessing the publication of five papers, three of which were of Nobel Prize quality. The special theory of relativity was published that year under the arcane title “On the Electrodynamics of Moving Bodies.” Einstein, in attempting to establish the generality (and universality) of the laws of physics for all inertial frames (frames of reference in uniform motion), postulated a pair of relatively simple principles. The first, also known as the principle of relativity and accepted since the seventeenth century, posited that an observer in an inertial frame could never perform an experiment entirely within his frame to detect the motion of his frame. The second, and more daring because of its counterintuitive nature, was that the speed of light is the same for all inertial observers, regardless of their relative velocities. The two postulates suggest the existence of an infinity of equivalent frames. The consequence of the theory is that the fundamental indefinables of physics—length, mass, and time—all emerge as relative, and the speed of light c becomes the only absolute. Although each of the indefinable fundamentals has a unique correct value in its own inertial frame, measurements by observers in all other frames, using their own measuring tools, yield an infinity of different but still correct values. Specifically, length (or distance) contracts, time dilates (or slows down), and mass increases as the relative velocity increases. An object simultaneously contracts as its mass increases, so its density increases at a compounded rate. At 87 percent of the speed of light, an object contracts to half its original length, its mass doubles, and its density is quadrupled. Along with a number of other equally startling relativistic effects, there is the equivalence of energy and mass, enshrined in the simple formula E = mc².
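The figures quoted for 87 percent of the speed of light follow directly from the Lorentz factor γ = 1/√(1 − v²/c²). A minimal sketch of the arithmetic (illustrative only; the variable names are mine, not the author’s):

```python
import math

def lorentz_factor(beta):
    """Lorentz factor gamma for a speed given as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

beta = math.sqrt(3) / 2              # ~0.866, the "87 percent" of the text
gamma = lorentz_factor(beta)         # equals 2 (to floating-point precision)

length_ratio = 1.0 / gamma           # lengths contract by 1/gamma
mass_ratio = gamma                   # (relativistic) mass grows by gamma
density_ratio = mass_ratio / length_ratio  # mass up, length down: gamma squared

print(gamma, length_ratio, mass_ratio, density_ratio)
```

At this speed γ = 2, so length halves, mass doubles, and density—mass divided by a contracted volume—quadruples, exactly the compounding the passage describes.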
The publication of the special theory effectively threw down a gauntlet, challenging all other theoretical physicists and mathematicians to formulate a general theory—a theory valid for accelerated frames as well. It is the formulation of the general theory by Einstein (1915) that attests to the man’s transcendent genius and marks him as the rightful heir to the mantle of Isaac Newton. It is general relativity that establishes the equivalence of gravitation and acceleration.
The underlying postulate of the general theory of relativity is that the effects of gravitation and those of accelerated frames are the same when observed locally, a principle known as the “equivalence principle.” In two enclosures—one a stationary elevator on earth, and the other, an identical enclosure located in a rocket ship accelerating in interstellar space (far from the gravitation of celestial bodies)—a number of simple thought experiments, Gedankenexperimenten, are performed. We shall assume that the acceleration of the rocket ship is exactly 9.8 m/sec², just as it is for bodies free-falling on the earth. The occupant of the stationary elevator can stand on a bathroom scale and look down to determine his weight. The occupant of the rocket ship accelerating at 1 g would determine his weight to be identical to his weight in the stationary elevator. In a second Gedankenexperiment the observer in the stationary elevator could hold a pair of apples in his outstretched hands, and when he released them, watch them accelerate downward at 9.8 m/sec². And the occupant of the elevator or enclosure in the rocket ship could release a pair of apples from his hands and watch the apples “fall” (in reality, watch the floor ascend) at 9.8 m/sec². From his vantage point, or “frame of reference,” the effects observed would be the same—or almost the same. A subtle difference would exist, in that the trajectories of the apples in the rocket ship would be precisely parallel all the way down, whereas the trajectories of the apples in the gravitational field, the “parallel lines,” could be extrapolated and found to converge at the center of the earth, 6,400 kilometers below the floor (just as in the case of Eratosthenes’s posts used in determining the radius of the earth).
Einstein’s field equations predicted that mass curves, or “warps,” space-time. Light passing in the vicinity of a celestial body traverses the region with a curved trajectory, following the curvature of space-time.4 The path that light takes is found to be the one that minimizes its time of travel. In 1919, four years after the publication of the general theory of relativity, British scientists under the leadership of the Cambridge astrophysicist Sir Arthur Eddington journeyed to the island of Príncipe, off the west coast of Africa (with a second team stationed at Sobral, Brazil), to perform an experiment testing the predictions of the general theory. A total solar eclipse was due, and a star hidden by the solar disk was predicted by relativity to become visible as the result of the light-bending effect of the sun’s gravitational field. Right on cue the moon blocked the disk of the sun, and the star directly behind the sun became visible on the periphery of the solar disk. The star’s light rays passing by the sun had been bent and redirected to the spots on earth where the eclipse was observed. Of course, extrapolating the star’s observed position in a straight line would put the star trillions of miles away from its actual known location. A massive body, deflecting light in this manner, operates as a “gravitational lens.”
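The magnitude of the deflection the eclipse teams were hunting can be estimated from general relativity’s formula θ = 4GM/(c²b) for a ray grazing the solar limb. A back-of-the-envelope sketch using standard values for the constants (an illustration, not a calculation from the text):

```python
# Deflection of a light ray grazing the sun, per general relativity:
# theta = 4GM / (c^2 b), with b the impact parameter (here the solar radius).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
R_SUN = 6.957e8      # solar radius, m
C = 2.998e8          # speed of light, m/s

theta_rad = 4 * G * M_SUN / (C ** 2 * R_SUN)
theta_arcsec = theta_rad * 206265      # radians -> seconds of arc

print(f"{theta_arcsec:.2f} arcseconds")  # ~1.75", the famous predicted shift
```

The result, about 1.75 seconds of arc, is twice the value Newtonian reasoning alone would give—which is why the eclipse measurement could discriminate between the two theories.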
In 1986 Princeton University astrophysicists photographed the same galaxy at two points slightly separated from each other due to the presence of an unseen mass intervening between the earth and the observed galaxy. (In reality, the actual galaxy was hidden precisely behind the massive invisible object acting as the gravitational lens.) Ten years later, the Hubble telescope photographed a perfect gravitational lens effect—an entire circular image. A galaxy, with its light radiating in all directions, was seen as a circular image. A massive object, invisible to the telescope and positioned in line and midway between the earth and the galaxy, had focused the rays from the galaxy to one spot, the earth.
General relativity also predicted that in an accelerating rocket (a pseudogravitational field) time would pass at a faster rate in the nose than in the tail section. Similarly, in a gravitational field time passes at a faster rate in the penthouse of a tall building than in the basement, and accordingly the occupants of the penthouse would age faster than those in the basement. It was not until 1960 that Robert Pound and Glen Rebka of Harvard University performed an experiment confirming the effect of gravitation on the passage of time.5
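The size of the effect Pound and Rebka confirmed follows from the weak-field approximation: two clocks separated by a height h in a field of strength g differ in rate by roughly gh/c². A rough sketch (the 22.5-meter figure is the commonly cited height of the Harvard tower they used; treat it as an assumption):

```python
# Fractional rate difference between two clocks separated by height h
# in a gravitational field (weak-field approximation): g*h / c^2.
g = 9.8          # m/s^2
h = 22.5         # m, commonly cited height of the Harvard tower (assumed)
c = 2.998e8      # m/s

fractional_shift = g * h / c ** 2
print(f"{fractional_shift:.2e}")   # on the order of 2.5e-15
```

A shift of a few parts in 10¹⁵ is why the confirmation had to wait for 1960: only the newly discovered Mössbauer effect made frequency measurements of that delicacy possible.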
Just two years after Einstein published the general theory of relativity, he began to investigate its application to cosmology. The first rigorous solution of Einstein’s equation was to come from the Russian mathematician Alexander Friedmann, in papers published in 1922 and 1924, shortly before his untimely death in 1925—but the solution turned out to be unsettling: it predicted an unstable universe, one that is expanding or contracting, and decidedly not static. Since no physical evidence existed at the time suggesting that the universe might be anything other than stable, Einstein felt it necessary to introduce a cosmological constant, a proverbial fudge factor, into his equation, which would assure a steady-state condition. A curmudgeon might say, “A doctor buries his mistakes, an architect hides his in ivy, and a theoretical physicist introduces constants.” In 1929 Hubble discovered that indeed all intergalactic separations were increasing, and the greater the distance between galaxies, the faster they were receding from each other. In short, the universe was expanding. Einstein realized immediately that he should have trusted the mathematics of the “bare” equation in the first place, and he rescinded the constant.
During the following decades there still remained scientists skeptical of the notion of an expanding universe—the aftermath of a primordial explosion predicted by general relativity. The most vocal opponent of an expanding universe was the British astrophysicist Fred Hoyle, who advocated a steady-state picture, or one of a constantly self-replenishing universe. Ironically, it was Hoyle who derisively characterized the picture of the expanding universe as “the big bang,” introducing an expression that stuck.
Then in 1965 Arno Penzias and Robert Wilson, physicists working at Bell Laboratories in Murray Hill, New Jersey, detected background microwave radiation, an all-pervasive relic of the primordial explosion, poetically called the “whisper of creation.” The discovery of the background radiation made a compelling case for the big bang theory, but introduced a vexing new question: from radiation that was so uniform, or isotropic, how could a “lumpy universe” possibly evolve? Twenty-seven years passed before striations and blotches emerged from the uniformity—the explanation for the eventual lumpiness of the universe. In 1992 the Cosmic Background Explorer (COBE) satellite discovered variations on the order of 10⁻⁵, or one part in 100,000. As for the most immediate of “lumps,” our own solar system—the sun, the neighboring planets, the earth with all its inanimate and animate creatures, the book you are holding in your hands—was once the material of a very large star. In exploding (about 4.7 billion years ago) the star dispersed its material, which, by the mediation of gravity, eventually accreted into the solar system some 4.5 billion years ago. As the recycled material of a star, you and I are all its children!
In the past forty years data have been accumulating, removing major question marks but at the same time spawning new questions. Is this a one-time universe with a beginning and an end? Or is it a repeating universe, continually recycling itself—expanding, then contracting? (In recent times evidence has been accumulating for an ever-expanding universe—but again with a complication. For a while the expansion decelerated; then it began to accelerate. This is where we find ourselves now.) Are the constants of nature truly constants forever, or can they be changing? (This might explain the decelerating expansion of the early universe morphing into an accelerating one.) There are also other questions: Was there anything before the big bang—all evidence of an earlier universe having been destroyed by that cataclysmic event? Can there be other, parallel universes? Does extraterrestrial life exist, and if so, does extraterrestrial intelligent life exist? We have been able to detect other solar systems, but we have not yet detected any sign of extraterrestrial life.
With the launching of the Hubble telescope and other orbiting space laboratories, and indeed with the coming of age of telescopy in spectral regions far from the visible, the observational evidence is for the first time catching up with the speculative models percolating in mathematical physics.6 Matter in the bulk—planets, stars, galaxies, and even interstellar gases—is ultimately composed of atoms, molecules, nuclei, and nucleons (protons and neutrons); nucleons, in turn, are composed of quarks. And it is with spectroscopy that we try to understand their internal mechanisms. Spectroscopy, based on the study of radiation in a variety of regions of the electromagnetic spectrum, represents an immensely powerful probe, but one that is basically indirect. Its operation has been likened to listening to the sounds emitted by pianos dropped from a roof, and—from the sounds they produce as they come crashing to the ground—attempting to understand the structure of a piano.
Einstein’s special and general theories of relativity resolved the mathematical incompatibility between the two great edifices of Newton’s mechanics and Maxwell’s electrodynamics, and in the process delivered a new picture—of a four-dimensional space-time in which the dimension of time and the three dimensions of space are inextricably linked. The special theory represented a paradigm shift in describing nature; the general theory was an entire paradigm upheaval. The description of reality in Einstein’s picture is so different from Newton’s that Einstein was moved to write an apology to his great predecessor: “Newton, forgive me; you found the only way which, in your age, was just about possible for a man of highest thought and creative power.”7 Alexander Pope’s little couplet honoring Newton inspired a sequel epigram by J. C. Squire to herald Einstein’s contribution: “It did not last: the Devil howling, ‘Ho! / Let Einstein be!’ restored the status quo.”
The better part of a century has passed since Einstein formulated his two theories—the special and general theories of relativity—and they have proved their power and resilience repeatedly. They are established laws of nature. To use the misnomer “theories” reflects a poor habit among scientists. Conversely, Wien’s blackbody law and the Rayleigh-Jeans blackbody law are labeled “laws”—both seminal hypotheses, but neither of them correct. Together they provided salutary influences on Planck’s formulation of his blackbody law, which led to the birth of quantum mechanics.
The development of physics in the twentieth century is evocative of the development of art in the Italian Renaissance. Between 1900 and 1932 physics saw a Renaissance, and during a very short period spanning 1924 to 1928, when quantum mechanics was being formulated, a High Renaissance. Jacob Bronowski in The Ascent of Man described the scientific discoveries of this short period as “the greatest collective piece of art of the twentieth century.” In the essay “What is Quantum Mechanics?” the modern physicist Frank Wilczek wrote, “the founders of quantum mechanics were guided by analogy, by aesthetics, and—ultimately—by a complex dialogue with Nature, through experiment.”8 The context of the statement is Paul Dirac’s inspired ideas in formulating the mathematical operators of quantum mechanics, and indeed introducing the axioms underlying the theory. Perhaps more than any other physicist, Dirac believed in the inherent beauty of mathematics as the fertile ground to seek the laws of nature.
Age is, of course, a fever chill
That every physicist must fear.
He’s better dead than living still
When once he’s past his thirtieth year.
—Paul Dirac (1902–1984)
This verse reflects a sentiment known to mathematicians and physicists: that unlike the practitioners of most other sciences, one’s creative peak occurs early. It is echoed by Albert Einstein, who said, “A person who has not made his great contribution to science before the age of thirty will never do so.” Isaac Newton was twenty-three and twenty-four during his annus mirabilis, Albert Einstein twenty-six during his. (Music is another field that manifests this pattern; and another is lyric poetry, as opposed to, say, novel writing.) Although the poetic legacy of Paul Adrien Maurice Dirac will never rival his scientific legacy, he was ideally qualified to make this pronouncement regarding the mathematical sciences. His scientific legacy places him among the finest scientists in history.
Born in 1902 in Bristol, England, Dirac matriculated at twenty-one in the graduate program at Cambridge University with plans to pursue research in the hot new area of relativity. But upon his arrival at the venerable institution, he switched on the advice of his new research advisor, Ralph H. Fowler, to the newly emerging field of quantum theory. The theory had its seeds in Max Planck’s paper, published in 1900, explaining the characteristics of the radiation emitted from a heated and glowing body. It had demonstrated that electromagnetic radiation, including ordinary light, is emitted in packets, called “quanta of radiation” or “photons.” Slow to be recognized for its significance, this idea received legitimacy from Einstein, who invoked it in explaining the photoelectric effect in 1905, a mathematical explanation that earned him the 1921 Nobel Prize. (Planck, meanwhile, had received the Prize in 1918.)
In 1913 Niels Bohr had invoked quantum theory in explaining the workings of the hydrogen atom: electrons orbited the atomic nucleus in concentric circular orbits, each with a precisely prescribed radius. When an electron dropped or decayed from one orbit to a lower one, energy would be emitted from the atom in the form of a photon. Conversely, in absorbing a photon, the atom would be “excited,” with an electron jumping from a lower orbit to a higher one. In 1915 Arnold Sommerfeld refined the Bohr model by introducing a touch of relativity into the mix, allowing elliptical orbits along with the circular ones the model prescribed. In its refined form, the theory worked impressively for the simplest atom, hydrogen—something that neither Newtonian mechanics nor Maxwell’s electrodynamics had been able to do. Planck’s quantization of energy and the Bohr theory were invoked by Einstein in 1917—in this instance to predict the phenomenon of stimulated emission of radiation. With this conjecture Einstein prefigured the idea behind the MASER and LASER9 by an astonishing four decades, the practical devices first coming to fruition in the late 1950s. Bohr received the Nobel Prize in Physics a year after Einstein, in 1922. By the early 1920s, however, it had become frustratingly evident that the prevailing theory had serious shortcomings. Now referred to as the “old quantum theory,” it failed to explain the behavior of atoms beyond hydrogen, and certainly also the behavior of molecules and atomic nuclei. This impasse was to last for the better part of a decade.
In 1924 a young student at the Sorbonne, Louis de Broglie, submitted a doctoral thesis with an intriguing hypothesis—that “matter,” in the corpuscular form that we know it, could be regarded as having a dual particle/wave nature, just as had been established for light. Devoid of any substantive physical or mathematical basis, the thesis could easily have been rejected—tossed onto the pile of discredited pseudoscientific theories regularly springing up in bohemian Paris. What gained de Broglie’s conjecture a second look, however, was the young man’s aristocratic, indeed royal, birth: Louis de Broglie was a prince. And although he was a retiring and shy prince, his brother—also a physicist—was most certainly not. It was the brother, a more forceful and loquacious man, who brought pressure to bear on the head of the physics department, Langevin, to consider Louis’s hypothesis carefully.
Langevin’s dilemma was particularly delicate: the theory could be wrong, and its acceptance by the department would make the Sorbonne faculty the laughing stock of the international physics community. Or the theory could turn out to be correct, and their rejection of it would make them look just as silly. Fortunately for Langevin and the future of the physical sciences, his good friend Albert Einstein was passing through town, and Langevin saw an opportunity to see the dilemma resolved. “Of course, we know what to do with the thesis,” he explained, “but we would appreciate your suggestions.”
Einstein, like any good academic pressed to answer a problem that he was unable to answer immediately, asked for additional time. “Let me sleep on it,” he replied, hoping perhaps for some delayed insight, or that the question would go away. Einstein was neither able to sleep, nor did the problem go away. He told Langevin the following day that this was indeed an “interesting hypothesis” and he would need even more time.
Einstein sent de Broglie’s thesis to his friend Peter Debye, the head of the physics department at the ETH in Zurich. Debye, disinclined to take up the task himself, passed the thesis on to Erwin Schrödinger, a young Austrian-born physicist coming up for tenure. “Take this paper and evaluate it. You can discuss it at the department seminar next month.” As Debye walked away, he paused, displaying clear puzzlement. Then, throwing up his hands, he declared, “I don’t even know what kind of wave theory this is. It has no equations.” After a pause, he strode back: “See if you can write some equations.” Schrödinger was not particularly eager to take on the task, but the request was coming from the head of the department. And he was up for tenure.
Mathematically gifted and extraordinarily colorful, Schrödinger, it is said, retired for two weeks to a mountain chalet in the resort of Arosa, Switzerland. He took along his young mistress, and two weeks later returned with the resolution: there appeared nothing wrong with de Broglie’s conjecture. Schrödinger also brought back with him from the mountaintop a tablet with a pair of “Commandments”—two wave equations to mollify Debye.10 Ironically, the theory was presented at the department seminar without unusual enthusiasm. These equations, however, soon to become known as Schrödinger’s time-dependent and time-independent equations, were emerging as the starting point in solving most problems in quantum theory just as Newton’s second law represented the starting point in solving most problems in classical mechanics. The endorsement by Schrödinger of de Broglie’s hypothesis was transmitted with a certain alacrity to Einstein. And Einstein, in turn, dispatched a message to Langevin at the Sorbonne, “Give the boy his Ph.D. He can’t do much damage with a Ph.D. in physics!”
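In modern notation, the two equations Schrödinger brought down from Arosa are conventionally written as follows (a standard textbook rendering, not the historical form in which they first appeared):

```latex
% Time-dependent Schrödinger equation
i\hbar \frac{\partial \Psi(\mathbf{r},t)}{\partial t} = \hat{H}\,\Psi(\mathbf{r},t)

% Time-independent (stationary-state) Schrödinger equation
\hat{H}\,\psi(\mathbf{r}) = E\,\psi(\mathbf{r}),
\qquad
\hat{H} = -\frac{\hbar^2}{2m}\nabla^2 + V(\mathbf{r})
```

Here Ĥ is the Hamiltonian operator, combining kinetic and potential energy; the time-independent equation is precisely the eigenvalue form that would become the workhorse of quantum theory.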
In 1926 Schrödinger published his theory of wave mechanics. The de Broglie hypothesis was to prove so seminal that shortly after recommending the acceptance of the thesis by the Sorbonne, Einstein proceeded to nominate de Broglie for the Nobel Prize.11 In a parallel development, in 1925 the twenty-three-year-old German physicist Werner Heisenberg, approaching physical reality from a different vantage point, formulated “matrix mechanics,” with significant input from two other youthful physicists, Max Born and Pascual Jordan; two years later he followed up with the “uncertainty principle.” The Heisenberg uncertainty principle is a quantifiable and precise mathematical statement placing a lower limit on the product of the uncertainty in position and the uncertainty in momentum (momentum being mass multiplied by velocity). In short, he demonstrated that it is impossible to know simultaneously and with perfect precision both the position and the momentum of a body. The principle finds alternative expressions, including this form: “Before the detection and measurement of the system (atom, particle, photon, etc.), it does not even exist. Once a measurement has been made and a value obtained, any subsequent measurement will not yield the same value as the earlier measurement.” In the sense that the measuring instrument disturbs what it is measuring, the uncertainty principle has found its way into the vocabularies of other fields, although not quantifiably. It appears as a frequently quoted and intuitively obvious principle among social scientists, especially pollsters. But the sociological interpretation is not at all the same as that in physics: before opinion polls are taken, opinions presumably already exist, and these indeed may change when the poll results are published. Ultimately, however, in physics the uncertainty principle is rooted in the wave-particle duality of matter at the ultra-microscopic scale.
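The lower limit the uncertainty principle places on the product of the two uncertainties is Δx·Δp ≥ ħ/2. A numerical illustration for an electron confined to an atom-sized region (the confinement length is my choice of example, not the author’s):

```python
HBAR = 1.055e-34         # reduced Planck constant, J*s
M_ELECTRON = 9.109e-31   # electron mass, kg

delta_x = 1e-10          # m: confinement to roughly one atomic diameter (assumed)
delta_p_min = HBAR / (2 * delta_x)      # minimum momentum uncertainty, kg*m/s
delta_v_min = delta_p_min / M_ELECTRON  # corresponding velocity uncertainty, m/s

print(f"{delta_v_min:.1e} m/s")  # ~6e5 m/s: confinement forces a huge velocity spread
```

Squeezing the electron into an atomic volume forces a velocity uncertainty of hundreds of kilometers per second—utterly negligible for a baseball, decisive for an electron, which is why the principle matters only at the ultra-microscopic scale.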
By 1926 the drama heightened. There existed two alternative approaches to the physics of phenomena at the atomic scale—each describing correctly those systems more complex than the hydrogen atom. But they appeared to be based on entirely different fundamental principles—their success providing the only common ground. Was it a coincidence that they both worked?
Within a year the twenty-four-year-old Dirac—taciturn and reclusive, and operating independently of the cadre of continental physicists—demonstrated the mathematical equivalence of the two forms of quantum mechanics in developing a fundamental, axiomatic theory. Such syntheses of fields had occurred only a handful of times before in the history of science. In the seventeenth century Newton had synthesized the mechanics of terrestrial and celestial phenomena (1687). In the nineteenth century Maxwell had synthesized electricity, magnetism, and optics (1864); and later in that century Boltzmann and Gibbs independently formulated statistical mechanics, unifying classical mechanics and thermodynamics, the physics of heat. Finally, in the early twentieth century, applying the mathematics of Minkowski, Einstein inextricably linked the three dimensions of space and the one dimension of time with the special theory of relativity. And ten years later, invoking non-Euclidean geometry, he offered an improved theory of gravitation with his general theory of relativity.
Quantum mechanics, sometimes called the greatest scientific theory ever formulated, represented the crowning glory of the physics renaissance. The theory’s simple axioms, its internal consistency, its mathematical elegance, and ultimately its stunning success all combine to render the theory a human creation of pure beauty, an artistic masterpiece. In the formalism of quantum mechanics, physical observables—energy, momentum, position, etc.—all have associated with them a mathematical operator (itself not measurable) defined by a mathematical operation (an eigenvalue equation). The eigenvalue equation yields physical observables in the form of eigenvalues that are unique and measurable. The equation also yields wave functions (eigenfunctions) that are again not measurable; but these eigenfunctions can be used in computing probabilities for various parameters: where the particles might be dwelling, their momenta, and so on. De Broglie’s original hypothesis of particles behaving as physical waves was only partially correct, but its incomparable salutary consequence was no less than the launching of quantum mechanics. The theory abounds with perplexing implications: particles with insufficient energy to hurdle a barrier “tunneling” through the barrier without losing any energy—the quantum counterpart of a camel passing through the eye of a needle—or even a particle being in two different places at the same time. The wave function’s role as a probability amplitude, preempting de Broglie’s proposal of a physical wave, came largely from the work of Born, who had first introduced the expression “quantum mechanics.” Born received the Nobel Prize (1954) for his contributions to the development of the theory. Although one of the pioneers of modern physics, he is less well known in our culture than his granddaughter, the actress and singer Olivia Newton-John.
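The operator-and-eigenvalue machinery sketched above can be made concrete with the simplest textbook case: a particle trapped in a one-dimensional box of width L, whose energy eigenvalues are E_n = n²π²ħ²/(2mL²). A minimal illustration (the nanometer-sized box is an assumed example, not from the text):

```python
import math

HBAR = 1.055e-34        # reduced Planck constant, J*s
M_ELECTRON = 9.109e-31  # electron mass, kg

def box_energy(n, L, m=M_ELECTRON):
    """n-th energy eigenvalue of a particle in an infinite square well."""
    return (n ** 2) * (math.pi ** 2) * HBAR ** 2 / (2 * m * L ** 2)

L = 1e-9  # a 1-nanometer box (assumed for illustration)
levels = [box_energy(n, L) for n in (1, 2, 3)]

# The eigenvalues grow as n^2: each level is n^2 times the ground state.
print([round(E / levels[0], 1) for E in levels])
```

The discreteness on display—only certain energies are allowed, in the ratios 1 : 4 : 9—is the hallmark of every eigenvalue problem in the theory; the corresponding eigenfunctions, squared, give the probabilities of finding the particle at each point in the box.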
The probabilistic nature of the theory, as opposed to the deterministic cause-and-effect scheme characterizing classical physics, never sat well with the old guard in the field. Indeed, it was Einstein who emerged as the most vociferous opponent of the new theory based on the uncertainty principle, uttering the famous protest, “God does not play dice with the universe.” That triggered a riposte from Bohr, Einstein’s close friend and contemporary, but in this instance an ardent patron of the young quantum mechanics: “Albert, stop telling God what he does or does not do!” After three-quarters of a century of experience with the theory, there is general consensus among physicists that a superior theory simply does not exist.
In 1928 Dirac formulated his magnum opus, the relativistic wave equation. The solutions of this equation predicted the existence of antimatter as opposed to matter—of positively charged positrons, or antielectrons, as the antimatter conjugates of ordinary negatively charged electrons; of negatively charged antiprotons as the antimatter conjugates of positively charged protons; and so on. Just four years after the theory was published, Carl David Anderson, a young physicist at Caltech, detected the positron. When pure energy (in the form of gamma radiation raining down on the earth as cosmic radiation) passed through a lead plate, it suddenly “materialized,” transforming into an electron-positron pair of particles. Conversely, an electron and a positron—in colliding—were seen to destroy each other in “pair annihilation,” with the attendant release of a pair of gamma-ray photons (traveling in opposite directions) of total energy E = 2mc². For his discovery, a confirmation of Dirac’s prediction, Anderson received the 1938 Nobel Prize in Physics. It took until the mid-1950s, however, when particle accelerators of sufficient energy could be built, before antiprotons and antineutrons could also be produced.
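The energy bookkeeping of pair annihilation follows from E = 2mc²: each of the two gamma-ray photons carries away one electron rest energy. A quick check with standard constants (illustrative):

```python
M_ELECTRON = 9.109e-31  # electron mass, kg
C = 2.998e8             # speed of light, m/s
EV = 1.602e-19          # joules per electron-volt

total_energy_J = 2 * M_ELECTRON * C ** 2         # E = 2mc^2 for the pair
per_photon_keV = total_energy_J / 2 / EV / 1e3   # each photon carries mc^2

print(f"{per_photon_keV:.0f} keV per photon")    # ~511 keV
```

The 511-keV gamma line is the universal signature of positron annihilation—the same signal exploited today in PET (positron emission tomography) scanners.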
Because of the theory’s mathematical rigor, the undergraduate physics student may not even encounter Dirac’s name. The graduate student, however, cannot avoid it. The idea that someone could produce work of such seminal significance and not be recognized except by professionals in the field presented no problem to Dirac’s enigmatic personality. He was so retiring and shy that he contemplated turning down the Nobel Prize; only after being advised that he might become an even greater celebrity by doing so did he relent and accept. Other great physicists have been accorded units named after them: a newton of force, a joule of energy, a pascal of pressure, an ampere of current. Dirac, extraordinarily parsimonious with words, inspired the whimsical unit the dirac: one word per year. Of course, there are multiples of the dirac—kilodiracs, megadiracs, and so on—but such multiples were rarely uttered by Dirac himself. During the nearly four decades that he spent at Cambridge University, he held the Lucasian Professorship of Mathematics, the same chair Isaac Newton had held, and which Stephen Hawking holds today.
A conference held in 1927 at the Solvay Institute in Belgium had as its mission the examination of the philosophy underlying this new theory—arcane, abstract, counterintuitive, and yet so very successful. Toward the end of the deliberations the participants gathered for a “family portrait” (Figure 12.1). Included in the photograph is the handful of very young, very talented theoretical physicists who created quantum mechanics—de Broglie, Schrödinger, Heisenberg, Dirac. Present in the photograph also are Albert Einstein, the head of the counter-revolution, and Niels Bohr, serving as “Father Confessor”12 for the young physicists. As a group picture of the pioneers of modern physics—natural philosophy—the photograph represents the twentieth century’s answer to Raphael’s School of Athens. The twenty-nine individuals in the photograph received a total of twenty Nobel Prizes, Madame Curie accounting for two (one in physics and another in chemistry).
Figure 12.1. Participants in the 1927 Solvay Conference. Those seated in the front row represent mostly the old guard. Starting from the second person at the left are Max Planck, Marie Curie, Hendrik Lorentz, Albert Einstein, and Paul Langevin. Second row from left, the first person is Pieter Debye; the fifth is P.A.M. Dirac; the seventh, Louis de Broglie; next to de Broglie is Max Born; and finally, Niels Bohr. Third row, sixth from left is Erwin Schrödinger; eighth, Wolfgang Pauli; and ninth, Werner Heisenberg. (1927 Photographie Benjamin Couprie, École Internationale de Physique Solvay; courtesy AIP Emilio Segrè Archives)
At the threshold of the twenty-first century the refinement of the laws of physics continues. The last word on the subject is still not written. Physicists see the universe as the product of four fundamental forces: gravitational, electromagnetic, and two types of nuclear forces—strong and weak. It is believed that in the very beginning (before 10⁻⁴³ seconds, known as “Planck time”) there was only one primordial force, from which the four separate forces sprang very quickly. In the big bang scenario an intractable course sets in, the universe expanding from a point of singularity, attended by an inexorable decrease in temperature. The lyrical expression “whisper of creation” describes the leftover radiation from the big bang, now having cooled to about 3 kelvin (−270° C or −454° F). Einstein spent the last three decades of his life, including twenty-five years at the Institute for Advanced Study in Princeton, trying to formulate a unified field theory synthesizing the four forces. When he died in 1955 he was nowhere near realizing the dream. That oft-quoted line attributed to Einstein—“The Lord God is subtle, but malicious he is not!”—delivered in a lecture at Princeton University and alluded to earlier, has an appropriate sequel. Einstein, frustrated with results he was obtaining from some of his equations, commented in a letter to his friend Valentine Bargmann, “I have second thoughts. Maybe God is malicious!” His meaning here is that often we are deluded into believing we have finally understood something of fundamental significance, when in reality we are far from understanding anything.13
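The temperatures quoted for the leftover radiation are a simple unit conversion, which can be checked in a few lines (the function names here are mine, chosen for illustration):

```python
# Convert the big-bang relic radiation temperature from kelvin
# to the Celsius and Fahrenheit figures quoted in the text.
def kelvin_to_celsius(k):
    return k - 273.15

def kelvin_to_fahrenheit(k):
    return (k - 273.15) * 9 / 5 + 32

t_relic = 3.0  # approximate temperature of the leftover radiation, in kelvin
print(round(kelvin_to_celsius(t_relic)))     # about -270 degrees Celsius
print(round(kelvin_to_fahrenheit(t_relic)))  # about -454 degrees Fahrenheit
```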
Two of the four fundamental forces of nature—the electromagnetic and the weak force—were unified as the “electroweak force” in the theory formulated by Sheldon Glashow, Abdus Salam, and Steven Weinberg, who shared the 1979 Nobel Prize in Physics; their work became a cornerstone of the standard model. At present, three of the four forces have been unified quite effectively, the rogue force avoiding unification with the other three being gravitation. A monumental obstacle still impedes further progress: quantum mechanics, magnificently successful in dealing with atomic phenomena, is incompatible with general relativity, which is spectacular in explaining phenomena on the cosmic scale. It is hoped that superstring theory (or one of its variations) may yet rise to bring together harmoniously the two great theories in a holy grail “theory of everything” (TOE). Then gravitation will take its place with the other three forces—in a re-creation of the primordial force from which all four separated. We have reached this point after only five hundred years of scientific inquiry, inquiry that began with Leonardo watching pendulums oscillate and objects fall from a tower, and, a hundred years later, with Galileo repeating the same experiments and publishing his results.
A general principle for dynamic systems exists that holds for art as well as science. In the present context one might say, “Today’s physics is tomorrow’s history of physics.” Whatever form our present description of reality takes, there will most likely be the need to refine that description again and again. Einstein’s relativity represented a refinement of Newton’s mechanics, but there is no reason to think that even Einstein’s is the final version. Certainly a theory bringing gravity into the fold will have to accommodate quantum mechanics, and Einstein’s theory will have to be modified. I would like to think that if Leonardo were living now, the issues of fundamental science would still preoccupy him.
Founded in 1930 by the Bamberger family (of department store fame), the Institute for Advanced Study in Princeton, New Jersey, has been one of the great think tanks of America. Abraham Flexner, who had earlier been a driving force in elevating the Johns Hopkins University Medical Center into a preeminent institution, was visiting the University of Oxford in the late 1920s. Oxford is a federation of thirty-odd colleges, and Flexner had come upon one college—All Souls (founded in the fifteenth century)—that had scholars but no students. He was impressed by the notion that cutting-edge scholarship could be carried out by scholars who were unencumbered by the need to teach classes, hold office hours, and otherwise perform routine academic tasks. Upon his return to the United States Flexner convinced the Bambergers of the benefits of establishing a similar institution in America. The Bambergers became enthusiastic, granting Flexner five million dollars for the enterprise and purchasing seven hundred acres of choice Princeton landscape (Figure 12.2).14
Figure 12.2. Einstein Drive, the Institute for Advanced Study, Princeton, seen from the intersection of Einstein Drive and Maxwell Lane (lithograph by the author, 1987)
Flexner was appointed the first director of the Institute, and one of his initial projects was to try to recruit Albert Einstein, already recognized as the most famous living scientist. Flexner immediately journeyed back to Oxford, where he found Einstein “hanging out between jobs”15 and offered him a position at the new institution. Einstein responded that he “had offers from Princeton University, Oxford University, and Caltech to join their faculties, but [was] still undecided.” He added that he “was leaning toward the offer from Princeton University.” (Princeton was the first academic institution to accept relativity.) “But then,” he asked, “at your place [also to be located in the town of Princeton] I would not have to teach?” Assured that he would have no classes (Einstein was a notoriously poor lecturer), he accepted the job. Then the question of salary became an issue. “How much do you think you would require for a salary?” Flexner asked. Einstein, with his considerable mathematical prowess, answered, “Three thousand dollars a year would be just right.” “That cannot be,” countered Flexner, “we are paying everyone else $16,000. We should pay you at least as much.” Einstein protested, “$3,000 would be satisfactory.” Flexner reluctantly agreed to pay Einstein only $3,000 a year. But fortunately for the Einstein family, Mrs. Einstein renegotiated, and Einstein started receiving the standard $16,000.
Einstein, from the very beginning of his tenure at the Institute, served as a magnet to attract the finest theoretical physicists and mathematicians of his time. Over their careers many of the most original thinkers in the field—including recipients of the Nobel Prize in Physics: Bohr (father Niels and later his son, Aage), Schrödinger, Dirac, Pauli, Salam, Gell-Mann, Yang, Lee, and numerous others—have spent time at the Institute. Three of the twentieth century’s greatest mathematicians, János “Johnny” von Neumann, Kurt Gödel, and Andrew Wiles (who in the 1990s solved one of the longest-standing problems in mathematics, Fermat’s last theorem), spent considerable time at the Institute. Now into its eighth decade, it continues to serve as an ideal setting for members to interact with other gifted individuals in the field. String theorist Edward Witten, who has been called the “Michael Jordan of physics,” is now a professor at the Institute; recently retired is another “physicist’s physicist,” Freeman Dyson.
In pursuit of my own modest research in theoretical physics, I was at the Institute during two stints, 1974–75 and 1982–83. While there, I got to know Helen Dukas, who had served as Einstein’s secretary for many years. For the opportunity to revel in “old Einstein stories,” Ms. Dukas proved a treasure trove. She told me anecdotes that I had not heard before, and she confirmed or denied the ones that I had heard elsewhere. She had earlier collaborated with Einstein’s one-time assistant, Banesh Hoffmann, in writing an unusually good book about the great scientist.16 The opening quote of the book reflects Einstein’s unassuming and modest character. But it is also a profound pronouncement of resignation that other modern scientists would view with empathy: “One thing I have learned in a long life: that all our science, measured against reality, is primitive and childlike—and yet it is the most precious thing we have.”
Three hundred years ago the imperious Newton, in an unusually self-effacing mood, observed, “I do not know what I may appear to the world; but to myself, I seem to have been only like a boy playing on the sea-shore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me.” In another rare modest moment, he commented, “If I have seen further it is by standing on the shoulders of giants.”17 The idea of science as a cumulative process is one that every scientist would embrace. And it would be a rare scientist who would not rush to concede his own relative insignificance on the scale of the cosmos.
Many different models have been proposed as paradigms to understand nature. Each may be useful in explaining one physical phenomenon or another, but no model has proved general enough to explain all physical phenomena. A single be-all and end-all model remains the quest. But even if such a model or theory is one day formulated, it will still be just that: a theory to explain nature. It will not be nature itself. So, in our quest we will continue to approach the unfathomably large scales of time and space and the infinitesimally small subatomic world—both beyond the capabilities of our physical senses—in terms that are close to our experience. “It seems that the human mind has first to construct forms independently before we can find them in things,” wrote Einstein. This sentiment was expressed especially eloquently by the poet William Blake, a humanist who ironically neither understood nor showed any fondness for science. Yet his lines from “Auguries of Innocence” could serve as a timeless credo for the scientist, and so too for the artist:
To see a world in a grain of sand,
And a heaven in a wild flower,
Hold infinity in the palm of your hand,
And eternity in an hour.