THE TWENTIETH-CENTURY REVOLUTIONS IN PHYSICS

THE THEORY OF RELATIVITY

The idea of the “relativity of motion” long predates Einstein—in fact, it goes back to Newtonian physics. It is the idea that velocity is not the property of one thing, but a relationship between two things. That is, it is meaningless to ask what the velocity of something is; one must ask what its velocity is with respect to something else. (A useful analogy is that one cannot meaningfully ask about the angle one line makes, but only about the angle between two lines.) In other words, in Newtonian physics velocity is a “relative” concept. This may seem to contradict what was said earlier about Newton’s laws of motion allowing one to determine, by means of an analysis of forces, whether a ball is going around a man or the man around the ball. However, there is no real contradiction: the analysis of forces does not reveal which objects are moving, but rather which objects are accelerating; and acceleration is an absolute concept in Newtonian physics. (It makes sense to talk about the acceleration of a single object, since acceleration is the velocity of an object at one instant with respect to itself at another instant.)

If one is on a train and it suddenly starts to accelerate, one can tell, even if one’s eyes are closed, because one feels a force or jolt. However, if the train is not accelerating, one feels no such force, and one cannot tell whether the train is standing still or gliding perfectly smoothly with constant velocity. In fact, it is meaningless to ask whether the train is “really” moving in and of itself; one can only ask whether it is moving with respect to the platform or some other object. In the same way, if there are many objects moving uniformly with respect to one another (none of them accelerating), it makes no sense in Newtonian physics to ask which ones are really moving. What one does in practice is to choose an object on the basis of convenience and measure all motion with respect to it. This is what we meant before by picking a “frame of reference.” If one is sitting in the train, it is convenient to measure motion with respect to the train; if one is standing on the platform, it is convenient to measure motion with respect to the platform. However, the “principle of relativity” says that fundamentally it does not matter what frame of reference is chosen.

What does it mean to say “it does not matter”? It certainly matters in terms of how things appear to move. To the man on the train the platform is moving, while to the man on the platform the train is moving. To put it more technically, the coordinates and momenta of objects will be different in different frames of reference. And in Newtonian physics there is a precise rule, called the “Galilean transformation law,” that tells one exactly how they are different. (This rule corresponds to our everyday experiences and intuition. For example, if a car goes by you at 40 miles per hour, and another goes by you in the same direction at 75 miles per hour, then to someone in the first car, the second car will appear to be going 35 miles per hour: 75 minus 40. That is what the Galilean transformation law says, and it seems to be just common sense.)
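For readers who like to see rules stated exactly, here is the Galilean rule as a minimal sketch in Python (the function name and the numbers are ours, chosen purely for illustration):

```python
def galilean_relative_velocity(v_object: float, v_frame: float) -> float:
    """Galilean rule: the velocity of an object, as seen from a moving
    frame, is simply the difference of the two velocities."""
    return v_object - v_frame

# The two-car example from the text: a 75 mph car seen from a 40 mph car.
print(galilean_relative_velocity(75.0, 40.0))  # 35.0 mph
```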

Instead, the statement “it does not matter what frame of reference is chosen” means that whatever frame one uses to measure coordinates and momenta, the coordinates and momenta will obey the same equations. In other words, the motion of particular objects will look different in different frames, but the laws of physics will have the same mathematical form in any frame. That is the key point, and the real essence, of the principle of relativity.

Maxwell’s theory of electromagnetism seemed to violate the principle of relativity. It looked as though Maxwell’s equations were only true if coordinates and momenta were measured in one special frame of reference. And that would appear to give a natural definition of “absolute velocity,” namely velocity as measured in that special frame. It seemed, therefore, that there had to be something wrong with either the hallowed principle of relativity or Maxwell’s theory of electromagnetism.

This is where Einstein entered the picture in 1905. His aims were actually very conservative: he did not want to abandon either the principle of relativity or Maxwell’s theory. And this forced him to take a bold step. He suggested that the Galilean transformation law—the one that seems so commonsensical—is wrong, and that the correct transformation law is the one that had been formulated by a Dutch physicist named Hendrik Lorentz.

If coordinates and momenta in different frames of reference are related by the Lorentz transformation law, then it turns out that Maxwell’s equations work in any frame of reference. Thus, the principle of relativity and Maxwell’s theory can be reconciled. However, there is a major catch: Newton’s laws no longer work in every frame! In other words, Einstein succeeded in saving Maxwell, but at the expense of Newton. Consequently, Newton’s laws had to be changed.
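One consequence of the Lorentz transformation law is a corrected rule for combining velocities. A sketch in Python (the constants and names are ours, and the mile-per-hour value of c is approximate) shows both why everyday experience never notices the correction and why light comes out at the same speed in every frame:

```python
C_MPH = 670_616_629.0  # speed of light in miles per hour (approximate)

def lorentz_relative_velocity(u: float, v: float, c: float = C_MPH) -> float:
    """Relativistic composition: velocity u as seen from a frame moving at v.
    Reduces to the Galilean answer (u - v) when u and v are tiny next to c."""
    return (u - v) / (1 - u * v / c**2)

print(lorentz_relative_velocity(75.0, 40.0))   # 35.00000000000023 mph: the
                                               # Galilean 35, to a part in 10^14
print(lorentz_relative_velocity(C_MPH, 40.0))  # c again: light moves at the
                                               # same speed in every frame
```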


EINSTEIN, ALBERT (1879–1955) was born in Ulm, Germany. Contrary to romantic myth, he excelled in mathematics and physics in school, although, due to a lack of interest, he did less well in subjects that required more memorization. He entered the Swiss Federal Polytechnic School in 1896 and received his diploma in 1900. Unable to land an academic position, he began work at the Swiss Patent Office. Einstein had a deep understanding of the physics of his day and the major theoretical issues confronting it, which he had been pondering for years. This bore fruit in Einstein’s “miracle year,” 1905, when he published three epoch-making papers: his paper proposing the theory of special relativity; his paper on the effect called “Brownian motion” (which showed that atoms are real—something still not universally accepted at that time); and his paper on the “photoelectric effect,” wherein he helped lay the foundations of quantum theory by demonstrating that light acts as a particle rather than as a wave in certain situations. From 1909 to 1914 he held professorships in Zurich and Prague. In 1914 he became a professor at the University of Berlin, where he remained until Hitler came to power, at which time he renounced his German citizenship and accepted a position at the Institute for Advanced Study in Princeton. His theory of gravity—the theory of general relativity—was published in 1916. It is one of the great monuments of the human intellect. Einstein knew that he “stood on the shoulders of giants” (as Newton had said of himself). In his study he kept portraits of three men: Newton, Faraday, and Maxwell.


What was it about Newton’s laws that had to be changed? It was not his three famous laws of motion (including F = ma), nor the laws of conservation of energy and momentum, nor the “principle of least action.” All these things remain true in Einsteinian physics. Really, only one thing had to change, and that was the geometry of space and time. The old Galilean transformation law is based on the ideas that three-dimensional space is Euclidean in its properties, and that time is altogether distinct from space. But those ideas turned out to be wrong. The Lorentz transformation laws said—though no one had grasped their real meaning until Einstein—that space and time together make up a four-dimensional manifold that has a different kind of geometry. In this new geometry, even the Pythagorean theorem has to be modified.
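To see what “a different kind of geometry” means, compare the Euclidean rule for distance with the space-time “interval” that, in Einstein’s theory, all frames of reference agree on (written here in one common sign convention, which is our choice):

```latex
% Pythagorean (Euclidean) distance between nearby points of space:
(\Delta \ell)^2 = (\Delta x)^2 + (\Delta y)^2 + (\Delta z)^2
% The space-time interval, the quantity every frame agrees on;
% note the minus signs, which modify the Pythagorean rule:
(\Delta s)^2 = (c\,\Delta t)^2 - (\Delta x)^2 - (\Delta y)^2 - (\Delta z)^2
```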

We can get some idea of how time is related to space in Einstein’s theory by first considering how the three dimensions of space are related to each other. I can choose my three basic space directions (or “axes”) to be “forward,” “rightward,” and “up.” (That is to choose a frame of reference in space. It allows me to measure something’s position, by saying that it is, for instance, twenty feet in front of me, thirty to the right, and ten above my head.) However, if I turn my body a little to the left, so that I am facing in a different direction than before, the direction I used to call forward, I would now have to describe as partly forward and partly rightward. In an analogous way, in Einstein’s theory the “time direction” of one frame of reference becomes, in another frame of reference, partly the time direction and partly a space direction. So time and space must be thought of as four basic directions in a single four-dimensional “space-time.” In this profound sense Einstein’s theory unified space and time.
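The analogy is in fact mathematically exact. A rotation by an angle θ mixes two space axes in much the same way that a “boost” to a frame moving at speed v mixes the time axis with a space axis (standard textbook forms, quoted here as a sketch):

```latex
% A rotation by angle \theta mixes the x and y axes:
x' = x\cos\theta + y\sin\theta \qquad y' = -x\sin\theta + y\cos\theta
% A boost to a frame moving at speed v along x mixes t and x,
% with \gamma = 1/\sqrt{1 - v^2/c^2}:
t' = \gamma\,(t - vx/c^2) \qquad x' = \gamma\,(x - vt)
```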

The theory of relativity led to other unifications as well. “Energy” and “mass” turned out to be, in a sense, the same thing (which is the meaning of the famous formula E = mc²). And the three-component electric field and three-component magnetic field of Maxwell’s theory turned out to be facets of a single, six-component “electromagnetic” field, rather than distinct entities. In fact, what is purely an electric field in one frame of reference is partly electric and partly magnetic in another frame.
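Concretely, under a boost at speed v along the x direction the field components mix as follows (a standard form, in SI units; the notation is ours). If the magnetic components all vanish in the first frame, they are plainly nonzero in the second:

```latex
% Boost at speed v along x, with \gamma = 1/\sqrt{1 - v^2/c^2}:
E'_x = E_x \qquad B'_x = B_x
E'_y = \gamma\,(E_y - v B_z) \qquad E'_z = \gamma\,(E_z + v B_y)
B'_y = \gamma \left(B_y + \tfrac{v}{c^2} E_z\right) \qquad
B'_z = \gamma \left(B_z - \tfrac{v}{c^2} E_y\right)
```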

As we have already noted, Newton’s theory of gravity also had to be modified. This too involved a new assumption about the structure of space and time, namely that the fabric of four-dimensional space-time is curved. It is this curving or warping that is responsible for all gravitational effects in Einstein’s theory of “general relativity.” This makes gravity more like electromagnetism, in that gravity is no longer understood to be based on “action at a distance,” as Newton said, but on fields. (The “gravitational field” at a given location is the amount that space-time is warped there.) These gravitational fields, like Maxwell’s electromagnetic fields, have lives of their own and can have waves propagating in them. The fact that all forces are now understood to come from fields creates the possibility of a “unified field theory” of all forces. Einstein sought such a theory in his later years without success. However, great progress has been made on this problem in recent decades.

HOW “REVOLUTIONARY” WAS RELATIVITY?

In what sense were Einstein’s theories of special and general relativity “revolutionary”? They certainly led to conclusions that were profoundly counterintuitive and very surprising. For instance, they showed that it is not absolutely meaningful to say that two events happen at the “same time”: it depends on the frame of reference. However, they did not completely overthrow the physics that went before; far from it. Indeed, as we saw, Einstein was led to his theory of special relativity precisely by his effort to maintain Maxwell’s theory of electromagnetism and the old principle of relativity at the same time. Not surprisingly, therefore, Maxwell’s theory was left completely untouched. And the principle of relativity was essentially untouched as well. For example, it is true in Einsteinian physics, as in Newtonian, that velocity is a relative concept while acceleration is an absolute one. Much else in Newtonian physics was also preserved, including Newton’s three laws of motion and such basic concepts as force, velocity, acceleration, momentum, mass, and energy (although some of these quantities had to be reinterpreted as vectors in four-dimensional space-time rather than in three-dimensional space).

The word revolution is misleading when applied to scientific theories. In a revolution, the old order is swept away. However, in most of the so-called revolutions in physics, the old ideas are not simply thrown overboard, and there is not a radical rupture with the past. A better word than revolutions to describe these dramatic advances in science would be breakthroughs. In great theoretical breakthroughs, new insights are achieved that are profound, far-reaching, and take scientific understanding to a new level, a higher viewpoint. Nevertheless, many—indeed most—of the old insights retain their validity, although in some cases they are modified or qualified by new insights. Probably the only true revolution in the history of physics was the first one, the Scientific Revolution of the seventeenth century. The physics that preceded that revolution, namely the physics of Aristotle, was largely set aside and replaced by something thoroughly different.

We see this in the fact that physics courses in high school, college, and graduate school do not begin with a study of Aristotelian physics. The details of Aristotelian physics are of interest only to students of history, not to modern scientists as scientists. It is not necessary to know anything at all about Aristotelian physics to do science nowadays. By contrast, before one learns the theory of relativity (or quantum theory), it is still necessary to spend several years studying the physics of the seventeenth through nineteenth centuries. That physics is still profoundly relevant. In fact, many branches of modern physics and engineering still use only pre-relativity and pre-quantum concepts.

Moreover, and this is a crucial point, the Newtonian theory of mechanics and gravity remains the one and only correct “limit” of Einsteinian physics when speeds are small compared to the speed of light, and when gravitational fields are weak. That is, the smaller speeds are and the weaker gravitational fields are, the more accurately do Einstein’s answers agree with Newton’s answers. The idea of an older theory being the correct “limit” of the theory that replaces it is extremely important. In such a case, the older theory is not strictly speaking right; however, it is not simply wrong either. It would be better to say that it is “right, up to a point.”
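A small numerical sketch makes the same point (Python, with an illustrative mass and speeds of our choosing): the ratio of Newton’s kinetic energy to Einstein’s approaches 1 as the speed shrinks.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def ke_newton(m: float, v: float) -> float:
    """Newtonian kinetic energy, (1/2) m v^2."""
    return 0.5 * m * v**2

def ke_einstein(m: float, v: float) -> float:
    """Relativistic kinetic energy, (gamma - 1) m c^2."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) * m * C**2

# The smaller the speed, the more closely Newton's answer tracks Einstein's.
for v in (3.0e5, 3.0e6, 3.0e7):  # 0.1%, 1%, and 10% of the speed of light
    ratio = ke_newton(1.0, v) / ke_einstein(1.0, v)
    print(f"v = {v:.0e} m/s: Newton/Einstein = {ratio:.6f}")
```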

An example will help make this clearer. Any map of Manhattan, if it’s printed on a flat piece of paper, must be wrong, strictly speaking, because the surface of the earth is curved. However, Manhattan is small enough that the earth’s sphericity has negligible effects. (It affects the angles on a map of Manhattan by less than one ten-thousandth of a degree.) Indeed, it would make no sense to worry about those effects, because they are dwarfed by other ones, such as the hilliness of Manhattan Island. Therefore, ignoring the earth’s sphericity is a reasonable approximation to make if one is talking about sufficiently small areas. And statements based on it are not simply falsehoods; rather, they contain real information and give correct insights into geographical relationships. This is a critical point: an incomplete and inexact description of a situation may be sufficient to convey a completely true insight into that situation. Otherwise, we could never learn anything.
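That figure is easy to check roughly. On a sphere, the angles of a triangle exceed their flat-map values by the triangle’s area divided by the square of the sphere’s radius; taking a rough value for Manhattan’s area (an assumption of ours), the excess is indeed under one ten-thousandth of a degree:

```python
import math

EARTH_RADIUS_KM = 6371.0    # mean radius of the earth
MANHATTAN_AREA_KM2 = 59.0   # rough figure, assumed for illustration

# "Spherical excess" (in radians) of a triangle covering that area:
excess_rad = MANHATTAN_AREA_KM2 / EARTH_RADIUS_KM**2
print(math.degrees(excess_rad))  # about 8e-5 degrees: under one ten-thousandth
```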

Let us take another example from a “revolution” in physics that hasn’t happened yet but is widely anticipated. Both Newtonian and Einsteinian physics are based on the idea that space (or space-time) is a continuous manifold of “points” that lie at definite “distances” from each other. However, quantum theory makes it appear extremely doubtful that one can apply such concepts to the very small. Many physicists expect that our intuitive concepts of space and time will prove to be altogether inadequate for describing anything smaller than a fundamental scale called the “Planck length” (about 10⁻³³ cm), and that radically new concepts will have to be used. That is, a new theory will be needed. And it is thought that when this new theory is found, it will show that our intuitive concepts of “space,” “time,” “point,” and “distance” are never applicable to the physical universe except in an approximate sense. That approximation is surely extremely good for distances much greater than the Planck length; nevertheless, it is always only an approximation. Supposing all these expectations someday prove to be correct, would it mean that all statements employing the concept of distance (e.g., “the book you are holding in your hands is 8 inches by 5.2 inches by 0.3 inches,” or “the distance from my home to my office is 1.75 miles”) are false? Obviously not. The concept of distance, while strictly speaking only approximately valid, is such a good approximation in such situations that to cavil at its use would be pedantic and unreasonable. In the same way, one is quite justified in continuing to use Newtonian physics in many situations (including all those that arise in everyday life), even though we know it does not give us an exact and complete account of what is going on.
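The quoted value of the Planck length follows from three constants of nature; a short computation (Python, with standard values of the constants) confirms it:

```python
import math

HBAR = 1.054_571_817e-34  # reduced Planck constant, J*s
G = 6.674_30e-11          # Newton's gravitational constant, m^3/(kg*s^2)
C = 2.997_924_58e8        # speed of light, m/s

# The Planck length is the unique length one can build from these constants.
planck_length_m = math.sqrt(HBAR * G / C**3)
print(planck_length_m * 100)  # in centimeters: about 1.6e-33 cm
```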

One final point requires emphasis: Einstein’s theory of relativity has nothing whatsoever to do with the foolish idea that “everything is relative.” In both Newtonian physics and Einsteinian physics (as in life generally) some things are relative and some things are absolute. For instance, in both Einsteinian physics and Newtonian physics, velocity is relative but acceleration is absolute. In Newtonian physics both temporal distance and spatial distance are absolute, whereas in Einsteinian physics they are both relative, but something called “space-time distance” is absolute. And in Newtonian physics the speed of light in a vacuum is relative, whereas in Einsteinian physics it is absolute (it is the same in every reference frame). The term relativity has caused endless mischief. Things are not more relative in relativity theory than in Newtonian physics; rather, different things are relative and different things are absolute.
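A small numerical check makes the contrast vivid (Python, in units where the speed of light is 1; the event coordinates are illustrative). Temporal and spatial separations change from frame to frame, but the space-time distance does not:

```python
import math

def boost(t, x, v):
    """Lorentz boost along x, in units where the speed of light is 1."""
    g = 1.0 / math.sqrt(1.0 - v**2)
    return g * (t - v * x), g * (x - v * t)

t1, x1 = 3.0, 2.0            # separation of two events, in one frame
t2, x2 = boost(t1, x1, 0.6)  # the same two events, from a frame at 0.6c

print(t1, t2)                        # 3.0 vs ~2.25: temporal distance is relative
print(x1, x2)                        # 2.0 vs ~0.25: spatial distance is relative
print(t1**2 - x1**2, t2**2 - x2**2)  # both ~5.0: space-time distance is absolute
```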

THE QUANTUM REVOLUTION

Quantum theory was not the brainchild of one man, as was relativity. Many great scientists contributed to its development from 1900 to the mid-1920s, when its basic structure was complete. The major founders of quantum theory include Max Planck (1858–1947), Einstein (1879–1955), Louis de Broglie (1892–1987), Niels Bohr (1885–1962), Arnold Sommerfeld (1868–1951), Max Born (1882–1970), Werner Heisenberg (1901–76), Erwin Schrödinger (1887–1961), Wolfgang Pauli (1900–58), and Paul Dirac (1902–84).

Quantum theory has a far better claim to be considered “revolutionary” than does relativity theory. Whereas relativity changed our understanding of space and time, quantum theory fundamentally transformed the basic conceptual framework of all of physics. Physical theories that employ the pre-quantum conceptual framework (whether Newtonian or relativistic) are called “classical” theories.

Even so, it would be misleading to say that quantum theory simply “overthrew” classical physics. Quantum physics is built upon the foundations of classical physics in a profound way. In fact, there is a precise and general procedure for “quantizing” any classical theory, that is, for constructing a quantum version of it. And in the appropriate limit (roughly speaking, when systems are large) the quantum version gives the same answers as the classical version. Moreover, it is not possible to relate quantum theoretical predictions to actual measurements without making use of classical concepts.
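The procedure alluded to is “canonical quantization.” Stated schematically (in a standard form; the details vary from system to system), the classical coordinates and momenta become operators whose failure to commute is set by Planck’s constant, which is also why classical behavior re-emerges when ℏ is negligible on the scales of the problem:

```latex
% Classical Poisson bracket  -->  quantum commutator:
\{\,q, p\,\} = 1 \quad\longrightarrow\quad
[\hat{q}, \hat{p}] = \hat{q}\hat{p} - \hat{p}\hat{q} = i\hbar
```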

An important fact about quantum theory is that it is based on probabilities in a fundamental way. Probabilities are often useful in classical physics, too, but there they are an accommodation to practical limitations. In classical physics, if one had complete information about a system at one time, one could (in principle) know everything about its past and future development exactly, as Laplace noted. There would be no need of probabilities. However, in quantum theory complete information about a system does not uniquely determine its future behavior—only the probabilities of various outcomes. This famous nondeterminism (or “indeterminacy”) of quantum theory is obviously of great importance philosophically. Some have argued that it is relevant in some way to the freedom of the human will, an argument that, not surprisingly, is highly controversial.
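A toy simulation (Python; the amplitudes are illustrative, and this mimics only the statistics, not the underlying physics) shows what “complete information determines only probabilities” means in practice:

```python
import random

# "Complete information" about a two-state quantum system is a pair of
# amplitudes; the Born rule says |amplitude|^2 gives each outcome's
# probability, and nothing further determines any individual outcome.
amp_up, amp_down = 0.6, 0.8   # probabilities 0.36 and 0.64 (they sum to 1)
p_up = amp_up ** 2

runs = ["up" if random.random() < p_up else "down" for _ in range(100_000)]
print(runs.count("up") / len(runs))  # close to 0.36; any single outcome,
                                     # however, is unpredictable in principle
```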


HEISENBERG, WERNER (1901–76) was born in Würzburg, Germany. After obtaining his doctorate in physics from the University of Munich in 1923, he worked with Max Born at the University of Göttingen and Niels Bohr at the University of Copenhagen in the rapidly developing area of quantum physics. At that point, fundamental physics was in disarray, as theorists struggled to find a consistent framework for quantum ideas to replace the existing confused patchwork of insights and methods. In 1925, at the age of twenty-three, Heisenberg discovered this framework and published his “matrix mechanics.” (In 1926, Erwin Schrödinger published a “wave mechanics” that was soon shown to be the same theory in different mathematical guise. Heisenberg and Schrödinger both received the Nobel Prize in Physics.) In 1927, Heisenberg formulated his celebrated and very fundamental “uncertainty principle,” which holds that the “coordinates” and “momenta” that classical physics uses to describe the state of a system cannot have definite values at the same time. He continued to make important contributions to nuclear physics, condensed matter physics, and particle physics. During World War II, he led Germany’s atomic bomb project. His motives and his commitment to the project have remained the subject of controversy. However, he certainly deplored what he called the “infamies” of the Nazi regime, which he saw as a “flight into insanity that took the form of a political movement.”


The probabilistic character of quantum theory leads to very difficult epistemological and ontological questions, which have given rise to a variety of “interpretations.” The issues are too complex and subtle to review here. However, it may be of great significance that the traditional interpretation (also called the “Copenhagen,” “standard,” or “orthodox” interpretation) gives special status to the mind of the “observer” (i.e., the one who knows the outcomes of experiments or observations). The reason for this, in a nutshell, is that probabilities have to do with someone’s degree of knowledge or lack thereof. (If one knows a future outcome, one need not use probabilities to discuss it.) As the eminent physicist Sir Rudolf Peierls (1907–95) put it, “The quantum mechanical description is in terms of knowledge, and knowledge requires somebody who knows.” Peierls and others, such as the Nobel laureate Eugene Wigner (1902–95), have argued that the traditional interpretation of quantum theory implies that the mind of the observer cannot be completely described in physical terms. If true, this assertion has profound philosophical—in particular, antimaterialist—implications. However, dissatisfaction with the traditional interpretation has led many to embrace alternatives, such as the “many worlds interpretation” or a version of the “hidden variable” or “pilot wave” theories. There is no majority view on these questions among either physicists or philosophers.

None of this philosophical confusion means that quantum theory is in any trouble as a theory of physics. There is no ambiguity or controversy about its testable predictions; and these predictions have been confirmed in countless ways over a period of eighty years, as of this writing. If superstring theory proves to be the ultimate theory of physics, as many leading physicists expect, then quantum theory is probably secure, for superstring theory does not at this point seem to entail any revision of the fundamental postulates of quantum theory.

We have observed that most great advances in physics lead to profound unifications in our understanding of nature. Quantum theory is no exception; it led to one of the most remarkable unifications of all, namely of matter and forces. In the classical electromagnetic theory of Maxwell, light is made up of waves in a field. However, Planck in 1900 and Einstein in 1905 showed that certain phenomena could not be understood unless light was assumed to come in discrete chunks, or “quanta,” of energy—in other words, particles. These particles of light are now called “photons.” The puzzle that something could be both a wave and a particle, a seeming contradiction, was resolved in quantum theory. “Wave-particle duality” was then found to apply across the board. Just as things that were understood classically to be waves were seen to be also particles, so things that were understood classically to be particles were now seen also to be waves—indeed, waves in a field. For instance, the electron is both a particle and a wave in an “electron field” that fills all of space. On the other hand, as Faraday taught us, forces also arise from fields. Thus, both particles of matter and the forces by which they interact are manifestations of one kind of thing, a field, which is why the basic language of fundamental physics for the last half-century has been “quantum field theory.” The force between two particles can be understood as being due to “field lines” stretching between them, as Faraday pictured it, or, equivalently, as being due to the exchange of “virtual particles” between them, as Richard Feynman (1918–88) pictured it.
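The “chunks” come in definite sizes: a quantum of light of frequency ν carries energy E = hν, where h is Planck’s constant. A short computation (Python; the frequency for green light is a round illustrative value) shows how small each chunk is:

```python
PLANCK_H = 6.626_070_15e-34  # Planck's constant, J*s

def photon_energy(frequency_hz: float) -> float:
    """Energy of a single quantum of light: E = h * frequency."""
    return PLANCK_H * frequency_hz

green_light = 5.6e14  # Hz, roughly green visible light
e = photon_energy(green_light)
print(e)        # about 3.7e-19 joules per photon
print(1.0 / e)  # a 1-watt green lamp would emit ~2.7e18 such quanta per second
```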


FARADAY, MICHAEL (1791–1867) was born in London. He was the son of a blacksmith and his education was, in his own words, “of the most ordinary description, consisting of little more than the rudiments of reading, writing, and arithmetic at a common day school.” At thirteen he became an errand boy in a bookshop, and at fourteen he started a seven-year apprenticeship as a bookbinder, which gave him the opportunity to read many scientific books. In 1812, Faraday attended public lectures by the famous chemist Sir Humphry Davy, taking copious notes (Davy had pioneered the use of electricity to break apart compounds and had discovered in that way five chemical elements). Faraday applied for a job with him and was at first rebuffed. He soon applied again, sending Davy the notes he had taken. Impressed, Davy hired him as a secretary, then fired him (advising him to go back to bookbinding), then hired him again as a laboratory assistant. Faraday soon became a brilliant experimental chemist in his own right; however, he is most famous for his research in electromagnetism. In particular, he showed that wires moving relative to magnets had electrical currents “induced” in them. (Faraday showed that the same current was induced whether the magnet moved or the wire. This fact was one of the clues that led Einstein to his theory of relativity.) Faraday’s law of induction is one of the pillars of Maxwell’s theory of electromagnetism. It is even more important that Faraday was the first to articulate the concept of a force “field,” an idea fundamental to modern physics. Devout, humble, and generous, Faraday is one of the most appealing personalities among the great scientists.