For a good many centuries, human thought about nature has swung between two opposing points of view. According to one view, the universe obeys fixed, immutable laws, and everything exists in a well-defined objective reality. The opposing view is that there is no such thing as objective reality; that all is flux, all is change. As the Greek philosopher Heraclitus put it, “You can’t step into the same river twice.” The rise of science has largely been governed by the first viewpoint. But there are increasing signs that the prevailing cultural background is starting to switch to the second—ways of thinking as diverse as postmodernism, cyberpunk, and chaos theory all blur the alleged objectivity of reality and reopen the ageless debate about rigid laws and flexible change.
What we really need to do is get out of this futile game altogether. We need to find a way to step back from these opposing worldviews—not so much to seek a synthesis as to see them both as two shadows of some higher order of reality—shadows that are different only because the higher order is being seen from two different directions. But does such a higher order exist, and if so, is it accessible? To many—especially scientists—Isaac Newton represents the triumph of rationality over mysticism. The famous economist John Maynard Keynes, in his essay “Newton, the Man,” saw things differently:
In the eighteenth century and since, Newton came to be thought of as the first and greatest of the modern age of scientists, a rationalist, one who taught us to think on the lines of cold and untinctured reason. I do not see him in this light. I do not think that anyone who has pored over the contents of that box which he packed up when he finally left Cambridge in 1696 and which, though partly dispersed, have come down to us, can see him like that. Newton was not the first of the age of reason. He was the last of the magicians, the last of the Babylonians and Sumerians, the last great mind which looked out on the visible and intellectual world with the same eyes as those who began to build our intellectual inheritance rather less than 10,000 years ago. Isaac Newton, a posthumous child born with no father on Christmas Day, 1642, was the last wonder-child to whom the Magi could do sincere and appropriate homage.
Keynes was thinking of Newton’s personality, and of his interests in alchemy and religion as well as in mathematics and physics. But in Newton’s mathematics we also find the first significant step toward a worldview that transcends and unites both rigid law and flexible flux. The universe may appear to be a storm-tossed ocean of change, but Newton—and before him Galileo and Kepler, the giants upon whose shoulders he stood—realized that change obeys rules. Not only can law and flux coexist, but law generates flux.
Today’s emerging sciences of chaos and complexity supply the missing converse: flux generates law. But that is another story, reserved for the final chapter.
Prior to Newton, mathematics had offered an essentially static model of nature. There were a few exceptions, the most obvious being Ptolemy’s theory of planetary motion, which reproduced the observed changes very accurately using a system of circles revolving about centers that themselves were attached to revolving circles—wheels within wheels within wheels. But at that time the perceived task of mathematics was to discover the catalogue of “ideal forms” employed by nature. The circle was held to be the most perfect shape possible, on the basis of the democratic observation that every point on the circumference of a circle lies at the same distance from its center. Nature, the creation of higher beings, is by definition perfect, and ideal forms are mathematical perfection, so of course the two go together. And perfection was thought to be unblemished by change.
Kepler challenged that view by finding ellipses in place of complex systems of circles. Newton threw it out altogether, replacing forms by the laws that produce them.
Although its ramifications are immense, Newton’s approach to motion is a simple one. It can be illustrated using the motion of a projectile, such as a cannonball fired from a gun at an angle. Galileo discovered experimentally that the path of such a projectile is a parabola, a curve known to the ancient Greeks and related to the ellipse. In this case, it forms an inverted U-shape. The parabolic path can be most easily understood by decomposing the projectile’s motion into two independent components: motion in a horizontal direction and motion in a vertical direction. By thinking about these two types of motion separately, and putting them back together only when each has been understood in its own right, we can see why the path should be a parabola.
The cannonball’s motion in the horizontal direction, parallel to the ground, is very simple: it takes place at a constant speed. Its motion in the vertical direction is more interesting. It starts moving upward quite rapidly, then it slows down, until for a split second it appears to hang stationary in the air; then it begins to drop, slowly at first but with rapidly increasing velocity.
Newton’s insight was that although the position of the cannonball changes in quite a complex way, its velocity changes in a much simpler way, and its acceleration varies in a very simple manner indeed. Figure 2 summarizes the relationship between these three functions, in the following example.
Suppose for the sake of illustration that the initial upward velocity is fifty meters per second (50 m/sec). Then the height of the cannonball above ground, at one-second intervals, is:
0, 45, 80, 105, 120, 125, 120, 105, 80, 45, 0.
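Readers who like to tinker can check both this table and the parabola claim with a short computational sketch; it is mine, not part of the original argument. The vertical figures are the ones just listed, the horizontal speed is an arbitrary choice (the text does not specify one), and the downward acceleration is the rounded 10 m/sec² used throughout this chapter.

    # Rebuild the cannonball's path from its two independent components.
    # All numbers are illustrative: 30 m/sec horizontally is an assumption.
    horizontal_speed = 30.0   # meters per second, constant throughout
    vertical_speed = 50.0     # meters per second, initially upward

    for t in range(11):       # one-second intervals, t = 0, 1, ..., 10
        x = horizontal_speed * t              # steady horizontal progress
        y = vertical_speed * t - 5.0 * t * t  # rise, hang, then fall
        print(x, y)

    # The heights y reproduce the sequence above: 0, 45, 80, ..., 125, ..., 0.
    # Eliminating t gives y = (50/30)x - (5/900)x*x, a quadratic in x,
    # which is exactly what it means for the path to be a parabola.

Nothing in the sketch goes beyond the arithmetic already described; it simply puts the two components back together, as suggested above.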
You can see from these numbers that the ball goes up, levels off near the top, and then goes down again. But the general pattern is not entirely obvious. The difficulty was compounded in Galileo’s time—and, indeed, in Newton’s—because it was hard to measure these numbers directly. In actual fact, Galileo rolled a ball up a gentle slope to slow the whole process down. The biggest problem was to measure time accurately: the historian Stillman Drake has suggested that perhaps Galileo hummed tunes to himself and subdivided the basic beat in his head, as a musician does.
The pattern of distances is a puzzle, but the pattern of velocities is much clearer. The ball starts with an upward velocity of 50 m/sec. One second later, the velocity has decreased to (roughly) 40 m/sec; a second after that, it is 30 m/sec; then 20 m/sec, 10 m/sec, then 0 m/sec (stationary). A second after that, the velocity is 10 m/sec downward. Using negative numbers, we can think of this as an upward velocity of -10 m/sec. In successive seconds, the pattern continues: -20 m/sec, -30 m/sec, -40 m/sec, -50 m/sec. At this point, the cannonball hits the ground. So the sequence of velocities, measured at one-second intervals, is:
FIGURE 2. Calculus in a nutshell. Three mathematical patterns determined by a cannonball: height, velocity, and acceleration. The pattern of heights, which is what we naturally observe, is complicated. Newton realized that the pattern of velocities is simpler, while the pattern of accelerations is simpler still. The two basic operations of calculus, differentiation and integration, let us pass from any of these patterns to any other. So we can work with the simplest, acceleration, and deduce the one we really want—height.
50, 40, 30, 20, 10, 0, -10, -20, -30, -40, -50.
Now there is a pattern that can hardly be missed; but let’s go one step further by looking at accelerations. The corresponding sequence for the acceleration of the cannonball, again using negative numbers to indicate downward motion, is
-10, -10, -10, -10, -10, -10, -10, -10, -10, -10, -10.
I think you will agree that the pattern here is extremely simple. The ball undergoes a constant downward acceleration of 10 m/sec². (The true figure is about 9.81 m/sec², depending on whereabouts on the Earth you perform the experiment. But 10 is easier to think about.)
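The same discovery can be made numerically, without any physics at all, by taking successive differences of the height sequence; the little sketch below is an illustration of mine, not a historical method. The first differences are average velocities over each second, so they come out as 45, 35, ..., -45 rather than the instantaneous values listed above, but they drop by 10 every second in just the same way, and the second differences land exactly on the constant -10.

    # Heights of the cannonball at one-second intervals, from the text.
    heights = [0, 45, 80, 105, 120, 125, 120, 105, 80, 45, 0]

    # First differences: change in height over each second (average velocity).
    velocities = [heights[i + 1] - heights[i] for i in range(len(heights) - 1)]

    # Second differences: change in velocity over each second (acceleration).
    accelerations = [velocities[i + 1] - velocities[i]
                     for i in range(len(velocities) - 1)]

    print(velocities)     # 45, 35, 25, 15, 5, -5, -15, -25, -35, -45
    print(accelerations)  # -10, -10, -10, -10, -10, -10, -10, -10, -10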
How can we explain this constant that is hiding among the dynamic variables? When all else is flux, why is the acceleration fixed? One attractive explanation has two elements. The first is that the Earth must be pulling the ball downward; that is, there is a gravitational force that acts on the ball. It is reasonable to expect this force to remain the same at different heights above the ground. Indeed, we feel weight because gravity pulls our bodies downward, and we still weigh the same if we stand at the top of a tall building. Of course, this appeal to everyday observation does not tell us what happens if the distance becomes sufficiently large—say the distance that separates the Moon from the Earth. That’s a different story, to which we shall return shortly.
The second element of the explanation is the real breakthrough. We have a body moving under a constant downward force, and we observe that it undergoes a constant downward acceleration. Suppose, for the sake of argument, that the pull of gravity were a lot stronger: then we would expect the downward acceleration to be a lot stronger, too. Without going to a heavy planet, such as Jupiter, we can’t test this idea, but it looks reasonable; and it’s equally reasonable to suppose that on Jupiter the downward acceleration would again be constant—but a different constant from what it is here. The simplest theory consistent with this mixture of real experiments and thought experiments is that when a force acts on a body, the body experiences an acceleration that is proportional to that force. And this is the essence of Newton’s law of motion. The only missing ingredients are the assumption that this is always true, for all bodies and for all forces, whether or not the forces remain constant; and the identification of the constant of proportionality as being related to the mass of the body. To be precise, Newton’s law of motion states that
mass × acceleration = force.
That’s it. Its great virtue is that it is valid for any system of masses and forces, including masses and forces that change over time. We could not have anticipated this universal applicability from the argument that led us to the law; but it turns out to be so.
Newton stated three laws of motion, but the modern approach views them as three aspects of a single mathematical equation. So I will use the phrase “Newton’s law of motion” to refer to the whole package.
The mountaineer’s natural urge when confronted with a mountain is to climb it; the mathematician’s natural urge when confronted with an equation is to solve it. But how? Given a body’s mass and the forces acting on it, we can easily solve this equation to get the acceleration. But this is the answer to the wrong question. Knowing that the acceleration of a cannonball is always -10 m/sec² doesn’t tell us anything obvious about the shape of its trajectory. This is where the branch of mathematics known as calculus comes in; indeed it is why Newton (and Leibniz) invented it. Calculus provides a technique, which nowadays is called integration, that allows us to move from knowledge of acceleration at any instant to knowledge of velocity at any instant. By repeating the same trick, we can then obtain knowledge of position at any instant. And that is the answer to the right question.
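For a feel of what that double passage from acceleration to position involves, here is a rough numerical stand-in for integration, stepping forward through time in small increments. It is only a sketch under simple assumptions of my own (a one-kilogram ball, a constant force, a step of a hundredth of a second), not Newton’s method, but it recovers the ten-second flight of the cannonball from nothing more than his law of motion.

    # Recover the cannonball's motion step by step from force and mass.
    mass = 1.0                     # kilograms; an arbitrary illustrative choice
    force = -10.0 * mass           # constant downward pull of gravity, newtons
    acceleration = force / mass    # Newton's law of motion, rearranged
    dt = 0.01                      # time step in seconds
    velocity = 50.0                # initial upward velocity, m/sec
    height = 0.0                   # initial height, meters
    t = 0.0

    while height >= 0.0:
        velocity += acceleration * dt   # "integrate" acceleration to get velocity
        height += velocity * dt         # "integrate" velocity to get height
        t += dt

    print("flight time is roughly", round(t, 2), "seconds")   # about 10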
As I said earlier, velocity is rate of change of position, and acceleration is rate of change of velocity. Calculus is a mathematical scheme invented to handle questions about rates of change. In particular, it provides a technique for finding rates of change—a technique known as differentiation. Integration “undoes” the effect of differentiation; and integrating twice undoes the effect of differentiating twice. Like the twin faces of the Roman god Janus, these twin techniques of calculus point in opposite directions. Between them, they tell you that if you know any one of the functions—position, velocity, or acceleration—at every instant, then you can work out the other two.
Newton’s law of motion teaches an important lesson: namely, that the route from nature’s laws to nature’s behavior need not be direct and obvious. Between the behavior we observe and the laws that produce it is a crevasse, which the human mind can bridge only by mathematical calculations. This is not to suggest that nature is mathematics—that (as the physicist Paul Dirac put it) “God is a mathematician.” Maybe nature’s patterns and regularities have other origins; but, at the very least, mathematics is an extremely effective way for human beings to come to grips with those patterns.
All of the laws of physics that were discovered by pursuing Isaac Newton’s basic insight—that change in nature can be described by mathematical processes, just as form in nature can be described by mathematical things—have a similar character. The laws are formulated as equations that relate not the physical quantities of primary interest but the rates at which those quantities change with time, or the rates at which those rates change with time. For example, the “heat equation,” which determines how heat flows through a conducting body, is all about the rate of change of the body’s temperature; and the “wave equation,” which governs the motion of waves in water, air, or other materials, is about the rate of change of the rate of change of the height of the wave. The physical laws for light, sound, electricity, magnetism, the elastic bending of materials, the flow of fluids, and the course of a chemical reaction are all equations for various rates of change.
Because a rate of change is about the difference between some quantity now and its value an instant into the future, equations of this kind are called differential equations. The term “differentiation” has the same origin. Ever since Newton, the strategy of mathematical physics has been to describe the universe in terms of differential equations, and then solve them.
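To make the strategy concrete, here is a hedged sketch of what “describe by a differential equation, then solve it” can look like for the heat equation, using made-up numbers for the rod and its conductivity. The real equation says that the temperature at a point changes at a rate proportional to how sharply the temperature curves in space around that point; the program simply replaces those rates of change by small differences.

    # A toy version of the heat equation on a metal rod, solved by
    # replacing rates of change with small differences.  All constants
    # are invented for illustration.
    k = 0.1           # conductivity constant
    dx = 1.0          # spacing between sample points along the rod
    dt = 0.1          # time step, kept small so the scheme behaves

    u = [0.0] * 21    # temperatures along the rod: cold everywhere...
    u[10] = 100.0     # ...except a hot spot in the middle

    for step in range(500):
        new_u = u[:]
        for i in range(1, len(u) - 1):
            # The second difference in space stands in for the second rate of change.
            curvature = u[i - 1] - 2 * u[i] + u[i + 1]
            new_u[i] = u[i] + dt * k * curvature / (dx * dx)
        u = new_u

    print([round(x, 1) for x in u])   # the hot spot has spread out and cooled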
However, as we have pursued this strategy into more sophisticated realms, the meaning of the word “solve” has undergone a series of major changes. Originally it implied finding a precise mathematical formula that would describe what a system does at any instant of time. Newton’s discovery of another important natural pattern, the law of gravitation, rested upon a solution of this kind. He began with Kepler’s discovery that planets move in ellipses, together with two other mathematical regularities that were also noted by Kepler. Newton asked what kind of force, acting on a planet, would be needed to produce the pattern that Kepler had found. In effect, Newton was trying to work backward from behavior to laws, using a process of induction rather than deduction. And he discovered a very beautiful result. The necessary force should always point in the direction of the Sun; and it should decrease with the distance from the planet to the Sun. Moreover, this decrease should obey a simple mathematical law, the inverse-square law. This means that the force acting on a planet at, say, twice the distance is reduced to one-quarter, the force acting on a planet at three times the distance is reduced to one-ninth, and so on. From this discovery—which was so beautiful that it surely concealed a deep truth about the world—it was a short step to the realization that it must be the Sun that causes the force in the first place. The Sun attracts the planet, but the attraction becomes weaker if the planet is farther away. It was a very appealing idea, and Newton took a giant intellectual leap: he assumed that the same kind of attractive force must exist between any two bodies whatsoever, anywhere in the universe.
And now, having “induced” the law for the force, Newton could bring the argument full circle by deducing the geometry of planetary motion. He solved the equations given by his laws of motion and gravity for a system of two mutually attracting bodies that obeyed his inverse-square law; in those days, “solved” meant finding a mathematical formula for their motion. The formula implied that they must move in ellipses about their common center of mass. As Mars moves around the Sun in a giant ellipse, the Sun moves in an ellipse so tiny that its motion goes undetected. Indeed, the Sun is so massive compared to Mars that the mutual center of mass lies beneath the Sun’s surface, which explains why Kepler thought that Mars moved in an ellipse around the stationary Sun.
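Newton’s deduction was exact: a formula proving that the orbits are ellipses. A modern reader without the formula can still watch the behavior emerge by simulating two mutually attracting bodies numerically. The sketch below is only an illustration, with invented masses, starting positions, and units in which the gravitational constant is 1; it applies the law of motion and the inverse-square attraction step by step, and the output shows the separation swinging between a smallest and a largest value, as it should on an ellipse rather than a circle, while the common center of mass stays put.

    import math

    G = 1.0                        # gravitational constant in made-up units
    m1, m2 = 100.0, 1.0            # a heavy "sun" and a light "planet"
    x1, y1, vx1, vy1 = 0.0, 0.0, 0.0, -0.03  # sun recoils slightly: total momentum is zero
    x2, y2, vx2, vy2 = 10.0, 0.0, 0.0, 3.0   # planet launched sideways
    dt = 0.001
    r_min, r_max = float("inf"), 0.0

    for step in range(200_000):    # roughly a dozen orbits at this step size
        dx, dy = x2 - x1, y2 - y1
        r = math.hypot(dx, dy)
        r_min, r_max = min(r_min, r), max(r_max, r)
        # Inverse-square attraction along the line joining the bodies.
        f = G * m1 * m2 / (r * r)
        fx, fy = f * dx / r, f * dy / r
        # Newton's law of motion: acceleration = force / mass, applied to each body.
        vx1 += fx / m1 * dt
        vy1 += fy / m1 * dt
        vx2 -= fx / m2 * dt
        vy2 -= fy / m2 * dt
        x1 += vx1 * dt
        y1 += vy1 * dt
        x2 += vx2 * dt
        y2 += vy2 * dt

    cx = (m1 * x1 + m2 * x2) / (m1 + m2)   # center of mass stays essentially fixed
    cy = (m1 * y1 + m2 * y2) / (m1 + m2)
    print("separation ranges from", round(r_min, 2), "to", round(r_max, 2))
    print("center of mass:", round(cx, 2), round(cy, 2))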
However, when Newton and his successors tried to build on this success by solving the equations for a system of three or more bodies—such as Moon/Earth/Sun, or the entire Solar System—they ran into technical trouble; and they could get out of trouble only by changing the meaning of the word “solve.” They failed to find any formulas that would solve the equations exactly, so they gave up looking for them. Instead, they tried to find ways to calculate approximate numbers. For example, around 1860 the French astronomer Charles-Eugène Delaunay filled an entire book with a single approximation to the motion of the Moon, as influenced by the gravitational attractions of the Earth and the Sun. It was an extremely accurate approximation—which is why it filled a book—and it took him twenty years to work it out. When it was subsequently checked, in 1970, using a symbolic-algebra computer program, the calculation took a mere twenty hours: only three mistakes were found in Delaunay’s work, none serious.
The motion of the Moon/Earth/Sun system is said to be a three-body problem—for evident reasons. It is so unlike the nice, tidy two-body problem Newton solved that it might as well have been invented on another planet in another galaxy, or in another universe. The three-body problem asks for a solution of the equations that describe the motion of three masses under inverse-square-law gravity. Mathematicians tried to find such a solution for centuries but met with astonishingly little success beyond approximations, such as Delaunay’s, which worked only for particular cases, like Moon/Earth/Sun. Even the so-called restricted three-body problem, in which one body has a mass so small that it can be considered to exert no force at all upon the other two, proved utterly intractable. It was the first serious hint that knowing the laws might not be enough to understand how a system behaves; that the crevasse between laws and behavior might not always be bridgeable.
Despite intensive effort, more than three centuries after Newton we still do not have a complete answer to the three-body problem. However, we finally know why the problem has been so hard to crack. The two-body problem is “integrable”—the laws of conservation of energy and momentum restrict solutions so much that they are forced to take a simple mathematical form. In 1994, Zhihong Xia, of the Georgia Institute of Technology, proved what mathematicians had long suspected: that a system of three bodies is not integrable. Indeed, he did far more, by showing that such a system can exhibit a strange phenomenon known as Arnold diffusion, first discovered by Vladimir Arnold, of Moscow State University, in 1964. Arnold diffusion produces an extremely slow, “random” drift in the relative orbital positions. This drift is not truly random: it is an example of the type of behavior now known as chaos—which can be described as apparently random behavior with purely deterministic causes.
Notice that this approach again changes the meaning of “solve.” First that word meant “find a formula.” Then its meaning changed to “find approximate numbers.” Finally, it has in effect become “tell me what the solutions look like.” In place of quantitative answers, we seek qualitative ones. In a sense, what is happening looks like a retreat: if it is too hard to find a formula, then try an approximation; if approximations aren’t available, try a qualitative description. But it is wrong to see this development as a retreat, for what this change of meaning has taught us is that for questions like the three-body problem, no formulas can exist. We can prove that there are qualitative aspects to the solution that a formula cannot capture. The search for a formula in such questions was a hunt for a mare’s nest.
Why did people want a formula in the first place? Because in the early days of dynamics, that was the only way to work out what kind of motion would occur. Later, the same information could be deduced from approximations. Nowadays, it can be obtained from theories that deal directly and precisely with the main qualitative aspects of the motion. As we will see in the next few chapters, this move toward an explicitly qualitative theory is not a retreat but a major advance. For the first time, we are starting to understand nature’s patterns in their own terms.