The Art of Reckoning
We live in a technical civilization, of which most of us understand very little. Why does electric light go on when you press the switch? Why is it cold in the ice-box? How do airmen take aim at a target from a fast-moving aeroplane? What enables astronomers to predict eclipses? On what principles do insurance companies decide what to charge? These are all very practical questions; unless some one knew the answer, we could not enjoy the comforts upon which we are wont to pride ourselves. But those who know the answers are few. Usually these few invent a rule or a machine which enables other people to get on with very little knowledge; a practical electrician does not have to know the theory of electricity, though this was necessary for the inventions which he knows how to manipulate. If you want to be able to answer such every-day questions, you have to learn many things; the most indispensable of these is mathematics.
Some people will always dislike mathematics, however well they may be taught. They ought not to try to become mathematicians, and their teachers ought to let them off after they have proved their inefficiency over the rudiments. But if mathematics were properly taught, far fewer people would hate it than do at present.
There are various ways of stimulating a love of mathematics. One is the method unintentionally adopted by Galileo’s father, who was himself a teacher of mathematics, but found himself unable to make a living by his profession. He determined that his son should do something more lucrative, and with that end in view concealed from the youth the very existence of mathematics. But one day—so the story goes—the boy, now 18 years old, happened to overhear a lecture on geometry which was being given by a man in the next room. He was fascinated, and within a very short time became one of the leading mathematicians of the age. However, I doubt if this method is quite suitable for adoption by educational authorities. I think perhaps there are other methods that are likely to be more widely successful.
In the early stages, all teaching of mathematics should start from practical problems; they should be easy problems, and such as might seem interesting to a child. When I was young (perhaps things have not changed in this respect) the problems were such as no one could possibly wish to solve. For instance: A, B, and C are travelling from X to Y. A on foot, B on a horse, and C on a bicycle. A is always going to sleep at odd moments, B’s horse goes lame, and C has a puncture. A takes twice as long as B would have taken if his horse hadn’t gone lame, and C gets there half an hour after A would have got there if he hadn’t gone to sleep—and so on. Even the most ardent pupil is put off by this sort of thing.
The best way, in teaching, is to take a hint from the early history of mathematics. The subject was invented because there were practical problems that people really wished to solve, either from curiosity or for some urgent practical reason. The Greeks told endless stories about such problems and the clever men who found out how to deal with them. No doubt these stories are often untrue, but that does not matter when they are used as illustrations. I shall repeat a few of them, without vouching for their historical accuracy.
The founder of Greek mathematics and philosophy was Thales, who was a young man in 600 B.C. In the course of his travels he went to Egypt, and the king of Egypt asked him if he could find out the height of the Great Pyramid. Thales, at a given moment, measured the length of its shadow and of his own. It was obvious that the proportion of his height to the length of his shadow was the same as the proportion of the height of the pyramid to the length of its shadow, and so the answer was found by the rule of three. The king then asked him if he could find out the distance of a ship at sea without leaving the land. This is a more difficult problem, and he can hardly have given a general solution, although tradition says that he did. The principle is to observe the direction of the ship from two points on the coast which are at a known distance apart; the further off the ship is, the less difference there will be in the two directions. The complete answer requires trigonometry, which did not exist until many centuries after the time of Thales. But in particular cases the answer is easy. Suppose, for instance, that the coast runs east and west, that the ship is due north of a certain point A on the coast, and due north-west of a certain other point B. Then the distance from A to the ship will be the same as the distance from A to B, as the reader can easily convince himself by drawing a figure. Supposing the ship to be part of a hostile navy, and Egyptian troops drawn up on the shore to oppose it, this knowledge might be very useful.
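Thales's method is simple proportion, the rule of three; a sketch in Python (the particular heights and shadow lengths below are invented for illustration):

```python
# Thales's shadow method: the pyramid's height stands to its shadow
# as a man's height stands to his shadow (the rule of three).
# The specific lengths here are made up for illustration.

def height_from_shadows(my_height, my_shadow, target_shadow):
    """Scale the target's shadow by the ratio height / shadow."""
    return target_shadow * (my_height / my_shadow)

# A 6-foot man casting a 9-foot shadow, while the pyramid casts 720 feet:
print(height_from_shadows(6.0, 9.0, 720.0))  # → 480.0
```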
Serious mathematics began with the proposition known as the theorem of Pythagoras. The Egyptians had made some slight beginnings in geometry, in order—so it is said—to be able to measure out their fields again when the Nile flood subsided. They had noticed that a triangle whose sides are respectively 3, 4, and 5 units of length has a right angle. Pythagoras (or some one belonging to his school) noticed a curious fact about this triangle. If you make squares on the sides of a triangle of this kind, one square will have 9 square units, another 16, and the third 25; now 9 and 16 together make 25. Pythagoras (or a disciple) generalized this, and proved that in any right-angled triangle the squares on the shorter sides are together equal to the square on the longest side. This was a most important discovery, and encouraged the Greeks to construct the science of geometry, which they did with amazing skill.
But out of this discovery a worry arose, which troubled both the Greeks and the mathematicians of modern times, and has only been fully solved in our own day. Suppose you have a right-angled triangle in which each of the shorter sides is one inch long; how long will the third side be? The square on each of the shorter sides is one square inch; so the square on the longer side will measure two square inches. So the length of the longer side must be some number of inches such that, when you multiply this number by itself, you get 2. This number is called “the square root of 2.” The Greeks soon discovered that there is no such number. You can easily persuade yourself of this. The number can’t be a whole number, because 1 is too small and 2 is too big. But if you multiply a fraction in its lowest terms by itself, the result is again a fraction in its lowest terms, never a whole number; so there cannot be any fraction which, multiplied by itself, gives 2. So the square root of 2 is neither a whole number nor a fraction. What else it could be remained a mystery, but mathematicians continued hopefully to use it and talk about it, in the expectation that some day they would discover what they meant. In the end this expectation proved justified.
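A short computation illustrates the point: no fraction squares to exactly 2, though some come remarkably close. The search bound below is an arbitrary choice for illustration.

```python
from fractions import Fraction

# No fraction multiplied by itself gives exactly 2.  For each
# denominator q up to a bound, try the nearest numerator and keep
# the closest miss found so far.
def best_approximation(max_q):
    best = None
    for q in range(1, max_q + 1):
        p = round(2 ** 0.5 * q)              # nearest numerator for this q
        f = Fraction(p, q)
        assert f * f != 2                    # never exactly 2
        if best is None or abs(f * f - 2) < abs(best * best - 2):
            best = f
    return best

f = best_approximation(100)
print(f, float(f * f))   # a near miss: its square differs from 2 by under 0.001
```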
A similar problem arose as to what is called “the cube root of 2,” that is to say, a number x such that x times x times x is 2. A certain city—so the story runs—had been dogged by misfortunes, and at last sent to consult the oracle of Apollo at Delphi as to the cause of the series of disasters. The god replied that the statue of himself in his temple in that city was too small, and he wanted one twice as large. The citizens were eager to comply with the divine commands, and at first they thought of making a statue twice as high as the old one. But then they realized that it would be also twice as broad and twice as thick, so that it would need eight times as much material, and would, in fact, be eight times as large. This would be going beyond what the oracle ordained, and would be a waste of money. How much taller, then, must the new statue be, if, altogether, it was to be twice as large? The citizens sent a deputation to Plato, to ask if any one in his academy could give them the answer. He set the mathematicians to work on the problem. But after some centuries they decided that it was insoluble. It could, of course, be solved approximately, but, as in the case of the square root of 2, there is no fraction that solves it exactly. Though the problem was not solved, much useful work was done in the course of looking for a solution.
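The required scale factor is the cube root of 2, roughly 1.26; a quick check in Python:

```python
# Doubling the statue's volume: every linear dimension must grow by
# the cube root of 2, about 1.26, not by 2.
scale = 2 ** (1 / 3)
print(round(scale, 4))        # about 1.2599
print(round(scale ** 3, 9))   # the volume factor: 2.0
# A statue twice as tall (scale 2) would be 2**3 = 8 times as large:
print(2 ** 3)                 # → 8
```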
Leaving the ancients for the present, let us come to the problems of insurance companies. Suppose you wish to insure your life, so that your widow will get $1,000 when you die. How much ought you to pay every year? Let us suppose your age such that the average man of your age will live another 20 years. If you pay $50 a year, you will, in 20 years, have paid $1,000, and at first sight you might think it a fair bargain if the insurance company asked you to pay $50 a year. But in fact this would be too much, because of interest. Assuming you live 20 years, your first $50 will be invested by the insurance company, and will bring interest; the interest will be invested, and so on; so that you have to calculate what your $50 will amount to in 20 years at compound interest. For the next $50, you have to calculate what it will amount to in 19 years at compound interest, and so on. In this way, your payments will have brought the insurance company much more than $1,000 by the end of the 20 years. In fact, if the insurance company gets 4 percent on its investments, your payments of $50 a year will have brought in about $1,500 at the end of the 20 years. To work out sums of this sort, you have to know how to add up what are called “geometrical progressions.”
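The arithmetic can be sketched in a few lines of Python, assuming, as in the text, a payment at the start of each year and 4 percent interest:

```python
# Value, after 20 years, of $50 paid at the start of each year and
# invested at 4 percent compound interest: the first payment earns
# interest for 20 years, the second for 19, and so on.
rate = 0.04
total = sum(50 * (1 + rate) ** years for years in range(1, 21))
print(round(total, 2))    # about 1548 -- "much more than $1,000"
```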
A “geometrical progression” is a series of numbers in which each, after the first, is a fixed multiple of its predecessor. For instance, 1, 2, 4, 8, 16, … is a geometrical progression in which each number is double of its predecessor; 1, 3, 9, 27, 81, … is a geometrical progression in which each number is three times its predecessor; 1, 1/2, 1/4, 1/8, 1/16, … is one in which each number is half its predecessor, and so on.
Now let us return to our dollar invested at 4 percent compound interest. At the end of the year, it amounts to $1.04. At the end of the second year, you have $1.04 and a year’s interest on it; this is 1.04 times 1.04, i.e. (1.04)^2. At the end of the third year, it amounts to (1.04)^3, and so on. And so, if you pay a dollar a year for 20 years, at the end of the 20th year what you have paid has become worth
(1.04)^20 plus (1.04)^19 plus … plus (1.04)^2 plus 1.04, which is a geometrical progression.
The ancients took much interest in geometrical progressions, particularly in those that go on forever. For instance, 1/2 plus 1/4 plus 1/8 plus 1/16 plus … for ever adds up to 1. So does the recurring decimal .9999…. This led to all sorts of puzzles, which took a long time to solve.
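The partial sums of the first of these progressions can be watched creeping up to their limit; a sketch using exact fractions:

```python
from fractions import Fraction

# Partial sums of 1/2 + 1/4 + 1/8 + ... creep up toward 1 but never
# reach it; 1 is the "limit" of the progression.
partial = Fraction(0)
for n in range(1, 11):
    partial += Fraction(1, 2 ** n)
    print(n, partial)               # 1/2, 3/4, 7/8, 15/16, ...
# After n terms the shortfall below 1 is exactly 1/2**n:
assert 1 - partial == Fraction(1, 2 ** 10)
```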
Ancient geometry concerned itself not only with straight lines and circles, but also with “conic sections,” which are the various sorts of curves that can be made by the intersection of a plane and a cone; or again they can be defined as all the possible shapes of the shadow of a circle on a wall. The Greeks studied them simply for pleasure, without any idea of practical utility, which they despised. But after about 2,000 years, in the 17th Century, they were suddenly found to be of the greatest practical importance. The development of artillery had shown people that if you want to hit a distant object, you must not aim straight at it, but a little above it. No one knew exactly how a cannon ball traveled, but military commanders were anxious to know. Galileo, who was employed by the Duke of Tuscany, discovered the answer: they travel in parabolas, which are a particular kind of conic section. At about the same time Kepler discovered that the planets go round the sun in ellipses, which are another kind of conic section. In this way all the work that had been done on conic sections became useful in warfare, navigation, and astronomy.
I spoke a moment ago of conic sections as shadows of circles. You can make the different kinds of conic sections for yourself if you have a lamp with a circular lampshade. The shadow of the lampshade on the ceiling (unless it is crooked) will be another circle, but the shadow on the wall will be a hyperbola. If you take a piece of paper and hold it above the lampshade, if you hold it not quite horizontal the shadow will be an ellipse; as you slope it more, the ellipse will get longer and thinner; the first shadow that is not an ellipse, as you slope the piece of paper more and more, is a parabola; after that, it becomes a hyperbola. Falling drops in a fountain make a parabola; so do stones thrown over a cliff.
Mathematically, as any one can see, the subject of shadows is the same as perspective. The study of the properties which a figure has in common with all its possible shadows is called “projective” geometry; although it is really simpler than the sort of geometry that the Greeks did, it was discovered much later. One of the pioneers in this subject was Pascal, who, unfortunately, decided that religious meditation was more important than mathematics.
I have not hitherto said anything about algebra, which owed its beginning to the very late Alexandrian Greeks, but was mainly developed, first by the Arabs, and then by the men of the 16th and 17th Centuries. Algebra, at first, is apt to seem more difficult than geometry, because in geometry there is a concrete figure to look at, whereas the x’s and y’s of algebra are wholly abstract. Algebra is only generalized arithmetic: when there is some proposition which is true of any number, it is a waste of time to prove it for each separate number, so we say “let x be any number” and proceed with our reasoning. Suppose, for instance, you notice that 1 plus 3 is 4, which is twice 2; 1 plus 3 plus 5 is 9, which is 3 times 3; 1 plus 3 plus 5 plus 7 is 16, which is 4 times 4. It may occur to you to wonder whether this is a general rule, but you will need algebra to express at all simply the question you are asking yourself, which is: “Is the sum of the first n odd numbers always n times n?” When you have come to the point of being able to understand this question, you will easily find a proof that the answer is yes. If you don’t use a letter such as n, you have to use very complicated language. You can say: “If any number of odd numbers, starting from 1 and missing none, are added up, the result is the square of the number of odd numbers added.” This is much more difficult to understand. And when we reach more complicated questions, it soon becomes quite impossible to understand them without the use of letters instead of the phrase “any number.”
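The rule about odd numbers is easy to check mechanically; a sketch in Python:

```python
# The sum of the first n odd numbers is always n times n.
for n in range(1, 101):
    first_n_odds = range(1, 2 * n, 2)    # 1, 3, 5, ..., 2n - 1
    assert sum(first_n_odds) == n * n
print(sum(range(1, 19, 2)))              # the first 9 odd numbers → 81
```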
Even problems that have to do with particular numbers are often much easier to solve if we use the letter x for the number we want. When I was very young I was puzzled by the conundrum: “If a fish weighs 5 lbs, and half its own weight, how much does it weigh?” Many people are inclined to say 7½ lbs. If you begin “let x be the weight of the fish,” and go on “5 lbs added to one-half of x equals x,” it is obvious that 5 lbs is half of x, so that x is 10 lbs. This problem is almost too easy to need “x.” Take one just a little more difficult. The police are pursuing a criminal along a certain road; he has 10 minutes’ start, but the police car can do 70 miles an hour, while the criminal’s car can only do 60. How long will it take them to catch up with him? The answer, of course, is 1 hour. This can be “seen” in one’s head; but if I said the criminal had 7 minutes’ start, his car could do 53 miles an hour, and the police car could do 67, you would find it best to begin: Let t be the number of minutes it will take the police to catch up. Getting used to the algebraic use of letters is difficult for a boy or girl beginning algebra. It should be made easy by first giving a great many instances of a general formula. For instance:

11 times 11 is 10 times 10 plus twice 10 plus 1
12 times 12 is 11 times 11 plus twice 11 plus 1
13 times 13 is 12 times 12 plus twice 12 plus 1, and so on,

and in the end it becomes easy to understand that

(n plus 1) times (n plus 1) is n times n plus twice n plus 1.

In the early stages of teaching algebra, this process should be repeated with each new formula.
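Both the general formula above and the harder pursuit problem can be checked in a few lines of Python:

```python
# The general formula: (n plus 1) times (n plus 1) is
# n times n plus twice n plus 1, for every n.
for n in range(1, 1000):
    assert (n + 1) * (n + 1) == n * n + 2 * n + 1

# The pursuit problem: the criminal has 7 minutes' start at 53 mph,
# the police do 67 mph.  In t minutes the police cover 67*t/60 miles
# and the criminal 53*(t + 7)/60 miles; setting these equal gives
# 67*t = 53*t + 53*7, so:
t = 53 * 7 / (67 - 53)
print(t)    # → 26.5 minutes
```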
One of the odd things about mathematics is that, in spite of its great practical utility, it seems, in much of the detail, like a mere frivolous game. No one is likely to become good at mathematics unless he enjoys the game for its own sake. Skilled work, of no matter what kind, is only done well by those who take a certain pleasure in it, quite apart from its utility, either to themselves in earning a living, or to the world through its outcome. No one can become a good mathematician merely in order to earn a living, or merely in order to be a useful citizen; he must also get the kind of satisfaction from mathematics that people get from solving bridge problems or chess problems. I will therefore give a few examples. If they amuse you, it may be worth your while to devote a good deal of time to mathematics; if not, not.
I remember that, as a boy, I discovered for myself, with great delight, the formula for the sum of what is called an “arithmetical progression.” An arithmetical progression is a series of numbers in which each term after the first is greater (or less) than its predecessor by a fixed amount. This fixed amount is called the “common difference.” The series 1, 3, 5, 7, … is an arithmetical progression, in which the common difference is 2. The series 2, 5, 8, 11, … is an arithmetical progression in which the common difference is 3. Suppose now you have an arithmetical progression consisting of a finite number of terms, and you want to know what all the terms together add up to. How will you proceed?
Let us take an easy example: the series 4, 8, 12, 16, … up to 96, that is to say, all numbers less than 100 that divide by 4. If you want to know what all these add up to, you can of course do the sum straightforwardly. But you can save yourself this trouble by a little observation. The first term is 4, the last is 96; these add up to 100. The second term is 8, the last but one is 92; these again add up to 100. So it is obvious that you can take the numbers in pairs, and that each pair will add up to 100. There are 24 numbers, therefore there are 12 pairs, therefore the sum you want is 1200. This suggests the general rule: To find the sum of an arithmetical progression, add together the first and last terms, and then multiply by half the number of terms. You can easily persuade yourself that this is right, not only when the number of terms is even, as in the above example, but also when it is odd.
But we may want to get a new form for this formula, if we have not been told what the last term is, but only the first term, the number of terms, and the common difference. Let us take an example. Suppose the first term is 5, the common difference is 3, and the number of terms is 21. Then the last term will be 5 plus 20 times 3, i.e. 65. So the sum of the first and last terms is 70, and the sum of the series is half of this multiplied by the number of terms, i.e. half of 70 times 21. This is 35 times 21, i.e. 735. The rule is: Add twice the first term to the common difference multiplied by one less than the number of terms in the series, and then multiply all this by half the number of terms in the series. This is the same as the earlier rule, but differently expressed.
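Both forms of the rule can be sketched and checked in Python:

```python
# First rule: add first and last terms, multiply by half the number
# of terms.  Second rule: the same sum from first term, common
# difference, and number of terms alone.
def sum_by_pairing(first, last, count):
    return (first + last) * count // 2

def sum_by_difference(first, diff, count):
    return count * (2 * first + diff * (count - 1)) // 2

# Multiples of 4 below 100: the series 4, 8, ..., 96 (24 terms).
assert sum_by_pairing(4, 96, 24) == sum(range(4, 97, 4)) == 1200
# First term 5, common difference 3, 21 terms (last term is 65):
assert sum_by_difference(5, 3, 21) == 735    # 35 times 21
print(sum_by_pairing(4, 96, 24), sum_by_difference(5, 3, 21))
```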
Now let us take another problem. Suppose you had a number of tanks, each a perfect cube, i.e. having length, breadth, and depth all equal. Suppose the first is 1 foot each way, the second 2, the third 3, and so on. You wish to know how many cubic feet of oil you can get them all to hold. The first will hold 1 cubic foot, the second 8, the third 27, the fourth 64, the fifth 125, the sixth 216, and so on. So what you want to find is the sum of the cubes of the first so many numbers. You notice that

1 and 8 make 9, i.e. 3 times 3, and 3 is half of 2 times 3.
1 and 8 and 27 make 36, i.e. 6 times 6, and 6 is half of 3 times 4.
1 and 8 and 27 and 64 make 100, i.e. 10 times 10, and 10 is half of 4 times 5.
1 and 8 and 27 and 64 and 125 make 225, i.e. 15 times 15, and 15 is half of 5 times 6.
1 and 8 and 27 and 64 and 125 and 216 make 441, i.e. 21 times 21, and 21 is half of 6 times 7.

This suggests a rule for the sum of the cubes of the first so many whole numbers. The rule is: Multiply the number of whole numbers concerned by one more than itself, take half of this product, and then take the square of the number you have now got. You can easily persuade yourself that this formula is always right, by what is called “mathematical induction.” This means: assume your formula is right up to a certain number, and prove that in that case it is right for the next number. Notice that your formula is right for 1. Then it follows that it is right for 2, and therefore for 3, and so on. This is a very powerful method, by which a great many of the properties of whole numbers are proved. It often enables you, as in the above case, to turn a guess into a theorem.
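The rule for cubes, in modern notation the square of half of n times (n plus 1), can be verified mechanically:

```python
# The sum of the first n cubes equals the square of n*(n+1)/2,
# verified here for the first hundred values of n.
for n in range(1, 101):
    total = sum(k ** 3 for k in range(1, n + 1))
    triangular = n * (n + 1) // 2
    assert total == triangular ** 2
print(sum(k ** 3 for k in range(1, 7)))   # → 441, i.e. 21 times 21
```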
Let us consider another kind of problem, which is called that of “combinations and permutations.” This kind of problem is often of great importance, but we will begin with trivial examples. Suppose a hostess wishes to give a dinner party, and there are 20 people to whom she owes an invitation, but she can only ask 10 at a time. How many possible ways are there of making a selection? Obviously there are 20 ways of choosing the first guest; when he has been chosen, there are 19 ways of choosing the next; and so on. When 9 guests have been chosen, there are 11 possibilities left, so the last guest can be chosen in 11 ways. So, counting each possible order of choosing, the number of possibilities is

20 times 19 times 18 times 17 times 16 times 15 times 14 times 13 times 12 times 11.

This is quite a large number; it is a miracle that hostesses do not become more bewildered. We can simplify the statement of the answer by using what are called “factorials.”

Factorial 2 means the product of all the numbers up to 2, i.e. 2
Factorial 3 means the product of all the numbers up to 3, i.e. 6
Factorial 4 means the product of all the numbers up to 4, i.e. 24
Factorial 5 means the product of all the numbers up to 5, i.e. 120

and so on. Now the product we had above, 20 times 19 times … times 11, is factorial 20 divided by factorial 10. But this counts the guests in every possible order of choosing, and a selection is the same selection whichever order it is made in; since 10 guests can be chosen in factorial 10 different orders, we must divide by factorial 10, and the number of distinct selections is factorial 20 divided by the product of factorial 10 and factorial 10. This is a problem in what is called “combinations.” The general rule is that the number of ways in which you can choose m things out of n things (n being greater than m) is factorial n divided by the product of factorial m and factorial (n minus m).
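In Python the two counts, with and without regard to the order of choosing, come out as follows:

```python
import math

# 20 * 19 * ... * 11 counts every order of choosing the 10 guests;
# it equals factorial 20 divided by factorial 10.
ordered = 1
for k in range(11, 21):
    ordered *= k
assert ordered == math.factorial(20) // math.factorial(10)

# A selection is the same whichever order it is made in, so divide
# by factorial 10, the number of orders in which 10 chosen guests
# could have been picked; math.comb applies this rule directly.
selections = ordered // math.factorial(10)
assert selections == math.comb(20, 10)
print(selections)    # → 184756 distinct dinner parties
```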
Now let us consider “permutations,” where the question is not what things to choose, but how to arrange them. Our hostess, we will suppose, has chosen her 10 guests, and is now considering how to seat them. She and her husband have fixed places, at the head and foot of the table, and the guests have to be distributed among the 10 other places. So there are 10 possibilities for the first guest, and when his place is fixed there are 9 for the next, and so on; thus the total number of possibilities is factorial 10, i.e. 3,628,800. Fortunately, social rules, such as alternating men and women and separating husbands and wives, reduce the effective possibilities to 4 or 5.
Let us take one more problem in “combinations.” Suppose you have a number of objects, and you may choose as many or as few of them as you like, and may even choose all or none. How many choices have you?
If there is one object A, you have 2 choices, A or none.
If there are 2, A and B, you have 4 choices, A and B, or A, or B, or none.
If there are 3, A and B and C, you have 8 choices, A and B and C, A and B, A and C, B and C, A, B, C, or none.
If there are 4, you have 16, and so on. The general rule is that the number of choices is 2 multiplied by itself as many times as there are objects. This is really obvious, because you have two possibilities in regard to each object, namely to choose it or reject it, and when you have made your choice in regard to one object you still have complete freedom as regards the others.
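This doubling rule can be confirmed by listing the choices outright; a sketch in Python:

```python
from itertools import combinations

# Each object may be chosen or rejected independently, so n objects
# give 2**n possible choices, counting "all" and "none".
def count_choices(objects):
    return sum(len(list(combinations(objects, size)))
               for size in range(len(objects) + 1))

assert count_choices("A") == 2
assert count_choices("AB") == 4
assert count_choices("ABC") == 8
assert count_choices("ABCD") == 16 == 2 ** 4
print(count_choices("ABCDE"))   # → 32
```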
Problems of permutations and combinations have an enormous number of applications. One of them is in the Mendelian theory of heredity. The first biologists who revived Mendel’s work knew almost no mathematics, but they found certain numbers constantly turning up. One of them mentioned this to a mathematical friend, who at once pointed out that they are numbers which occur in the theory of combinations, and when this had been noticed the reason was easy to see. Nowadays Mendelism is full of mathematics: take, for instance, such a problem as this: if a certain recessive characteristic gives you an advantage in the struggle for existence, will it tend to become general in a population in which it sometimes occurs? And, if so, how long will it take to belong to some given percentage of the population, if we know the percentage having this characteristic at present? Such problems are often of great practical importance, for instance, in regard to the spread of feeble-mindedness and other mental defects.
The great merit of modern as compared with ancient mathematics is that it can deal with continuous change. The only kind of motion that could be dealt with by ancient or medieval mathematics was uniform motion in a straight line or a circle: Aristotle had said that it was “natural” for earthly bodies to move in straight lines and heavenly bodies in circles, a view which persisted until Kepler and Galileo showed that it had no application to the facts. The technical instrument for dealing with continuous change is the differential and integral calculus, invented independently by Newton and Leibniz.
We may illustrate the use of the calculus by considering what is meant by “velocity.” Suppose you are in a train which has lately started from a station and is still gaining speed, and you want to know how fast it is moving at the present moment. We will suppose that you know how far apart the telegraph poles are, so that you can estimate the distance the train has traversed in a given time. You find, let us suppose, that in the second after the moment at which you wished to know the speed of the train it has traversed 44 feet. 44 feet a second is 30 miles an hour, so you say “we were doing 30 miles an hour.” But although this was your average speed throughout the second, it was not your speed at the beginning, because the train is accelerating, and was moving faster towards the end of the second than at the beginning. If you were able to measure sufficiently accurately, you might find that in the first quarter of a second the train traveled 10 feet, not 11. So the speed of the train at the beginning of the second was nearer 40 feet than 44. But 40 feet per second will still be too much, since even in a quarter of a second there will have been some acceleration. If you can measure small times and distances accurately, the shorter the time you take for estimating the train’s speed the more nearly right you will be. But you will never be quite right.
What, then, can be meant by the speed of the train at a given instant? This is the question that is answered by the differential calculus. You make a series of closer and closer approximations to its speed by taking shorter and shorter times. If you take one second, your estimate is 44 feet per second; if you take a quarter of a second, it is 40. We will suppose that there are men on the edge of the railway with stopwatches; they find that if you take a tenth of a second, the speed works out at 39.2 feet per second; if a twentieth of a second, at 39.1, and so on. Imagining an impossible accuracy of measurement and observation, we may suppose that the observers find, as they make the times shorter and shorter, that the speed as estimated is always slightly above 39, but is not always above any number greater than 39. Then 39 is called the “limit” of the series of numbers, and we say that 39 feet per second is the velocity of the train at the given instant. This is the definition of velocity at an instant.
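A hypothetical position function reproduces this pattern of closer and closer estimates; the function below is an invented one for illustration, not the observers' exact figures:

```python
# An invented position function for the accelerating train: speed
# 39 ft/s at the instant t = 0, with a steady gain of speed after.
def s(t):
    return 39 * t + 5 * t * t     # distance in feet after t seconds

# Average speed over shorter and shorter intervals from t = 0:
for h in [1, 0.25, 0.1, 0.01, 0.001]:
    print(h, round(s(h) / h, 3))  # 44.0, 40.25, 39.5, 39.05, 39.005
# The estimates stay above 39 but sink toward it: 39 ft/s is the
# limit, and so the velocity at the instant itself.
```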
The “differential calculus” is the mathematical instrument by which, if you know the position of a body at each instant, you calculate its speed at each instant. The “integral calculus” deals with the opposite problem: given the direction and speed of motion of a body at each instant, to calculate where it will be each instant, given the place from which it started. The two together are called the “calculus.”
A simple example of a problem that needs the integral calculus is what is called the “curve of pursuit.” A farmer and his dog are in a square field, of which the corners are A, B, C, D. At first the dog is at A and the farmer is at B. The farmer begins to walk towards C, and at the same moment whistles to his dog, who runs at a uniform speed always towards where his master is at that moment. What curve will the dog describe?
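Though the exact curve needs the calculus, a rough numerical chase traces its shape; the unit square and the particular speeds below are assumptions for illustration:

```python
import math

# Unit square with corners A = (0, 0), B = (1, 0), C = (1, 1).
# The farmer walks from B toward C at speed 1; the dog starts at A
# at speed 2, always heading straight for the farmer.  A small-step
# (Euler) integration traces the curve of pursuit.
dt = 0.0001
t = 0.0
dog_x, dog_y = 0.0, 0.0
caught = False
while t < 5.0:
    farmer_x, farmer_y = 1.0, t             # farmer's position at time t
    dx, dy = farmer_x - dog_x, farmer_y - dog_y
    gap = math.hypot(dx, dy)
    if gap < 0.005:                          # caught, to within a step
        caught = True
        break
    dog_x += 2.0 * dt * dx / gap             # one step toward the farmer
    dog_y += 2.0 * dt * dy / gap
    t += dt
print(caught, round(t, 2))   # the dog catches the farmer around t = 0.66
```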
More important examples are derived from the motions of the planets. Kepler proved by observation that they move round the sun in ellipses, and he discovered a relation between the distances of different planets from the sun and the times it takes them to go round the sun. This enabled Newton, by the differential calculus, to infer the velocity of a planet at any point of its orbit; the velocity is not constant, but greatest when the planet is nearest to the sun. Then, using the differential calculus once more, he could calculate the planet’s acceleration at each instant—i.e. its change of velocity both in magnitude and direction. He found that every planet at every moment has an acceleration towards the sun which is inversely proportional to the square of its distance from the sun.
He then took up the inverse problem, which is one for the integral calculus. If a body has, at every moment, an acceleration towards the sun which is inversely proportional to the square of its distance from the sun, how will it move? He proved that it will move in a conic section. Observation shows that, in the case of the planets and certain comets, this conic section is an ellipse; in the case of certain other comets, it may be a hyperbola. With this his proof of the law of gravitation was complete.
It must not be supposed that the calculus applies only to change in time. It applies wherever one quantity is a continuous “function” of another. The notion of “function” is an important one, which I shall try to explain.
Given a variable quantity, another is said to be a “function” of it if, when the variable quantity is given, the value of the other is determinate. For instance, if you have to transport a quantity of oil by train, the number of tank-cars you will need is a “function” of the quantity of oil; if you have to feed an army, the quantity of food required is a “function” of the number of soldiers. If a body is falling in a vacuum, the distance it has fallen is a “function” of the time during which it has been falling. The number of square feet of carpet required in a square room is a “function” of the length of the sides, and so is the amount of liquid that can be put into a cubic container; in one case the function is the square, in the other the cube: a room whose sides are twice as long as those of another room will need four times as much carpet, and a cask which is twice as high as another will hold eight times as much liquid, if its other dimensions are increased in proportion.
Some functions are very complicated. Your income tax is a function of your income, but only a few experts know what function. Suppose some mathematically minded expert were to propose a simple function, for instance, that your income tax should be proportional to the square of your income. He might combine this with the proposal that no one’s income, after deduction of the tax, should exceed $25,000. How would this work out? The tax would have to be one hundred-thousandth part of the square of your income in dollars. On incomes of less than the square root of $1,000 (which is about $32), the tax would be less than a cent, and would not be collected; on $1,000 the tax would be $10; on $2,000, $40; on $10,000, $1,000 and on $50,000 it would be $25,000. After that, any increase of income would make you poorer. If you had an income of $100,000, the tax would be exactly equal to your income, and you would be penniless. I do not think it very likely that any one will advocate this plan.
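A sketch of this hypothetical tax in Python:

```python
# The hypothetical quadratic tax: one hundred-thousandth of the
# square of the income, in dollars.
def tax(income):
    return income ** 2 / 100_000

for income in [1_000, 2_000, 10_000, 50_000, 100_000]:
    print(income, tax(income))   # 10, 40, 1000, 25000, 100000 dollars
# Past $50,000, a larger income leaves less after tax:
assert 60_000 - tax(60_000) < 50_000 - tax(50_000)
assert tax(100_000) == 100_000   # the tax swallows the whole income
```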
Given any function of a variable x, a slight increase in x will be accompanied by a slight increase or decrease of the function unless the function is discontinuous. For instance, let x be the radius of a circle, and let the function concerned be the area of the circle, which is proportional to the square of the radius. If the radius is slightly increased, the area of the circle is increased; the increase is obtained by multiplying the increase of the radius by the circumference. The differential calculus gives the rate of the increase of the function for a given small increase of the variable. On the other hand, if you know the rate of increase of the function relative to the variable, the integral calculus tells you what the total increase or decrease of the function will be as the variable passes from one value to another. The simplest of important instances is that of a body falling in a vacuum. Here the acceleration is constant, that is to say, the increase of velocity in any given time is proportional to the time. Therefore the total velocity at any time is proportional to the length of time since the fall began. From this the integral calculus shows that the total distance fallen since the fall began is proportional to the square of the time since the fall began. This can be proved without the integral calculus, and was so proved by Galileo; but in more complicated cases the calculus is essential.
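Both halves of the calculus can be illustrated numerically (a rough sketch; the radius, the acceleration, and the step size are merely illustrative figures):

```python
import math

# Differential side: a small increase of the radius increases the circle's
# area at a rate very nearly equal to the circumference.
r, dr = 3.0, 0.0001
area = lambda radius: math.pi * radius ** 2
rate_estimate = (area(r + dr) - area(r)) / dr
circumference = 2 * math.pi * r
print(rate_estimate, circumference)  # nearly equal

# Integral side: accumulating a constant acceleration step by step
# reproduces Galileo's law that distance grows as the square of the time.
g, dt = 9.8, 0.0001
velocity = distance = 0.0
t = 0.0
while t < 2.0:
    velocity += g * dt         # velocity grows in proportion to time
    distance += velocity * dt  # distance accumulates the velocity
    t += dt
print(distance, 0.5 * g * 2.0 ** 2)  # numerical total vs. half g times time squared
```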
Mathematics is, at least professedly, an exact instrument, and when it is applied to the actual world there is always an unjustified assumption of exactness. There are no exact circles or triangles in nature; the planets do not revolve exactly in ellipses, and if they did we could never know it. Our powers of measurement and observation are limited. I do not mean that they have a definite limit; on the contrary, improvement in technique is continually lowering the limit. But it is impossible that any technique should leave no margin of probable error, because, whatever apparatus we may invent, we depend, in the end, upon our senses, which cannot distinguish between two things that are very closely similar. It is easy to prove that there are differences which we cannot perceive. Suppose, for instance, there are three closely similar shades of color, A, B, and C. It may happen that you cannot see any difference between A and B, or between B and C, but you can see a difference between A and C. This shows that there must be imperceptible differences between A and B and between B and C. The same would be true if A, B, C were three nearly equal lengths. The measurement of lengths, however it may be improved, must always remain only approximate, though the approximation may be very close.
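The argument about imperceptible differences can be sketched with a hypothetical threshold below which no difference is seen (the numbers are arbitrary):

```python
THRESHOLD = 1.0  # smallest difference the senses can detect (hypothetical units)

def look_same(x, y):
    """True when two shades are too close for the eye to tell apart."""
    return abs(x - y) < THRESHOLD

A, B, C = 0.0, 0.7, 1.4  # three closely similar shades
print(look_same(A, B))   # no visible difference between A and B
print(look_same(B, C))   # no visible difference between B and C
print(look_same(A, C))   # yet A and C visibly differ
```

Indistinguishability, unlike equality, is not transitive; that is the whole of the argument.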
For this reason, careful scientific measurements are always given with a “probable error.” This means that the result given is as likely as not to be out by the amount of the assigned probable error. It is practically certain to be out by something, but very unlikely to be out by much more than the probable error. I wish men in other fields would admit that their opinions were subject to error; but in fact people are most dogmatic where there is least reason for certainty.
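What a probable error means can be illustrated by simulation. For errors of the usual bell-shaped kind, the probable error is about 0.6745 of the standard deviation, and about half of the readings fall within it of the mean (a sketch with invented measurements):

```python
import random
import statistics

random.seed(2)
true_length = 100.0
# simulate many measurements scattered around the true value
readings = [random.gauss(true_length, 0.5) for _ in range(10_000)]

mean = statistics.fmean(readings)
# for normally distributed errors the probable error is about 0.6745 sigma
probable_error = 0.6745 * statistics.stdev(readings)

within = sum(abs(r - mean) <= probable_error for r in readings)
print(f"result: {mean:.3f} with probable error {probable_error:.3f}")
print(f"{within / len(readings):.0%} of readings fall within the probable error")
```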
The reader who recalls our definition of “velocity” will see that it assumes an impossible minuteness of observation. Empirically, there can be no such thing as the velocity at an instant, because there is a lower limit to the times and distances that we can measure. Suppose we could carry our technique so far that we could measure a hundred-thousandth of a second and a hundred-thousandth of a centimeter. We could then tell how far a very small body had moved in a hundred-thousandth of a second, unless it was moving less than a centimeter a second. But we could not tell what it had been doing during that very short time; it might have been traveling uniformly, it might have been going slower at first and then faster, or vice versa, and it might have done the whole distance in one sudden jump. This last hypothesis, which seems bizarre, is actually suggested by quantum theory as the best explanation of certain phenomena. We are in the habit of taking it for granted that space and time and motion are continuous, but we cannot know this, because very minute discontinuities would be imperceptible. Until lately, the hypothesis of continuity worked; now it begins to be doubtful whether it has this merit where very minute phenomena are concerned.
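The point can be put schematically: an instrument that reports only the distance covered over a finite interval cannot distinguish motions that agree at the endpoints, however differently they behave in between (the three motions below are invented illustrations):

```python
def average_velocity(position, t0, t1):
    """All any instrument can report: distance covered divided by time taken."""
    return (position(t1) - position(t0)) / (t1 - t0)

# three motions that agree at t=0 and t=1 but behave differently in between
uniform = lambda t: t                       # steady motion throughout
speeding_up = lambda t: t * t               # slow at first, then faster
sudden_jump = lambda t: 0 if t < 1 else 1   # the whole distance in one jump

for motion in (uniform, speeding_up, sudden_jump):
    print(average_velocity(motion, 0, 1))   # the same in every case
```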
The exactness of mathematics is an abstract logical exactness which is lost as soon as mathematical reasoning is applied to the actual world. Plato thought—and many followed him in this—that, since mathematics is in some sense true, there must be an ideal world, a sort of mathematician’s paradise, where everything happens as it does in the text-books of geometry. The philosopher, when he gets to heaven (where only philosophers go, according to Plato), will be regaled by the sight of everything that he has missed on earth—perfectly straight lines, exact circles, completely regular dodecahedra, and whatever else is necessary to perfect his bliss. He will then realize that mathematics, though not applicable to the mundane scene, is a vision, at once reminiscent and prophetic, of the better world from which the wise have come and to which they will return. Harps and crowns had less interest for the Athenian aristocrat than for the humble folk who made up the Christian mythology; nevertheless Christian theologians, as opposed to the general run of Christians, accepted much of Plato’s account of heaven. When, in modern times, this sort of thing became incredible, an exactness was attributed to Nature, and men of science felt no doubt that the universe works precisely in Newtonian fashion. For Newton’s world was one that God could have made; a sloppy, inaccurate, more-or-less-so kind of world, it was felt, would be unworthy of Him. Only in quite recent times has the problem of mathematical exactness, as confronted with the approximate character of sensible knowledge, come to be stated in ways wholly free from all taint of inherited theology.
The result of recent investigations of this problem is to bring in approximateness and inexactitude everywhere, even into the most traditionally sacred regions of logic and arithmetic. To the older logicians, matters were simplified by their belief in immutable natural species. There were cats and dogs, horses and cows; two of each kind had been created by God, two of each kind had gone into the ark, two of a kind, breeding together, always produced offspring of the same kind. And as for Man, was he not distinguished from the brutes by his possession of reason, an immortal soul, and a sense of right and wrong? And so the meanings of such words as “dog,” “horse,” “man” were perfectly definite, and any living thing to which one of these words was applicable was separated by a finite gulf from all the living things of other kinds. To the question “is this a horse?” there was always an unequivocal and indubitable answer. But to the believer in evolution all this is changed. He holds that the horse evolved gradually out of animals that were certainly not horses, and that somewhere on the way there were creatures that were not definitely horses and not definitely not horses. The same is true of man. Rationality, so far as it exists, has been gradually acquired. Of geological specimens it is impossible to judge whether they had immortal souls or a moral sense, even granting that we have these advantages. Various bones have been found which clearly belonged to more or less human bipeds, but whether these bipeds should be called “men” is a purely arbitrary question.
It thus appears that we do not really know what we mean by ordinary every-day words such as “cat” and “dog,” “horse” and “man.” There is the same sort of uncertainty about the most accurate terms of science, such as “meter” and “second.” The meter is defined as the distance between two marks on a certain rod in Paris, when the rod is at a certain temperature. But the marks are not points, and temperature cannot be measured with complete accuracy. Therefore we cannot know exactly how long the meter is. Concerning most lengths, we can be sure that they are longer than a meter, or sure that they are shorter. But there remain some lengths of which we cannot be sure whether they are longer or shorter than a meter, or exactly a meter long. The second is defined as the time of swing of a pendulum of a certain length, or as a certain fraction of the day. But we cannot measure accurately either the length of the pendulum or the length of the day. Thus there is just the same trouble about meters and seconds as about horses and dogs, namely that we do not know exactly what the words mean.
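The predicament about the meter can be put schematically: an apparatus of finite precision can classify most lengths, but must leave a marginal band undecided (the precision figure is hypothetical):

```python
PRECISION = 0.001  # finest difference our apparatus can resolve, in meters (hypothetical)

def compare_to_meter(length):
    """The only verdicts a finite-precision measurement allows."""
    if length > 1 + PRECISION:
        return "longer than a meter"
    if length < 1 - PRECISION:
        return "shorter than a meter"
    return "cannot be decided"

print(compare_to_meter(1.5))     # clearly longer
print(compare_to_meter(0.5))     # clearly shorter
print(compare_to_meter(1.0004))  # within the undecidable margin
```

Improving the apparatus narrows the undecidable band but never abolishes it.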
“But,” you may say, “none of this shakes my belief that 2 and 2 are 4.” You are quite right, except in marginal cases—and it is only in marginal cases that you are doubtful whether a certain animal is a dog or a certain length is less than a meter. Two must be two of something, and the proposition “2 and 2 are 4” is useless unless it can be applied. Two dogs and two dogs are certainly four dogs, but cases may arise in which you are doubtful whether two of them are dogs. “Well, at any rate there are four animals,” you may say. But there are microorganisms concerning which it is doubtful whether they are animals or plants. “Well then, living organisms,” you say. But there are things of which it is doubtful whether they are living organisms or not. You will be driven into saying: “Two entities and two entities are four entities.” When you have told me what you mean by “entity,” we will resume the argument.
Thus concepts, in general, have a certain region to which they are certainly applicable, and another to which they are certainly inapplicable, but concepts which aim at exactness, like “meter” and “second,” though they have a large region (within the approximate field) to which they are certainly inapplicable, have no region at all to which they are certainly applicable. If they are to be made certainly applicable, it must be by sacrificing the claim to exactness.
The outcome of this discussion is that mathematics does not have that exactness to which it apparently lays claim, but is approximate like everything else. This, however, is of no practical importance, since in any case all our knowledge of the sensible world is only approximate.
I have gone into this question because, to many people, mathematics seems to make a claim to a sort of knowledge superior in kind to that of every day, and this claim, in those who are persuaded that it is not legitimate, rouses a resistance which interferes with their capacity to assimilate mathematical reasoning. The superior certainty of mathematics is only a matter of degree and is due, in so far as it exists, to the fact that mathematical knowledge is really verbal, although this is concealed by its complicated character.
What I have said about exactness so far is not quite the whole of the truth on this question. We cannot know the world exactly, it is true, but we do know that, if we suppose it to be as the mathematicians say, the results are correct as far as we can judge. That is to say, mathematics offers the best working hypotheses for understanding the world. Whenever the current hypotheses seem more or less wrong, it is new mathematics that supply the necessary corrections. Newton’s law of gravitation held the field for two and a half centuries, and was then amended by Einstein; but Einstein’s universe was quite as mathematical as Newton’s. Quantum theory has developed a physics of the atom which is very different from classical physics, but it still works with mathematical symbols and equations. The apparatus of conceptions and operations invented by the pure mathematicians is indispensable in explaining the multiform occurrences in the world as due to the operation of general laws; the only hypotheses that have a chance of being true, in the more advanced sciences, are such as would not occur to any one but a mathematician.
If you wish, therefore, to understand the world theoretically, so far as it can be understood, you must learn a very considerable amount of mathematics. If your interests are practical, and you only wish to manipulate the world, whether for your own profit or for that of mankind, you can, without learning much mathematics, achieve a great deal by building on the work of your predecessors. But a society which confined itself to such work would be, in a sense, parasitic on what had been already discovered. This may be illustrated from the history of radio. Nearly 100 years ago Faraday made a great many ingenious experiments on electromagnetism, but being no mathematician he could not invent really comprehensive hypotheses to explain his results. Then came Clerk Maxwell, who was not an experimenter but a first-rate mathematician. He inferred from Faraday’s experiments that there ought to be electromagnetic waves, and that light must be composed of such electromagnetic waves as had frequencies to which eyes are sensitive. With him this was pure theory. His work belongs to the 70’s of the last century. About 20 years after his time or rather less, a German physicist named Hertz, who was both a mathematician and an experimenter, decided to test Maxwell’s theory practically, and invented an apparatus by which he could manufacture electromagnetic waves. It turned out that they traveled with the velocity of light, and had all the properties that Maxwell had said they ought to have. Last came Marconi, who made Hertz’s invention such as could be used outside the laboratory, for it is Hertz’s waves that are used in the radio. This whole development illustrates admirably the interaction of experiment and theory upon which progress in science depends.
Finally, mathematics affords, to those who can appreciate it, very great pleasures to which no moralist can object. In the actual manipulation of the symbols there is the same kind of enjoyment that people find in chess, but it is dignified by being useful and not merely a game. In the sense of understanding something of natural processes there is a feeling of the power of human thought, and in the work of the best mathematicians there is a kind of clear-cut beauty which shows what men can achieve when they free themselves from cowardice and ferocity and from enslavement to the accidents of their corporeal existence.