INTRODUCTION

Science is nothing without experiments. As the Nobel Prize-winning physicist Richard Feynman said: ‘In general, we look for a new law by the following process: First we guess it; then we compute the consequences of the guess to see what would be implied if this law that we guessed is right; then we compare the result of the computation to nature, with experiment or experience [observation of the world], compare it directly with observation, to see if it works. If it disagrees with experiment, it is wrong. In that simple statement is the key to science. It does not make any difference how beautiful your guess is, it does not make any difference how smart you are, who made the guess, or what his name is — if it disagrees with experiment, it is wrong.’1

[Image: American physicist Richard Feynman (1918–1988). © Physics Today Collection/American Institute of Physics/Science Photo Library]

Those words – if it disagrees with experiment, it is wrong – provide the simplest summary of what science is all about. People sometimes wonder why it took so long for science to get started. After all, the Ancient Greeks were just as clever as us, and some of them had both the curiosity and the leisure to philosophise about the nature of the world. But, with a few exceptions, that is all they did – philosophise. We do not intend to denigrate philosophy by this remark; it has its own place in the roll of human achievements. But it is not science. For example, these philosophers debated the question of whether a light object and a heavy object dropped at the same time would hit the ground at the same time, or whether the heavier object would fall more quickly. But they did not test their ideas by dropping objects with different weights from the top of a tall tower; that experiment would not be carried out until the seventeenth century (although not, as we shall explain, by Galileo; see here). Indeed, it was just at the beginning of the seventeenth century that the English physician and scientist* William Gilbert (see here) first spelled out clearly the scientific method later summed up so succinctly by Feynman. In 1600, writing in his book De Magnete, Gilbert described his work, notably concerning magnetism, as ‘a new kind of philosophizing’, and went on: ‘If any see fit not to agree with the opinions here expressed and not to accept certain of my paradoxes, still let them note the great multitude of experiments and discoveries … we have dug them up and demonstrated them with much pains and sleepless nights and great money expense. Enjoy them you, and if ye can, employ them for better purposes … Many things in our reasonings and our hypotheses will perhaps seem hard to accept, being at variance with the general opinion; but I have no doubt that hereafter they will win authoritativeness from the demonstrations themselves.’2

[Image: William Gilbert (1544–1603), English physician and physicist. In 1600 Gilbert published De Magnete (Concerning Magnetism), a pioneering study in magnetism, which contained the first description of the scientific method, and greatly influenced Galileo. © National Library of Medicine/Science Photo Library]

In other words, if it disagrees with experiment, it is wrong. The reference to ‘great money expense’ also strikes a chord in the modern age, when scientific advances seem to require the construction of expensive instruments, such as the Large Hadron Collider at CERN, probing the structure of matter on the smallest scale, or the orbiting automatic observatories that reveal the details of the Big Bang in which the Universe was born. This highlights the other key to the relatively late development of science. It required (and requires) technology. There is, in fact, a synergy between science and technology, with each feeding off the other. Around the time that Gilbert was writing, lenses developed for spectacles were adapted to make telescopes, used to study, among other things, the heavens. This encouraged the development of better lenses, which benefited, not least, people with poor eyesight.

A more dramatic example comes from the nineteenth century. Steam engines were initially developed largely by trial and error. The existence of steam engines inspired scientists to investigate what was going on inside them, often out of curiosity rather than any deliberate intention to design a better steam engine. But as the science of thermodynamics developed, inevitably this fed back into the design of more efficient engines. However, the most striking example of the importance of technology for the advancement of science is one that is far less obvious and surprises many people at first sight. It is the vacuum pump, in its many guises down the ages. Without efficient vacuum pumps, it would have been impossible to study the behaviour of ‘cathode rays’ in evacuated glass tubes in the nineteenth century, or to discover that these ‘rays’ are actually streams of particles – electrons – broken off from the supposedly unbreakable atom. And coming right up to date, the beam pipes in the Large Hadron Collider form the biggest vacuum system in the world, within which the vacuum is more perfect than the vacuum of ‘empty’ space. Without vacuum pumps, we would not know that the Higgs particle (see here) exists; in fact, we would not have known enough about the subatomic world to even speculate that such an entity might exist.

[Image: Robert Hooke’s hand-crafted microscope. © Science Source/Science Photo Library]

But we know that atoms and even subatomic particles exist, in a much more fundamental way than the Ancient Greek philosophers who speculated about such things did, because we have been able (and, equally significantly, we have been willing) to carry out experiments to test our ideas. The ‘guesses’ that Feynman refers to are more properly referred to as hypotheses. Scientists look at the world around them, and make hypotheses (guesses) about what is going on. For example, they hypothesise that a heavy object and a lighter object dropped at the same time will hit the ground at different times. Then they drop objects from a high tower, and find that the hypothesis is wrong. There is an alternative hypothesis: that heavy and light objects fall at the same rate. Experiment shows that this is correct, so this hypothesis gets elevated to the status of a theory. A theory is a hypothesis that has been tested by experiment and has passed those tests. Human nature being what it is, of course, it is not always so straightforward and clear-cut. Adherents to the failed hypothesis may try desperately to find a way to shore it up and explain things without accepting the experimental evidence. But in the long run, the truth will out – if only because the die-hards really do die.

Non-scientists sometimes get confused by this distinction between a hypothesis and a theory, not least because many scientists are guilty of sloppy use of the terminology. In everyday language, if I have a ‘theory’ about something (such as the reason why some people like Marmite and others don’t) this is really just a guess, or a hypothesis; this is not what the word ‘theory’ means in science. Critics of Darwin’s theory who do not understand science sometimes say that it is ‘only a theory’, with the implication ‘my guess is as good as his’. But Darwin’s theory of natural selection starts from the observed fact of evolution, and explains how evolution occurs. In spite of what those critics might think, it is more than a hypothesis – not just a guess – because it has been tested by experiment, and has passed those tests. Darwin’s theory of evolution by natural selection is ‘only’ a theory in the same way that Newton’s theory of gravity is ‘only’ a theory. Newton started from the observed facts of the way things fall or orbit around the Earth and the Sun, and developed an idea of how gravity works, involving an inverse-square law of attraction. Experiments (and further observations, which throughout this book we include under the heading ‘experiments’) confirmed this.
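
For readers who like to see the mathematics, the law can be written in modern notation (the notation is ours; Newton himself framed his arguments geometrically):

\[
F = \frac{G\,m_1\,m_2}{r^2}
\]

where F is the strength of the attractive force, m_1 and m_2 are the two masses, r is the distance between their centres, and G is a universal constant of nature. Double the distance and the force drops to a quarter of its former value – which is exactly what ‘inverse square’ means.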

[Image: Charles Darwin’s illustration, from his book Fertilisation of Orchids, of Cypripedium (slipper orchid, Paphiopedilum), beneath a photograph of an early variety of Sandford orchid cultivar. © Paul D. Stewart/Science Photo Library]

Gravity provides another example of how science works. Newton’s theory passed every test at first, but as observations improved it turned out that the theory could not explain certain subtleties in the orbit of Mercury, the closest planet to the Sun, which orbits where gravity is strong – that is, where there is a strong gravitational field. In the twentieth century, Albert Einstein came up with an idea, which became known as the general theory of relativity, that explained everything that Newton’s theory explained, but which also explained the orbit of Mercury and correctly predicted the way light gets bent as it passes near the Sun (see here). Einstein’s theory is still the best theory of gravity we have, in the sense that it is the most complete. But that does not mean that Newton’s theory has to be discarded. It still works perfectly within certain limits, such as in describing how things move under the influence of gravity in less extreme circumstances, in the so-called ‘weak field approximation’, and is fine for calculating the orbit of the Earth around the Sun, or for calculating the trajectory of a space probe sent to rendezvous with a comet.
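
To give a rough sense of what ‘weak field’ means (the numbers here are our own back-of-envelope illustration, not part of the historical record), Newton’s theory is an excellent approximation whenever the dimensionless quantity

\[
\frac{GM}{rc^2} \ll 1
\]

where M is the mass of the Sun, r is the distance of the planet from the Sun, G is the gravitational constant, and c is the speed of light. Even for Mercury this number is only about 2.5 parts in a hundred million; but that tiny deviation from Newtonian behaviour, accumulated orbit after orbit, is what shows up as the anomalous shift in Mercury’s perihelion that Einstein’s theory explains.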

Contrary to what is sometimes taught, science does not proceed by revolutions, except on very rare occasions. It is incremental, building on what has gone before. Einstein’s theory builds on, but does not replace, Newton’s theory. The idea of atoms as little hard balls bouncing off one another works fine if you want to calculate the pressure of a gas inside a box, but has to be modified if you want to calculate how electrons jumping about within atoms produce the coloured lines of a spectrum of light. No experiment will ever prove the theories of Einstein or Darwin ‘wrong’ in the sense that they have to be thrown away or require us to start again, but they may be shown to be incomplete, in the way Newton’s theory was shown to be incomplete. Better theories of gravity or evolution would need to explain all the things that the present theories explain, and more besides.

Don’t just take our word for it. In his book Quantum Theory, Paul Dirac, possibly the greatest genius of the quantum pioneers, wrote: ‘When one looks back over the development of physics, one sees that it can be pictured as a rather steady development with many small steps and superposed on that a number of big jumps. These big jumps usually consist in overcoming a prejudice … And then a physicist has to replace this prejudice by something more precise, and leading to some entirely new conception of nature.’3

All of this should be clear from the selection of experiments that we have chosen in order to mark the historical growth of science, starting with a couple of those rare pre-1600 exceptions that did amount to more than mere philosophising, and coming up to date with the discovery of what the Universe at large is made of. This choice is necessarily a personal one, and limited by the constraint of choosing exactly 100 experiments. There is so much more that we could have included. But one obvious feature of the story, which we realised as we were researching this book, is not a matter of personal choice, but another example of the way science works. Some of the experiments reported here come in clusters, with several in a similar area of science in a short span of time – for example, in the development of atomic/quantum physics. This is what happens when scientists succeed in ‘overcoming a prejudice’. When a breakthrough is made, it leads to new ideas (new ‘guesses’, as Feynman would have said, but, crucially, informed guesses) and new experiments, which tumble out almost on top of each other until that seam is exhausted.

A problem for the non-specialist is that the information on which those guesses are based is itself based on the whole edifice of science, a series of experiments going back for centuries. The vacuum in the Large Hadron Collider has its origins in the work of Evangelista Torricelli in the seventeenth century (see here). But Torricelli could never have imagined the existence of the Higgs particle, let alone an experiment to detect it. The first steps in such a series are relatively easy to understand, even for non-scientists, not least thanks to the successes of science over the years. It is now ‘obvious’ to us that objects with different weights will fall at the same rate, just as it was ‘obvious’ to the ancients that they would not. But when it gets to the Higgs particle and the composition of the Universe, unless you have a degree (or two) in physics, it may be far from obvious that the story makes sense. At some level, things have to be taken on trust. But the key to that trust is that everything in the scientific world view is based on experiment, by which term we include observations of phenomena predicted by theories and hypotheses, such as the bending of light as it goes past the Sun (see here). If you find that some of the concepts described here fly in the face of common sense, remember what Gilbert said. They may ‘seem hard to accept, being at variance with the general opinion’; but they ‘win authoritativeness from the demonstrations [experiments] themselves’. And above all, if it disagrees with experiment, it is wrong.

* The term ‘scientist’ was not coined until much later, but we shall use it for convenience to describe all the thinkers or ‘natural philosophers’ of centuries past.