During the 19th century, physics became weightier in several senses. The number of imponderables fell almost to zero. Physics became a recognized profession and its practitioners ‘physicists’. It and they acquired special training facilities in university institutes and technical schools that sprang up in Europe and America after 1870 primarily to train students as schoolteachers and electrical engineers (see Figure 20). At the same time, the perpetually shifting border between physics and mathematics moved to make room for theorists; and again, as happened with the discovery of the gas types, fertile concepts, such as the ion and the table of elements, came over from chemistry.
20. A physics institute around 1900. The size, fit with local architecture, and arrangement of lecture halls, laboratories, and utilities at the University of Manchester were typical of institutes built around 1900. Rutherford was professor and director there from 1907 to 1919.
Nineteenth-century physics ended with a new standard model or project more Cartesian than Newtonian. The new project penetrated more deeply than its predecessor, which had obtained superficial unity with similarly acting, but distinct imponderables; whereas it premised an ultimate reduction to the same matter in diverse modes of motion. Once again Descartes did battle with Aristotle, accomplished a mighty synthesis, and prepared the way for something new. The physicists of the early 20th century admired, and set aside, their 19th-century accomplishments as ‘classical’, and built a quantum science of atoms and molecules that, they claimed, contained all of physics, and chemistry too, ‘in principle’.
From around 1850, physics increasingly made good on Bacon’s promise that experimental science would improve the human condition. New industries competed to supply electric communication, light, and power, and to invent electrical appliances for the home and the workplace. New governmental agencies, national and international, regulated services, and standardized products. Old universities introduced new curricula to compete with technical higher schools and replaced academies as the primary loci of research. And societies of physicists sprang up to see to the professional interests of their members.
Newton’s rays of light had had a rival since 1690, when Huygens explained double refraction by assuming light to be a pressure wave in a world-filling ether. The first wave theory competitive with Newton’s ray theory did not appear for a century. Then around 1800 Thomas Young, having earned a medical degree in Germany with a thesis on sound and hearing, analogized the medium supposed to transmit light to a thin air capable of supporting acoustic vibrations. Thus he modelled the interference patterns of light rays passing through two parallel narrow slits as constructive and destructive combinations of sound waves. His model met resistance in a phenomenon foreign to sound: polarization.
The need to capture polarization became urgent when French physicists, expecting to refute Young’s theory, unexpectedly confirmed an apparently absurd consequence of it. The absurdity, discovered by Augustin Fresnel, a graduate of the École polytechnique, was that the patterns produced by light falling on a screen through a minute hole could have dark blots at their centres. Young and Fresnel then independently suggested that the light-bearing medium could account for polarization if it oscillated at right angles to the direction of the wave. To transmit these ‘transverse’ vibrations, the ‘luminiferous ether’ had to behave like an elastic solid rather than a thin air. Physicists fatigued themselves during the rest of the century imagining ether rigid enough to propagate light yet soft enough to pass the planets. Nonetheless, most knowledgeable people had dropped the light particle in favour of ether vibrations by 1830.
At the same time, and following a similar pattern, the magnetic fluids also disappeared. Again an experimental demonstration made on the periphery (Paris then being the centre of physics) prompted a French mathematician to a destructive generalization. The actors were H. C. Ørsted (Copenhagen), who discovered that a current-carrying wire could act at a distance on a magnetic needle (1820), and A.-M. Ampère (Paris), who showed that a circular current behaves as a magnet and deduced that magnetism arises from electricity in motion (1822, 1827). This model implied the existence of distance forces as awkward as an elastic-solid vacuum: the magnetic force of a current lies not in its direction, but in circles around it, and the force between moving droplets of electrical fluid depends on their relative velocities as well as on their separation. Furthermore, to include the generation of an electric current by a changing magnetic force (1831), the most famous of the many famous discoveries of Michael Faraday (Royal Institution, London), the force-law must also involve the relative acceleration of the droplets.
In an ironic reversal of polarity, the proponents of these peculiar distance forces were continental physicists, notably Wilhelm Weber (Göttingen) and Hermann von Helmholtz (Königsberg, later Bonn and Berlin), whose predecessors had regarded Newton’s gravity as unintelligible, whereas Newton’s countrymen, Faraday and his followers, tried to do without distance forces altogether. Faraday located electricity in a special medium he supposed to exist in the space between electrified bodies. The stresses and strains in this medium or ‘field’ constituted, and conveyed, what appeared to ordinary physicists as forces acting at a distance between ‘charged’ or ‘magnetized’ bodies. William Thomson, later Lord Kelvin (Glasgow), translated Faraday’s intuitions into a dynamical picture portraying magnetic forces as vortices, and electric forces as linear flows, in the field. In the 1860s James Clerk Maxwell (Aberdeen and London, later Cambridge) worked out a dynamical model of Faraday’s field capable of representing most known electrical and magnetic phenomena. On this expendable scaffold, he raised the edifice of an enduring electrodynamics, a set of relations (‘Maxwell’s equations’, 1866) linking electric and magnetic forces and their sources. For those who followed the Faraday–Maxwell line, electrical fluids had no better claim to existence than magnetic fluids or corpuscles of light.
Nothing, therefore, could have been more gratifying to the grand-unifying physicist than to learn that Faraday’s field and the luminiferous ether were one and the same. In 1864, Maxwell calculated from known electric and magnetic parameters that the speed with which the electromagnetic disturbances predicted by his model travel through space came close to the latest measurement of the speed of light. In 1887, Helmholtz’s former student Heinrich Hertz (Karlsruhe) generated and detected the electromagnetic waves Maxwell had foreseen. The resultant grand synthesis complicated the task of ether-field theorists, who now required an elastic-solid medium filling all space, competent to produce all the phenomena of electricity, magnetism, and light, and transparent to gravitating bodies.
Like light particles, caloric had had a challenger in the 18th century that supposed heat to be a mode of motion. Count Rumford’s production of heat from grinding cannon muzzles, though pertinent to the challenge, had not won it. The decisive experiments against the caloric theory, which date from the 1840s, also took place in an industrial setting, a brewery whose owner, James Prescott Joule, controlled his product by exact thermometry. Joule had studied with John Dalton and knew how to improve his leisure. Applying his instruments to engines, he showed that they consumed heat in raising a weight, and that a certain quantity of mechanical effort, or a certain expenditure of zinc in a battery, always generated the same amount of heat. Unlike matter, heat could be created and destroyed! Joule’s conclusion had to overcome not only scepticism about his thermometers but also a brilliant theory of steam engines.
This theory, which applied to all engines, regardless of fuel, mechanism, and working substance, assumed the indestructibility of caloric. Its author, Sadi Carnot, a polytechnicien like Fresnel, calculated the maximum efficiency possible for any heat engine as a function of the temperatures of the boiler and condenser. To escape a possible perpetual motion, the most efficient machine must be reversible; and to be reversible, its operations must never allow contact between parts maintained or working at different temperatures. From the ideal cycle of operations he invented to satisfy this condition, he deduced that the engine’s efficiency, the ratio of the weight lifted to the amount of caloric fluid Q employed, should be proportional to the difference of temperature ΔT between the boiler and the condenser.
The clarity, ingeniousness, and plausibility of Carnot’s analysis of 1824 impressed William Thomson. But he also inclined to Joule’s theory, which, in contrast to Carnot’s, destroyed caloric in doing mechanical work. Mathematics indicated a path to reconciliation. Thomson saw that Joule’s expression for efficiency would be the same as Carnot’s if the conserved quantity was not Q, but Q/T. Two principles, not one, were involved: (1) mechanical work and electricity could be converted into heat and vice versa without loss (Joule); and (2) a heat engine working reversibly conserves the quantity Q/T (Carnot).
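In modern notation (a reconstruction, not Thomson's own working), the reconciliation can be put in three lines: treat Q/T as the conserved quantity, and Joule's equivalence fixes both the work delivered and the efficiency of the ideal engine.

```latex
% A reversible engine takes heat Q_1 from the boiler at temperature T_1 and
% rejects Q_2 to the condenser at T_2. Carnot's conservation, reread as Q/T:
\frac{Q_1}{T_1} = \frac{Q_2}{T_2}
% Joule's principle makes the work delivered the difference of the heats:
W = Q_1 - Q_2 = Q_1\left(1 - \frac{T_2}{T_1}\right)
% so the efficiency of the ideal engine depends only on the temperatures:
\eta = \frac{W}{Q_1} = \frac{T_1 - T_2}{T_1} = \frac{\Delta T}{T_1}
```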
Rudolf Clausius (Bonn) also reconciled Carnot’s conservation theory with Joule’s transformation theory, which Clausius had in the form developed independently of Joule by the physician Julius Robert Mayer. Clausius called the coin of conversion among the forces of nature ‘energy’. The mysterious quantity Q/T—nature’s fee for the conversion—he called ‘entropy’. In the ideal reversible case, the fee is zero. In practice, small amounts of heat are lost so as to render all ‘real’ processes irreversible. In Clausius’ formulation, the energy of the universe stays constant while its entropy strives for a maximum (‘the laws of thermodynamics’, 1865). Thus considerations arising in industrial settings drove the most plausible and ancient of the imponderable fluids, the matter of fire and heat, out of physics.
The concept of heat as motion proved as far-reaching as the synthesis of light and electrodynamics. It made possible a quantitative link between mechanical concepts and temperature via parameters that measured molecules and allowed mathematicians passage between the macro and microworlds. The wormhole to this wonderland lay in the equation for a ‘perfect gas’, a legendary medium described by Boyle’s law as generalized by Laplace’s protégé J. L. Gay-Lussac (Paris): pressure = (R × temperature)/volume. The constant R was the wormhole. A simple model (‘the kinetic theory of gases’), in which the pressure exerted by a standard number of molecules N arises from their bombardment of the walls of a cubical container, gave an astonishingly simple ‘law’: the kinetic energy of a perfect gas molecule is 3kT/2, where k, the bond between mechanics and thermodynamics, and the measurable and the molecular, is R/N.
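The calculation behind this ‘law’ can be compressed into a few lines, under the usual idealizations (point molecules, elastic collisions with the walls, no forces between molecules); the mode of reasoning, if not the notation, is that of the kinetic theorists.

```latex
% N molecules of mass m bombard the walls of a box of volume V; the standard
% counting gives the pressure in terms of the mean squared molecular speed:
pV = \tfrac{1}{3} N m \langle v^2 \rangle
% Comparison with the perfect-gas law for the same quantity of gas, pV = RT,
% yields the average kinetic energy of a single molecule:
\tfrac{1}{2} m \langle v^2 \rangle = \tfrac{3}{2}\,\frac{R}{N}\,T = \tfrac{3}{2} kT,
\qquad k \equiv \frac{R}{N}
```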
This simple theory ignored collisions among gas molecules, distributions in velocity, mean free paths, and, most importantly, a statistical treatment of equilibrium. Clausius, Maxwell, and Ludwig Boltzmann (Graz and Vienna) added these realistic touches, and, after heroic calculations, recovered the old result: each degree of freedom of motion of a molecule has an energy kT/2 at equilibrium (‘the equipartition of energy’). Molecules able to move only in translation in three dimensions have an average energy of 3kT/2. The specific heat of a gas sample each of whose N molecules has f degrees of freedom would be (f/2)R. The formula worked well—for monatomic molecules.
Heat equilibrium occurs when entropy is a maximum. This simple statement hides a deep difficulty. If a gas is a perfect mechanical system, its motion should be reversible. Hence a statistical-mechanical representation of entropy appeared impossible. Boltzmann countered that, owing to the colossal number of molecules in play, departures from equilibrium will almost certainly be reversed instantaneously (1872, 1877, the ‘H-theorem’). Entropy S consequently could be understood as a function of the probability W of finding the system in a particular state. The wormhole k, rechristened ‘Boltzmann’s constant’, links not only the macro and the microworlds, but also the living and the dead. It appears on Boltzmann’s tombstone in the form S = k log W.
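Why a logarithm? A one-line check, using nothing beyond the quantities already introduced, shows that Boltzmann’s form makes entropies add while probabilities multiply, as the statistical picture requires.

```latex
% For two independent subsystems the number of ways W multiplies, W = W_1 W_2,
% while entropy, like energy, should simply add; the logarithm does the job:
S = k \log W = k \log (W_1 W_2) = k \log W_1 + k \log W_2 = S_1 + S_2
```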
The successes of the gas theory and Maxwell’s electrodynamics, the equivalence of all energy to mechanical energy underwritten by the first law of thermodynamics, and the relative ease with which physicists reasoned with the intuitive concepts of matter and motion combined to return 19th-century physicists to the dream of Descartes. The president of the French Physical Society, Alfred Cornu (Paris), opened the first international congress of physicists, held in Paris in 1900, with the reassurance that Descartes ‘hovered’ over them. The great mathematician Henri Poincaré (Paris) gave the keynote address. He advised his auditors against developing a fondness for a revised-Cartesian or any other world system. Rather, they should collect the facts of experiment and arrange them for consultation in the most convenient manner. A good physicist was more librarian than philosopher.
Poincaré’s unflattering estimate of the epistemological status of physics corresponded to the considered opinion of many physicists. Although they might act and speak as if they sought the truths of nature, when they philosophized they acknowledged that the goal was beyond them. A prime impulse to this neo-scepticism was the demotion, by Gustav Kirchhoff (Heidelberg), of the most secure branch of physics, analytical mechanics, from a true account of matter and motion to a mere description of it. This ‘descriptionism’ turns up in the influential epistemologies of physicists who took a broad view of their discipline, notably Ernst Mach (Vienna), and also Joseph Larmor (Cambridge), who sponsored an English translation of Poincaré’s writings.
The metaphor of the library suited a large part of 19th-century physics, which boasted many new laws or effects easily entered into Poincaré’s imaginary catalogue. Representative entries concerning metals might read, alphabetically: ‘Electric current, generated by heating a junction of dissimilar metals’, Thomas Seebeck, 1822; ‘Heat capacity, specific, inversely proportional to atomic weight’, Pierre Louis Dulong and Alexis Thérèse Petit, 1819; ‘Heat conductivity, proportional to T times electrical conductivity’, Gustav Wiedemann and Rudolf Franz, 1853; and ‘Paramagnetism, proportional to 1/T ’, Pierre Curie, 1895. The inverse of these effects also had entries, for example, ‘Heat developed by an electric current passing a bimetallic junction’, J. C. A. Peltier, 1834.
The library added new shelves with the discoveries that mechanical forces could create electricity (piezoelectricity) and magnetism (magnetostriction). Pursuing such reciprocal effects, James Alfred Ewing, who established the frontier between physics and engineering at Cambridge, discovered hysteresis, the lagging of magnetic effects behind their immediate causes (1882–5). The most exact entries in the library of physics around 1900 were the characteristic wavelengths of the series of spectral lines emitted by the elements when heated sufficiently. The first such survey, made by Kirchhoff and Robert Bunsen (Heidelberg) in 1860, employed salts vaporized in Bunsen’s burner, which immediately revealed cesium and rubidium, so named after the strongest colours in their spectra. Spectral analysis opened a way to explore the constitution of stars as well as of substances here below. Librarians of spectra invented several cataloguing rules, notably the ‘Balmer formula’ for hydrogen (after Johann Balmer, 1885), and its generalization, by Johannes Rydberg (Lund, 1888), to many series and elements. Their formulations contained the clue to what the Greeks would have rated an oxymoron: the internal structure of atoms.
Meteorologica provided many important new entries for the physics catalogue of 1900. Dalton had endowed atoms with different sizes to explain why the atmosphere’s constituents do not separate out according to weight. The different sizes, which implied different weights, cause each gas type to act as if the others were not present. After much wrangling over atomic weights, chemists came to agree enough in the 1860s to prompt the creation of a Ouija board that ordered known elements into families by weight and predicted the existence of unknown elements soon discovered. In the 1890s, meteorology again intervened significantly in atomistics when Lord Rayleigh (formerly Cambridge) discovered a minor constituent of the atmosphere he named argon. A family of ‘noble gases’ of the inactive argon type beginning with helium soon disclosed itself to the chemist William Ramsay. The new family fitted into the Ouija board perfectly, with one exception: argon and potassium had to swap places rightfully theirs by weight to preserve chemical periodicity. The noble gases thus confirmed the power, and deepened the mystery, of the periodic table of elements.
Further to geophysics, Newton’s tidal theory, refined by ‘harmonic analysis’ (William Whewell, Cambridge), grew powerful enough to become seriously misleading. The prevailing theory of the Earth’s formation around 1850, the ‘nebular hypothesis’ started by Kant and Laplace, derived our solar system from a spiralling gas cloud that heated as it coagulated and left Earth with a molten core. Kelvin observed that a mass of liquid under Earth’s surface should show tidal effects. None being known, he concluded that Earth is a rigid body.
From thermodynamics and the properties of surface rocks, Kelvin calculated that Earth has been cooling for more than a hundred million years—an indigestibly long time for people who credited biblical chronology, but not long enough for Darwin’s evolution to work. More acceptably to biblical literalists, Kelvin did not allow the human race very long to improve itself. The second law, he asserted, ensured that the Earth, once too hot for humans, soon would become too cold for them. His widely accepted views about the age and rigidity of Earth, apparently anchored on rock-hard physical principles, did not survive long into the 20th century. Analysis of seismic waves from deep earthquakes showed that only slow longitudinal waves crossed the core. This information allowed Beno Gutenberg (Göttingen, 1912) to estimate that the core’s radius is almost half Earth’s, and everyone to infer that, since liquids do not support transverse waves, Earth is not solid throughout. By similar techniques, Andrija Mohorovičić (Zagreb) located a discontinuity some 30 miles beneath the continents that divides the crust from the ‘mantle’ (1909–10). Thus macroscopic physics softened Kelvin’s rigid Earth. As for Earth’s age, microphysics identified an additional heat source in radioactivity that exploded it beyond the needs of evolutionists.
Kelvin was an exemplary physicist not only because of the importance of his discoveries, his breadth of interest, and his dedication to the programme of mechanical reduction, but also because he could diagnose serious difficulties in his science. In 1900, he bundled his reservations into two problem areas, which, in happy reference to meteorology, he labelled ‘clouds’. One he seeded on the fact that no one had managed to design ether that could do everything Maxwell required of it. The other was the equipartition of energy.
Like Kelvin, Boltzmann acknowledged that the progress of mechanical reduction had its difficulties, but saw no other way forward. He ridiculed the physical chemist Wilhelm Ostwald (Leipzig), who championed a science based only on energy and its transformations (‘energetics’), for admonishing colleagues not to ‘make unto thee any graven image, or any likeness of anything’. This commandment (Exodus 20:4–6) dissuaded physicists no more than it had worshippers of the golden calf. However, some, like Max Planck (Berlin), though critical of energetics, stuck as close as they could to the two laws of thermodynamics. Still others thought to reverse the argument that gave priority to mechanics and substitute electricity or heat in its stead. Boltzmann associated these schisms with the fin de siècle attack on received canons of art, music, and literature. Everywhere, he said in 1899, classicism had its enemies. But for clarity, longevity, and productivity he would stay with what he, perhaps the first among mortals, called ‘classical physics’—the science of clear mechanical models in space and time—and trust that some imperial Newton would dispose of equipartition.
Kelvin’s clouds grew more thunderous after 1900. Through Maxwell’s grand synthesis, the problem of how light propagates in a transparent body moving through stationary ether had taken on new urgency. Theory had explained some optical phenomena by supposing that moving matter drags a portion of the ether with it; but when the same theory required that it be pulled by spinning charged disks, it stubbornly remained quiescent. And yet moving ponderable bodies seemed to drag the ether surrounding them, since experiments to measure their velocity through it invariably failed. The most famous and delicate of these experiments, by A. A. Michelson (Cleveland, later Chicago) and Edward Morley (Cleveland), date from the 1880s.
As when Galileo sliced through the accumulated conundrums of motion by replacing physics with mathematics, so now H. A. Lorentz (Leyden) transformed Maxwell’s equations for moving bodies so as to kill terms that predicted detectable effects arising from motion. His manoeuvre in effect did away with the ether as a mechanical entity. In 1905, Albert Einstein, then a patent examiner in Bern, recognized that the Lorentz transformations were just what he needed to describe the physics in bodies moving with constant rectilinear velocities relative to one another.
Special relativity (Einstein’s theory of 1905) is as democratic as equipartition since it effectively places all observers at rest within their own ether. Light consequently always propagates towards or away from observers in free space at the same speed irrespective of their states of motion. To this principle Einstein added the equally intuitive proposition that the laws of physics should be the same in all inertial frames; no physics experiment can tell you whether the passing train or your train or both are in steady motion. From these easily comprehensible beginnings Einstein deduced very bizarre consequences: time dilation and space contraction (clocks run slower, and sticks grow shorter, in ‘moving systems’ as seen from ‘stationary’ ones), and the equivalence of matter and energy. However disorienting the conclusions, the principles could be construed as a rational expansion of the principles of classical physics.
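For readers who want the formulae behind the words, the Lorentz transformations (given here in their modern form, which Einstein rederived from his two principles) carry the whole content of time dilation, contraction, and the matter–energy equivalence.

```latex
% Coordinates (x', t') of a frame moving with constant speed v along x,
% written in terms of the 'stationary' coordinates (x, t); c is the speed of light.
x' = \gamma\,(x - vt), \qquad
t' = \gamma\left(t - \frac{vx}{c^2}\right), \qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
% A clock at rest in the moving frame is seen to run slow, and a rod at rest
% there to be shortened, by the same factor gamma:
\Delta t = \gamma\,\Delta\tau, \qquad L = \frac{L_0}{\gamma}
% and the equivalence of matter and energy takes the famous form E = mc^2.
```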
Discharging Kelvin’s second cloud required a passage through irrationality. The problem that provoked the madness concerned the equilibrium distribution of radiant energy contained in an oven at constant temperature (‘blackbody radiation’). This seemingly obscure problem had some industrial interest, as it related to standards of illumination, and the librarians of physics, foreseeing its solution, already had a place for it on the shelf between electrodynamics and thermodynamics. When ether was treated as a mechanical system, however, equipartition awarded every mode of ethereal vibration the same amount of energy, kT/2. As emphasized in 1900 by Lord Rayleigh, this democratic division could be catastrophic. If, like other vibrating systems, the ether had many more modes of vibration at high frequencies than at low, all the electromagnetic energy available to it should run into the ultraviolet and beyond.
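The arithmetic of the threatened catastrophe is brief; the mode count below is the standard one (not spelled out in the text), with each mode receiving kT/2 for its kinetic and kT/2 for its elastic energy.

```latex
% Number of ether modes per unit volume with frequency between nu and nu + dnu:
n(\nu)\,d\nu = \frac{8\pi\nu^2}{c^3}\,d\nu
% Equipartition gives each mode kT in all, so the radiant energy density is
u(\nu, T)\,d\nu = \frac{8\pi\nu^2}{c^3}\,kT\,d\nu
% which grows without limit at high frequencies: integrated over all nu the
% energy diverges -- the 'ultraviolet catastrophe'.
```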
To escape this predicament, Rayleigh and, more assertively, James Jeans (Cambridge) suggested that the process might take as long as the age of the universe, thus saving equipartition by postponing equilibrium to a time when no one was likely to observe it. That was not reasonable. Knowing nothing of Rayleigh’s deliberations, Planck published in the same year, 1900, a theoretical formula for the blackbody spectrum that appeared to agree with experiment. Soon, however, measurements in the infrared negated the formula. Planck thereupon found one that worked. He concocted a theory for it that subverted the classical physics of Boltzmann from which he thought he had derived it. That was irrational. Planck’s ‘quantum theory’ soon joined with other evidence that the microworld could not be described using the ordinary concepts of physics.
William Whewell, who gave the world ‘physicist’ and ‘scientist’, supplied Faraday with ‘ion’ to refer to the electrified unknowns in motion in a working electrolytic cell. In his doctoral dissertation at the University of Uppsala in 1884, Svante Arrhenius identified ions with charged molecular fragments and claimed their presence in all solutions. His concept of dissociation proved its power also in the study of electrical discharges through dilute gases. During the 1870s and 1880s, physicists discovered ‘cathode rays’, which proceed invisibly in straight lines from the cathode of the discharge tube to cause its walls to fluoresce. (Whewell made ‘cathode’ and ‘anode’ for Faraday as well as ‘ion’.) The obvious hypothesis, which supposed the rays to be negative gas ions repelled from the cathode, failed when J. J. Thomson (Cambridge) and others showed around 1897 that cathode-ray particles must have a ratio of charge to mass (e/m) about 1,000 times as large as that of the lightest electrolytic ion, hydrogen. Thomson inferred that the cathode-ray ‘ion’ (his word was ‘corpuscle’) represented matter in a state of complete dissociation.
The recognition of the corpuscle/electron, which re-established electric charge in British physics, capped a series of extraordinary discoveries associated with the gas-discharge tube. Late in 1895 Wilhelm Conrad Röntgen (Würzburg, later Munich) deduced from the glow of a phosphorescent screen some distance from a discharge tube generating cathode rays the existence of penetrating radiation of novel character (see Figure 21). He could not reflect or refract its rays or bend them with a magnet; but they made good photographs of the inside of a living human hand. He summed up his knowledge of their nature by calling them ‘X-rays’. Henri Becquerel (Paris) found that uranium gave off rays different from Röntgen’s. Marie Curie and her husband Pierre Curie (Paris) recognized in 1897–8 that thorium and the elements polonium and radium, which they discovered in uranium ores, also had the ability to radiate (‘radioactivity’).
21. Desktop physics. W. C. Röntgen pictured in his laboratory in Würzburg in 1895, the year he discovered X-rays. Research spaces in the institutes were few and small around 1900, although many physics experiments required dependable electricity and good vacuum pumps.
Thomson and his student Ernest Rutherford established that all the new rays ionized the air, which enabled Rutherford to distinguish a soft (‘alpha’) and a more penetrating (‘beta’) component among the Becquerel rays. The e/m of the beta ray came out close to that of the cathode-ray ion. Thomson’s inference that electrons might be the building blocks of matter had further confirmation in experiments performed in 1896 by Pieter Zeeman (Leyden, then Amsterdam) as elucidated by Lorentz. The ‘Zeeman effect’ (the splitting of spectral lines in a magnetic field) revealed that the electric oscillator supposed responsible for spectral emission had an e/m close to that of Thomson’s corpuscle. Thus, around 1900, physics got its first elementary particle and half a dozen rays for which its library had no shelf mark.
Continuing his study of alpha particles, Rutherford, now a professor in Montreal (1902), and his chemist colleague Frederick Soddy uncovered the tendency to self-destruction of radioactive substances. Most of the many fleeting decay products subsequently detected did not fit into the periodic table. This information prompted the realization, by 1913, that the discrepancy at argon/potassium was systematic: the Ouija board worked correctly when ordered not by weight but by an integer beginning with unity at hydrogen and assigned on the basis of chemical properties. More than one radioelement could occupy the same cell in the table (‘isotopy’): atoms differing in weight could have identical chemical properties. As a discovery, isotopy ranked in logic if not in depth with thermodynamics: in each case scientists realized that a single concept (atomic weight, energy) required another (atomic number, entropy) to provide an adequate description of the phenomena.
Shortly before the recognition of isotopy, Rutherford invented his version of the nuclear atom. It deviated from Thomson’s market-leading model, in which electrons circulate within a positively charged space, in being unstable both mechanically and electrodynamically. That had ruled it out in its previous versions. But by 1911 the few bold spirits who had given up on ordinary physics in the microworld could think that Rutherford’s atom was so bad that it might be a good place to look for quantum activity. Its advantages included, besides having classical physics against it, offering a simple representation of Z, the atomic number, as the charge on the nucleus, and explaining some details about the passage of alpha particles through matter.
Physicists entered the microworld definitively during the last few years before World War I, when their ability to assign exact values to the charges, masses, and numbers of atoms and electrons established their belief in them. The charge on the electron was the most valuable value: it produced, via measurable quantities, m, N, and k. In 1910, Robert Millikan (Chicago) gave the value e = 4.891 × 10⁻¹⁰ esu (electrostatic units) in his ‘oil-drop experiment’, a refinement of a technique using water invented in 1898 by J. J. Thomson. Planck’s theory of blackbody radiation gave e as 4.69 × 10⁻¹⁰ esu, close to Rutherford’s value from counting alpha particles, 4.65 × 10⁻¹⁰ esu.
Jean Perrin (Paris) confirmed these values from another direction by obtaining N from measurements of the dance of uniform tiny gumballs suspended in water between the tugs of gravity and osmotic pressure. Their dance, Brownian motion, arises from imbalances in the impacts of water molecules on them. In 1905, Einstein had calculated the mean displacement of the gumballs over time from the places at which they are first seen. The formula included N and measurable quantities. Perrin’s average, N = 7.0 × 10²³, which he gave in 1909, agreed fairly well with 6.2 × 10²³, calculated using e determined by Rutherford.
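The chain of arithmetic from the charge to m, N, and k is short; the sketch below uses round values in CGS units, taking the electron’s charge-to-mass ratio, the electrolytic faraday, and the gas constant as the auxiliary data.

```latex
% Electron mass, from the measured charge-to-mass ratio e/m (~5.3 x 10^17 esu/g):
m = \frac{e}{(e/m)} \approx \frac{4.9\times 10^{-10}}{5.3\times 10^{17}}
    \approx 9\times 10^{-28}\ \text{g}
% Number of molecules in a standard quantity, from the faraday F (~2.9 x 10^14 esu),
% the charge carried in electrolysis by that quantity of hydrogen ions:
N = \frac{F}{e} \approx \frac{2.9\times 10^{14}}{4.9\times 10^{-10}}
    \approx 6\times 10^{23}
% Boltzmann's constant, the 'wormhole' k, from the gas constant R:
k = \frac{R}{N} \approx \frac{8.3\times 10^{7}}{6\times 10^{23}}
    \approx 1.4\times 10^{-16}\ \text{erg/K}
```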
Planck’s successful formula for blackbody radiation contained two constants. One the theory identified as k; the other was an ad hoc number, h, which placed a threshold on the energy ϵ that an ether mode of frequency ν could take up. Hence ‘ϵ = hν’, a slogan as familiar to physicists as ‘E = mc²’. That Planck’s quantum hypothesis hid a breach with classical theory did not come fully to light until Einstein and others exposed it around 1905. Two years later, Einstein brought Planck’s formulation closer to the classical problem of equipartition by applying it to the vibrations (the specific heats) of elastic solids. Using Planck’s radiation law, Einstein obtained specific heats that deviated from the empirical law of Dulong and Petit at low temperatures. Measurements by Walther Nernst (Berlin) confirmed Einstein’s extrapolation and the insight that Planck’s law applied to vibrations far removed from its original jurisdiction. The problem of the quantum moved up on the agenda of physics. At Nernst’s suggestion, the Belgian industrialist Ernest Solvay sponsored a meeting in Brussels in 1911 at which all the important physicists interested in radiation and quanta came together to try to explain to one another what Planck’s quantum h meant. They did not succeed.
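Planck’s formula that ‘worked’ can be written in a line with the two constants just mentioned (a modern rendering, not Planck’s own route to it); its low-frequency limit recovers the classical spectrum, while the threshold ϵ = hν cuts off the high frequencies.

```latex
% Planck's blackbody law for the radiant energy density at frequency nu:
u(\nu, T)\,d\nu = \frac{8\pi h\nu^3}{c^3}\,\frac{1}{e^{h\nu/kT} - 1}\,d\nu
% For h*nu << kT the denominator is approximately h*nu/kT and the classical
% equipartition spectrum, 8*pi*nu^2 kT / c^3, reappears; for h*nu >> kT the
% exponential throttles the spectrum and averts the catastrophe.
```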
Instances of irrationality multiplied. X-rays spread like waves but interacted with matter like particles. The same paradox appeared in the photoelectric effect, discovered by Hertz in 1887, in which ultraviolet light, which everyone knew to be a wave, knocks electrons out of metals as if (so Einstein pointed out in 1905) it was composed of particles. Radioactivity offered another instance of paradox. Thomson had made the reasonable suggestion that a gradual loss of energy by atomic electrons through radiation caused radioactivity. Disciples of Boltzmann pointed out that Rutherford and Soddy’s law of radioactive decay admitted the interpretation that in a small interval of time every atom of a radioelement has the same probability of exploding. Franz Exner (Vienna) drew the surprising conclusion: the probability of radioactive disintegration and other fluctuations were chance events incalculable in principle. Many prominent physicists rejected the implied limitation on causality.
A postdoctoral student working in Rutherford’s laboratory in Manchester, Niels Bohr, welcomed the impasse. His doctoral thesis had generalized the electron theory of metals, pioneered by Thomson, Lorentz, and Paul Drude (Berlin), who treated conduction electrons confined in a wire as a free gas apart from their collisions with metal molecules. Bohr came closer than they did to deriving the Wiedemann–Franz law but failed with Curie’s law and heat radiation. He placed the blame on equipartition and supposed that a restriction like Planck’s must be imposed on atomic electrons. That did not surprise him. His reading of Danish philosophy and literature had prepared him to expect that physical theory must occasionally hit immovable barriers.
Bohr accepted the nuclear atom and removed its defects by fiat: in their ground states, atomic electrons shall shirk their classical obligations to radiate and to perturb their common circular motion. Early in 1913, he encountered the Balmer–Rydberg formula in the form νₙ = K(1/2² − 1/n²), where n, a running integer, designates a spectral line. Multiplying both sides by h and reading the result as an energy equation in Planck’s style, Bohr recognized the existence of excited ‘stationary states’ in which electrons circulated with the immunities they had in the ground state and with energy −hK/n². Balmer lines originate in an unmotivated transition (‘quantum jump’) from the nth to the second stationary state. By interpreting −hK/n² as the energy of the nth stationary state and allowing ordinary physics inconsistently to rule the state, Bohr derived Rydberg’s K in terms of the atomic constants (K = 2π²me⁴/h³). This tour de force persuaded Einstein, Jeans, and other alumni of the Solvay Council that Bohr had found a way forward.
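A compressed version of the argument, using only the quantities named above, shows how reading the formula as an energy balance gives the stationary-state energies and, with ordinary mechanics allowed inside the states, the value of K.

```latex
% Multiply the Balmer-Rydberg formula by h and read it as an energy equation:
h\nu_n = hK\left(\frac{1}{2^2} - \frac{1}{n^2}\right)
       = \left(-\frac{hK}{n^2}\right) - \left(-\frac{hK}{2^2}\right)
% i.e. a quantum jump from the nth stationary state, of energy E_n = -hK/n^2,
% to the second. Ordinary mechanics for an electron of mass m and charge e
% circling the nucleus, together with Bohr's quantum condition, then gives
E_n = -\frac{2\pi^2 m e^4}{n^2 h^2}
\quad\Longrightarrow\quad
K = \frac{2\pi^2 m e^4}{h^3}
```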
Although Bohr offered a ‘correspondence principle’ that hinted at how the calculations of his quantized atom could be made to agree with those of classical physics in certain limits, he emphasized that even where calculations agreed, the processes did not. His quantum jumps thus joined radioactive explosions as the first examples of natural phenomena placed formally beyond the reach of human inquiry since the time of Thales.
Most of the 700 attendees of the International Congress of Physics of 1900—about one in four of the world’s physicists—held paid posts in teaching (60 per cent), industry (20 per cent), and government (20 per cent). An important novelty was the slowly growing subset of theorists. They occupied chairs primarily in Germany and Austria. In Britain and the British Empire, graduates of the Cambridge regime in mathematics held about half the permanent positions in physics in 1900. Their ability to devise and calculate the behaviour of mechanical models gave English physics an orientation that favoured the production of atomic models like those of Thomson and Rutherford.
French physics had a quite different but equally distinctive epistemology, imbued with positivism and instilled at the École polytechnique and École normale, through which over half the academic physicists active in France in 1900 passed. This positivism, with its reluctance to commit to models of the microworld, marked the approach of the Curies. No standard epistemology or hegemonic centre existed in Germany, where students customarily attended more than one university or polytechnic, or in the United States, whose professors came from a variety of universities and typically finished their training in decentralized Germany.
Most informed observers ranked Germany first among the science-producing nations around 1900 largely because of the quality of German-language publications, the preponderance of German scientific instruments, and the generous support of science institutes by several states of the Reich. Decentralization and competition were the driving principles and the guarantee that in Germany the best scientists would not be crushed, as in France, into the apex of a single educational system. Once at the Parisian apex, however, a physicien could regain space by accumulating posts (cumul). Henri Becquerel, by no means an overachiever, held three professorships in Paris, one won on his own and two passed down as if private property from his father and grandfather.
In the British Empire, little support for academic physical science came from the central government; and in the United States, none. A large source of funds for expansion in both came from individuals or corporations. Spokesmen in each country, particularly in Britain, compared this uncertain funding unfavourably with the generosity of the German Länder. Nonetheless, private philanthropy was expanding the material base for physics more rapidly in the Anglo-Saxon countries than in their principal rival. German statesmen of science pointed to this generosity as something to fear and emulate.
Around 1900 Britain and the United States, embarrassed to have to send their products to Germany’s Physikalisch-Technische Reichsanstalt (PTR, 1887) for certification, established standards laboratories of their own. Like the PTR, the British National Physical Laboratory (1900) and the American National Bureau of Standards (1901) employed graduate physicists in increasing numbers. Naturally these recruits pressed to do research, some of which, like measurements of the blackbody spectrum at the PTR, related to fundamental problems. A few industries producing the products for testing also established research laboratories. Staff at the General Electric Research Laboratory (1900) numbered perhaps 200 in 1913, including the former academic physicists Irving Langmuir and W. D. Coolidge. A little behind General Electric came American Telephone and Telegraph (1907), Corning Glass (1908), Eastman Kodak (1912), Philips Eindhoven (1914), and Siemens & Halske (1900, 1913). (See Figure 22.)
22. The palace of electricity. The palace, a feature of the Paris Exposition of 1900, impressed visitors with its displays of colored lights. Electricity also won surprised friends by powering moving sidewalks around the extensive exhibition grounds.
Three other institutional forms created with government and/or industrial money in the early 20th century affected the tone and pace of physical science. The Nobel Prizes, endowed with the proceeds of dynamite and smokeless powder, were intended by their founder to reward those ‘who, during the preceding year, shall have conferred the greatest benefit on mankind’. Although the first prize in physics, to Röntgen for X-rays, met this test, the professors soon conquered the system and rewards went more and more to academic work of no immediate practical value.
Andrew Carnegie’s endowment of $10 million in 1901 for a research institute in Washington, DC, staggered contemporaries. Most of the income supported studies of meteorological subjects. To inspire similar generosity among the Reich’s industrialists, a Kaiser-Wilhelm-Gesellschaft (KWG) came into being to assist in financing ‘pure’ research institutes. Starting from the premise that ‘Science has reached a point in its scope and thrust that the state alone can no longer care for its needs,’ the society’s projectors pointed to the Carnegie Institution, the Nobel Institution, and a gift of a million dollars to the University of Chicago for a physics institute, as proof that Germany was falling behind. The KWG incorporated in January 1911, with a pledged private capital of around a fourth of the endowment of the Carnegie Institution. The society’s institute for physics opened virtually in 1917, with Einstein as director and Planck as effective executive, but waited for a building for twenty years, and then the Rockefeller Foundation paid for it. Apparently, the society did not think that an establishment run by Einstein and Planck was likely to contribute much to German industry.
As physics professionalized, the old division of labour between the academy and the university ceased. Research increasingly became an expectation, and then an obligation, of professors. Careers depended more and more on publication in the transactions of the main scientific academies and professional societies. Among the societies, which were themselves creations of the later 19th century, those of Berlin and London, and the Institute of Electrical Engineers, were perhaps the most prestigious publishers. Among the national academies, the French, British, and American, and among regional academies those of Berlin (Prussia), Göttingen (Hanover), Leipzig (Saxony), and Munich (Bavaria) remained important outlets. In addition, general or disciplinary news journals like Nature (founded 1871), La Nature (1873), and Physikalische Zeitschrift (1899) disseminated research reports and good tidings.
Above the local level (universities, polytechnics, government and industrial laboratories) and the regional/national level (professional societies, academies) stood, in the scale of inclusiveness, the national associations for the advancement of science and international organizations of various degrees of specialization and permanence. The national associations collected scientists, supporters of science, and the interested public in large annual meetings that wooed inclusiveness by changing venue from year to year. The Gesellschaft deutscher Naturforscher und Ärzte was the oldest (1822), followed closely by the British Association for the Advancement of Science (1831), and, later, by similar bodies in France, the United States, and Italy.
Internationalism was as distinctive a feature of the half-century before World War I as were the nationalisms it tried to ameliorate. Encouraged by cosmopolitan or humanitarian promptings, enabled by the efficient communication and transportation system of Europe, and supported by governments seeking international agreements in technical matters, international conferences on various branches of physics tripled, from 1.7 per year around 1875 to 5.5 per year around 1905. The logical apex of the international movement in physics was an organization to coordinate the world’s work in physical science in accordance with a grand research plan. In 1912, Solvay partially filled this logical space by founding an Institut international de physique. In 1914, Charles Edouard Guillaume, director of the Bureau international des poids et mesures (1875), proposed the establishment of an international association of physical societies. His timing was not good.