The Scientific Revolution
Near the end of the 16th century, Galileo Galilei (1564–1642) at first accepted Aristotelian ideas and such modifications as impetus, but he soon brought a radical concept to studies of motion. Instead of trying to explain why objects move as they do, he experimented and then described exactly how they move. He also used experiments to determine how forces affect objects at rest. Although he made some errors, his basic conclusion of 1590—that an object in motion continues to move in a straight line until stopped by a force—is still accepted. His other famous conclusion—that light bodies and heavy bodies fall through the same distance in the same amount of time—is also true. He established it by experiment and announced the correct mathematical law governing falling bodies (distance increases with the square of time) in 1638.
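Galileo’s square-of-time law lends itself to a quick numerical check. A minimal sketch in modern notation (the constant g is a modern value that Galileo himself never measured directly):

```python
# Galileo's law of falling bodies: distance grows with the square of time.
# Modern form: d = (1/2) * g * t**2, with g ~ 9.8 m/s^2 near Earth's surface.

g = 9.8  # acceleration due to gravity, m/s^2

for t in range(1, 5):              # elapsed time in seconds
    d = 0.5 * g * t**2             # distance fallen, in meters
    print(f"t = {t} s: d = {d:5.1f} m (ratio to first second: {d / (0.5 * g):.0f})")

# The ratios print as 1, 4, 9, 16: the square-of-time law announced in 1638.
```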
Other scientists continued in the same vein as Galileo during the 17th century. The German astronomer Johannes Kepler (1571–1630) advanced optics and showed in 1604 that the intensity of light diminishes in proportion to the square of the distance from its source. In 1643 the Italian physicist Evangelista Torricelli (1608–47) invented the barometer and showed that Aristotle had been wrong about the impossibility of a vacuum, since a vacuum forms above the mercury column in the original barometer. Blaise Pascal experimented with the vacuum and with fluids, establishing that pressure applied to a fluid is transmitted equally in all directions and always acts perpendicular to the surface of the container (1654, published 1662). Isaac Newton experimented with breaking light into its components (1665) and reported that white light is the combination of the colored lights of the rainbow (1675).
This period, when experiments began to dominate physics, is known as the scientific revolution. It culminated in 1687 with the publication of Newton’s Principia. Newton improved on Galileo’s laws of motion and combined them with a mathematical law of gravity. The combination was sufficient to explain not only the motions of objects on Earth, but also the motions of the heavenly bodies (see Law of Gravity and Newton’s Laws of Motion). Newton (and independently Leibniz) had also invented a new mathematical tool, the calculus. Throughout the 18th century, Newtonian physics and calculus were combined to develop systematically a wide range of topics in physics, ranging from acoustics to the detailed orbits of the planets. Some scientists believed that if the exact position and momentum of every particle in the universe were known, the future of the universe could be predicted exactly as well.
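The reach of Newton’s combination of laws can be suggested with a short numerical sketch (not any historical calculation): gravity plus the laws of motion, integrated step by step, traces a closed orbit. Constants and step size here are rounded modern values chosen for illustration:

```python
import math

# Sketch: Newton's law of gravity (F = G*M*m/r^2) combined with his second law
# (F = m*a) is enough to trace Earth's orbit numerically. The integration
# scheme (semi-implicit Euler) is a simple illustrative choice.

G = 6.674e-11                # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30                 # mass of the sun, kg
x, y = 1.496e11, 0.0         # Earth's initial position, m (one astronomical unit)
vx, vy = 0.0, 29780.0        # Earth's orbital speed, m/s
dt = 3600.0                  # time step: one hour, in seconds

for _ in range(24 * 365):    # integrate for roughly one year
    r = math.hypot(x, y)
    ax, ay = -G * M * x / r**3, -G * M * y / r**3   # acceleration toward the sun
    vx, vy = vx + ax * dt, vy + ay * dt             # update velocity first...
    x, y = x + vx * dt, y + vy * dt                 # ...then position

print(f"distance from sun after ~1 year: {math.hypot(x, y):.3e} m")
# Stays close to the starting 1.496e11 m: the orbit closes, as the laws predict.
```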
Electromagnetism Although Newton’s work explained how gravity functioned, it did not explain why material objects attract each other with this force. There were also other forces that remained unexplained, and less was known of their rules. As early as 1600 William Gilbert applied the experimental method to two of these forces, identifying and differentiating between magnetism and static electricity. A hundred years later, scientists began trying to tease from nature the secrets of these forces. At first, weak electric charges were produced by rubbing glass tubes with silk or by similar means. In 1729 another English experimenter, Stephen Gray (1666–1736), was the first to recognize that these weak charges could travel from one material to another through substances that were later called conductors. He soon showed that when conductors do not carry away the charge, almost anything—even a human being—can be charged with electricity. A French experimenter, Charles Du Fay (1698–1739), was the first to recognize that there are two kinds of charge and that like charges repel each other, whereas unlike charges attract (1733).
The English and French experimenters and their assistants also began to build up charges strong enough to produce the first recognized shocks. In 1746 the invention of a device to store static electricity (called a Leiden jar after the site of its discovery) permitted experiments with much more powerful charges. In 1751 Benjamin Franklin proposed that the small shocks from Leiden jars and the powerful shock of lightning were the same phenomenon; he proved his theory in 1752 by flying a kite in a thunderstorm and conducting the charge down the wet string. In 1769 a Scottish scientist, John Robison (1739–1805), showed that the repulsive force caused by charge obeys an inverse-square law like the law for loss of intensity of light over distance.
A new source of electric charge began to be developed in Italy during the 1770s and 1780s when Luigi Galvani (1737–98) investigated charge produced in the muscles of animals, which Alessandro Volta (1745–1827) recognized as the result of chemical interactions. In 1800 Volta built a chemical device (similar to a modern automobile battery) that produced the first current electricity.
Meanwhile a parallel set of experiments with magnets began in 1749 when the English experimenters John Canton (1718–72) and John Michell (1724–93) developed stronger magnets than occur in nature. Michell immediately used his magnets to derive the mathematical laws of attraction and repulsion. In 1751 Benjamin Franklin showed that electric charge can produce magnetism. In 1785 the French physicist Charles Coulomb (1736–1806) carefully measured both electric and magnetic forces and found that both obey exactly the same inverse-square law. As early as 1807 the Danish physicist Hans Christian Oersted (1777–1851) began to search for a deeper connection between electricity and magnetism, which he found in 1820 when he observed that an electric current affects a magnetized needle. The recognition that electricity and magnetism are closely connected quickly led to the discovery of the laws governing electromagnetism (see Laws of Current Electricity) as well as to devices that combined the two forces to produce motion (electric motors), powerful electric currents (generators, or dynamos), and powerful electromagnets.
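Coulomb’s measurement is captured by the modern formula F = kq₁q₂/r², where the constant k is a later formalization in SI units, not a figure Coulomb used. A minimal sketch of the inverse-square behavior:

```python
# Coulomb's law in modern SI form: F = k * q1 * q2 / r**2.
# Doubling the separation cuts the force to one quarter.

k = 8.99e9             # Coulomb constant, N m^2 / C^2
q1 = q2 = 1e-6         # two charges of one microcoulomb each

for r in (0.1, 0.2, 0.4):                  # separation in meters
    F = k * q1 * q2 / r**2
    print(f"r = {r} m: F = {F:.4f} N")
# Prints 0.8990, 0.2248, 0.0562 N: the same inverse-square pattern that
# governs the intensity of light and, in Newton's law, gravity.
```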
Light At the same time as charge and magnetism were being analyzed, there were apparently unrelated studies concerning light. As early as 1678, the Dutch physicist Christiaan Huygens (1629–95) had proposed a theory of light based on waves. But in 1704 Newton published Opticks, which summarized his view that light consists of small particles. About a hundred years later the study of light experienced several rapid advances. In 1800 and 1801 two forms of invisible light were discovered: infrared by William Herschel and ultraviolet by Johann Ritter (1776–1810). Also in 1801, the English scientist Thomas Young (1773–1829) conducted experiments that convinced scientists everywhere that light must be a wave phenomenon, a view reinforced in 1808 when the French physicist Etienne Malus (1775–1812) discovered polarized light, a form of light in which the waves are confined to a plane.
It was already known that electric charge could in some circumstances produce light (in lightning, for example). In 1839 the French physicist Edmond Becquerel (1820–91) determined that the opposite also occurs in some circumstances: light falling on certain materials produces an electric current, a phenomenon known as the photovoltaic effect. A few years later Michael Faraday showed that a magnetic field changes the polarization of light (1845). With these discoveries in mind, James Clerk Maxwell concluded that light consists of waves incorporating both electricity and magnetism—that is, electromagnetic waves. He predicted that electromagnetic waves also exist at frequencies below those of infrared and above those of ultraviolet radiation, beyond both ends of the then-known spectrum. In 1873 Maxwell published a complete mathematical theory of electromagnetism.
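Maxwell’s identification of light with electromagnetic waves rested on a striking numerical fact: a wave speed computed purely from electric and magnetic constants matches the measured speed of light. A sketch with modern SI constants:

```python
import math

# Maxwell's theory predicts electromagnetic waves moving at
# c = 1 / sqrt(mu_0 * epsilon_0), built from electric and magnetic constants alone.

mu_0 = 4 * math.pi * 1e-7      # permeability of free space (magnetic constant)
epsilon_0 = 8.854e-12          # permittivity of free space (electric constant)

c = 1 / math.sqrt(mu_0 * epsilon_0)
print(f"predicted wave speed: {c:.3e} m/s")    # ~2.998e8 m/s, the speed of light
```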
There were still mysteries. While setting up equipment to produce and detect radio waves (the long electromagnetic waves predicted by Maxwell) in 1887, the German physicist Heinrich Hertz (1857–94) observed that light shining on the apparatus affects the size of an electric spark. Further investigation with more energetic electromagnetic radiation revealed that the amount of charge released by a metal depends on the frequency rather than the intensity of the radiation, a finding that made no sense at first. The problem was resolved in 1905 when Albert Einstein showed that light, as Newton had proposed, behaves in this case as a stream of particles rather than as a wave.
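Einstein’s resolution fits in one formula: each quantum of light carries energy E = hf, and an electron escapes the metal only if that energy exceeds the metal’s “work function” W. A sketch (the value of W here is illustrative, roughly right for a metal such as sodium):

```python
# Einstein's photoelectric equation: KE = h*f - W. If h*f < W, no electron is
# ejected no matter how intense the light: frequency, not intensity, decides.

h = 6.626e-34          # Planck's constant, J s
eV = 1.602e-19         # one electron volt, in joules
W = 2.3 * eV           # illustrative work function (roughly sodium), joules

for f in (5.0e14, 7.5e14, 1.5e15):    # orange light, violet light, ultraviolet (Hz)
    KE = h * f - W
    result = f"electron ejected with {KE / eV:.2f} eV" if KE > 0 else "no electron"
    print(f"f = {f:.1e} Hz: {result}")
```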
Heat As early as 1724 scientists tried to explain heat and cold with the idea that heat is an unusual component of matter, similar to a fluid, which they called caloric. The caloric theory persisted throughout the 18th century until a decisive experiment in 1798 by Benjamin Thompson (Count Rumford, 1753–1814) showed that heat is closely connected to motion. Scientists have since understood heat as an effect of the motion of the molecules in a substance (cold being simply the absence of heat, or slower molecular motion), but it was not until 1860 that James Clerk Maxwell worked out the mathematical theory of such moving particles, a theory later extended by the Austrian physicist Ludwig Boltzmann (1844–1906).
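In the Maxwell-Boltzmann picture, temperature directly measures molecular motion: the average kinetic energy per molecule is (3/2)kT. A sketch estimating how fast nitrogen molecules move in room-temperature air:

```python
import math

# Kinetic theory: average kinetic energy per molecule = (3/2) * k * T, so the
# typical (root-mean-square) molecular speed is v = sqrt(3 * k * T / m).

k = 1.381e-23       # Boltzmann's constant, J/K
T = 293.0           # room temperature, in kelvins (about 20 C)
m = 4.65e-26        # mass of one nitrogen (N2) molecule, kg

v_rms = math.sqrt(3 * k * T / m)
print(f"typical N2 speed at {T:.0f} K: {v_rms:.0f} m/s")   # ~511 m/s
```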
Meanwhile, physicists were discovering the general laws of heat. The French physicist Sadi Carnot (1796–1832), after studying the still-new steam engines, established mathematically in 1824 that work is done as heat passes from a high temperature to a lower one and that the maximum amount of work possible depends only on the two temperatures involved. Heat was recognized as a form of energy, along with motion, electricity, light, and stored, or potential, energy. Several English and German physicists measured exactly the amount of heat produced by motion, work that led to the laws of thermodynamics (“movement of heat”). With the new understanding of heat, the British physicist William Thomson (Baron Kelvin, 1824–1907) recognized in 1851 that the total absence of heat would correspond to a specific coldest possible temperature, absolute zero.
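Carnot’s result and Kelvin’s absolute zero combine into a compact formula: the maximum fraction of heat convertible into work is 1 − T_cold/T_hot, with both temperatures counted from absolute zero. A sketch with illustrative steam-engine temperatures:

```python
# Carnot's limit: maximum efficiency = 1 - T_cold / T_hot, with both
# temperatures measured in kelvins, i.e., degrees above absolute zero.

def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    return 1.0 - t_cold_k / t_hot_k

# Illustrative steam engine: boiler at 200 C (473 K), exhaust at 40 C (313 K).
print(f"maximum possible efficiency: {carnot_efficiency(473.0, 313.0):.1%}")
# Prints ~33.8%: no engine working between these temperatures can do better.
```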
Experimentalists used various methods to lower temperatures nearly to absolute zero, liquefying air in 1878, hydrogen in 1895, and helium, the element that has the coldest known transition from a gas to a liquid, in 1908.
With liquid helium near absolute zero, strange new forms of matter could be created. One of the most important is matter that superconducts: an electric current started in a ring of superconducting material will continue around the ring indefinitely as long as the temperature is kept within a few degrees of absolute zero. Since the Dutch physicist Heike Kamerlingh-Onnes (1853–1926) discovered the first superconducting material in 1911, other materials, called high-temperature superconductors, have been found (starting in 1986), although none are superconducting at temperatures above –200°F (–130°C). Liquid helium itself was found to have unusual properties related to those of superconductors, such as superfluidity. Like some very cold gases, first produced in 1995, liquid helium can form a Bose-Einstein condensate (BEC), matter in which the atoms merge into a single superatom, first predicted by Albert Einstein in 1924.
Relativity Maxwell’s theory of electromagnetism (proposed in 1873) assumed that electromagnetic waves must be motions in some all-pervasive but undetectable substance, which was called the ether. Various attempts were made to define the properties of the ether and, in a famous failed experiment of 1887, to determine Earth’s motion through it. The Polish-American physicist Albert Michelson (1852–1931) and the American physicist Edward Morley (1838–1923) used a sensitive device invented by Michelson to compare the speed of light in the direction of Earth’s motion through space with its speed perpendicular to that motion, but failed to find any difference, suggesting that the ether was a flawed concept. When Einstein developed the special theory of relativity (1905), however, he concluded that electromagnetic waves do not need an ether to explain their properties. He took as a postulate that light travels through a vacuum at the same speed under all conditions; thus one cannot determine how Earth is moving by looking for variations in the speed of light that such motion would cause. He also postulated that the laws of physics must be the same for any two observers moving at constant velocity with respect to each other. From these ideas he concluded that the universe can be described in terms of four-dimensional space-time and that matter and energy are related by the famous equation E = mc², where E is energy, m is mass, and c is the speed of light in a vacuum. Relativity theory also showed that time can be viewed as a dimension related to the dimensions of space. A definition of modern physics, then, might be that it is the study of matter-energy in space-time.
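The import of the famous equation is easiest to see numerically: even a tiny mass corresponds to an enormous energy. A sketch converting one gram:

```python
# Mass-energy equivalence: E = m * c**2.

c = 2.998e8        # speed of light in a vacuum, m/s
m = 0.001          # one gram, expressed in kilograms

E = m * c**2
print(f"E = {E:.3e} joules")                       # ~8.99e13 J
print(f"about {E / 3.6e9:,.0f} megawatt-hours")    # using 3.6e9 J per MWh
```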
Next Einstein considered what happens if one entity is accelerated with relation to the other. He based this theory, the general theory of relativity, on the idea that no test can determine a difference between gravitational force and the force produced by acceleration, called inertia.
The general theory of relativity, which resulted from this postulate in 1915, is a description of gravity in terms of the curvature of space-time. Einstein’s theory explained previously observed, but unexplained, changes in the orbit of Mercury, and in 1919 observations made during a total solar eclipse confirmed its prediction that light from a star is bent by the sun’s gravitational field. Almost as soon as the general theory was published, it became clear that the theory as originally formulated predicted an expanding universe and also predicted the existence of what we now call black holes (1917), stars that have collapsed into points with a gravitational force so strong that light cannot escape. In 1979 another effect predicted by the theory, the lensing effect caused by the gravitational force of an entire galaxy, was observed for the first time; since then, gravitational lenses have become one of the principal tools astronomers use to observe the early universe.
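The “point of no return” of a collapsed star can be made quantitative with the Schwarzschild radius, r = 2GM/c², which follows from Einstein’s equations. A sketch for a star of one solar mass:

```python
# Schwarzschild radius: r = 2 * G * M / c**2. A mass squeezed inside this
# radius becomes a black hole, from which light cannot escape.

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_sun = 1.989e30    # mass of the sun, kg

r = 2 * G * M_sun / c**2
print(f"Schwarzschild radius for one solar mass: {r / 1000:.1f} km")   # ~3.0 km
```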
Einstein thought in 1917 that the universe should be static, but his original equations implied that gravity would make such a universe slowly collapse. To resolve this, he added a “cosmological constant” to the equations of general relativity to provide a small force opposing gravity. Einstein was wrong about the inevitability of collapse, for the Dutch physicist Willem de Sitter (1872–1934) soon showed that Einstein’s equations without the cosmological constant could also describe an expanding universe. When expansion of the universe was observed by astronomers in the 1920s, Einstein abandoned the cosmological constant. In recent years, however, astronomers have detected an acceleration of the expansion of the universe. The mysterious force that causes this acceleration is called “dark energy.” Some physicists think that dark energy is evidence that Einstein’s cosmological constant was correct and should be reinstated.
Particles and Quantum Theory Several Greek and Roman writers had proposed that matter is made from small, indivisible particles; this theory was revived in 1803 as the atomic theory by the British chemist John Dalton (1766–1844). During the 19th century the idea of indivisible atoms came to be accepted, but near the end of the century evidence emerged that atoms are themselves made from even smaller particles. The electron was discovered by the English physicist J. J. Thomson (1856–1940) in 1897 and found to be far smaller than the smallest atom. Two years later, Thomson showed that the electron is a part of the atom.
Because electrons have a negative charge but atoms are electrically neutral, it was apparent that there must be some particle (or other entity) in the atom with a positive charge to neutralize the charge of the electrons. By 1911 the New Zealand-born British physicist Ernest Rutherford (1871–1937) had established that the positive charge is carried by a particle much heavier than the electron; he later named this particle the proton. The Danish physicist Niels Bohr (1885–1962) developed the mathematical theory of hydrogen, which has the simplest atom, in 1913. He found that the theory agreed with experiment only if he assumed that electrons can travel in only a few particular orbits and that they must be able to change from one orbit to another instantly (giving off or absorbing light in the process).
The idea that light energy comes only in separate (discrete) amounts had first been used in 1900 to explain the spectrum of light emitted as a body is heated. The discrete amounts were called quanta by the German physicist Max Planck (1858–1947), who had developed this theory. In 1905 Einstein used the same idea to explain the phenomenon discovered by Hertz in 1887, showing that light behaves like particles (quanta of light). Bohr showed that electron orbits are also quantized. Thus the theory of particle behavior is called the quantum theory.
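Planck’s quanta and Bohr’s orbits together predict hydrogen’s spectrum: the electron’s allowed energies are E_n = −13.6/n² electron volts, and a jump between orbits emits a photon carrying the difference. A sketch reproducing hydrogen’s visible lines:

```python
# Bohr's hydrogen atom: allowed energies E_n = -13.6 / n**2 eV. A jump from a
# higher orbit down to orbit 2 emits a photon of wavelength lambda = h * c / E.

h = 6.626e-34       # Planck's constant, J s
c = 2.998e8         # speed of light, m/s
eV = 1.602e-19      # one electron volt, in joules

def level(n: int) -> float:
    return -13.6 / n**2         # energy of orbit n, in eV

for n_high in (3, 4, 5):
    E_photon = (level(n_high) - level(2)) * eV   # energy released falling to orbit 2
    wavelength_nm = h * c / E_photon * 1e9
    print(f"orbit {n_high} -> 2: {wavelength_nm:.0f} nm")
# Prints ~656, 486, 434 nm: hydrogen's visible (Balmer) spectral lines.
```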
Quantum theory advanced rapidly in the 1920s, beginning with the idea, proposed in 1924 by the French physicist Louis de Broglie (1892–1987), that particles such as the electron have a wave aspect. The following year the Pauli exclusion principle (see “Two Basic Laws of Quantum Physics”) and the matrix theory of the electron were established, along with the concept of particle spin. In 1926 the Austrian physicist Erwin Schrödinger (1887–1961) developed the equation of the electron wave. In 1927 the Heisenberg uncertainty principle was introduced. During this period the only known particles were the photon, electron, and proton, but in 1930 Wolfgang Pauli (1900–58) proposed the neutrino, the first of dozens of additional particles (see Subatomic Particles). Quantum theory was cast into the more precise form called quantum electrodynamics in 1947, when several physicists developed mathematical techniques to resolve problems with the original quantum theory.
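De Broglie’s idea assigns every particle a wavelength λ = h/p, where p is its momentum. A sketch showing why the wave aspect matters for electrons yet is unobservable for everyday objects:

```python
# De Broglie wavelength: lambda = h / (m * v). Noticeable for light, slow
# particles; immeasurably small for everyday objects.

h = 6.626e-34      # Planck's constant, J s

def de_broglie(mass_kg: float, speed_m_s: float) -> float:
    return h / (mass_kg * speed_m_s)

print(f"electron at 1e6 m/s: {de_broglie(9.11e-31, 1e6):.2e} m")  # ~7.3e-10 m, atom-sized
print(f"baseball at 40 m/s:  {de_broglie(0.145, 40.0):.2e} m")    # ~1.1e-34 m, undetectable
```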
Nuclear Physics Radioactivity, which was discovered in 1896 by the French physicist Henri Becquerel (1852–1908), was the key to the discovery of the proton and to the concept that each atom has a positive nucleus surrounded by negative electrons. The study of the nucleus could not advance much until the discovery, in 1932, of the neutron, a neutral particle that is part of the nucleus of every atom except the simplest hydrogen atom. Different forms of the same element, called isotopes, have the same number of protons in the nucleus but different numbers of neutrons.
The French wife-and-husband team of Irène Joliot-Curie (1897–1956) and Frédéric Joliot-Curie (1900–58) showed in 1934 that an element can be changed into a radioactive isotope by bombarding its atoms with alpha particles. In 1937 the Italian-American physicist Emilio Segrè used the same idea to produce a previously unknown artificial element, technetium. In 1940 the first artificial element with an atomic number higher than that of uranium was created and named neptunium, element 93. The following year element 94, plutonium, joined the list. Today there are artificial elements through element 118.
In 1938 the German chemist Otto Hahn (1879–1968) and the Austrian physicist Lise Meitner (1878–1968) discovered that the large uranium atom could break into pieces (fission) when struck with a neutron, releasing additional neutrons and large amounts of energy in the process. This discovery led to the atomic, or nuclear fission, bomb and to nuclear power (see “Technology”). Also in 1938 two physicists, Hans Bethe and Carl von Weizsäcker, proposed that in the intense heat and pressure of the interior of a star, hydrogen nuclei combine with each other to form helium (fusion), releasing energy in the process. This process also led to the development of a fusion bomb (the hydrogen bomb, 1952).
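The energy of fusion comes from a small deficit of mass: four hydrogen nuclei outweigh the helium nucleus they ultimately form, and the difference is released as energy through E = mc². A sketch with rounded modern masses:

```python
# Fusion bookkeeping: the mass lost when four protons end up as one helium-4
# nucleus is released as energy, E = m * c**2. Masses are rounded modern values.

c = 2.998e8               # speed of light, m/s
m_proton = 1.6726e-27     # kg
m_helium4 = 6.6447e-27    # kg

mass_lost = 4 * m_proton - m_helium4
E = mass_lost * c**2
MeV = 1.602e-13           # one million electron volts, in joules
print(f"mass lost: {mass_lost:.2e} kg -> about {E / MeV:.0f} MeV per helium nucleus")
```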
In the last decades of the 20th century physicists developed the standard model of elementary particles. This model incorporates three of the four fundamental forces in nature: the strong and weak nuclear forces and the electromagnetic force (the fourth is gravity). The standard model has so far stood up to testing, though the goal for scientists is to incorporate gravity into the standard model, creating a “theory of everything” (see “String Theory and Supersymmetry” and “Subatomic Particles”).
Times focus
The Large Hadron Collider and the Higgs Particle
Call it the Hubble Telescope of Inner Space.
The Large Hadron Collider, located 300 feet underneath the French-Swiss border outside Geneva, is the world’s biggest and most expensive particle accelerator. It is designed to accelerate the subatomic particles known as protons to energies of 7 trillion electron volts apiece and then smash them together to create tiny fireballs, recreating conditions that last prevailed when the universe was less than a trillionth of a second old.
Whatever forms of matter and whatever laws and forces held sway Back Then—relics not seen in this part of space since the universe cooled 14 billion years ago—will spring fleetingly to life. If all goes well, they will leave their footprints in four mountains of hardware and computer memory that international armies of physicists have erected in the cavern.
After 16 years and $10 billion, on March 30, 2010, the collider finally began its work of smashing subatomic particles. The day was a milestone—delayed a year and a half by an assortment of technical problems—and brought closer a moment of truth for CERN and for the world’s physicists, who have staked their credibility and their careers, not to mention all those billions of dollars, on the conviction that they are within touching distance of fundamental discoveries about the universe. If they fail to see something new, experts agree, it could be a long time, if ever, before giant particle accelerators are built on Earth again, ringing down the curtain on at least one aspect of the age-old quest to understand what the world is made of and how it works.
“If you see nothing,” said John Ellis, a theoretical physicist at CERN, “in some sense then, we theorists have been talking rubbish for the last 35 years.”
Machines like CERN’s new collider get their magic from Einstein’s equation of mass and energy. The more energy these machines can pack into their little fireballs, the farther back in time they can go, in effect, closer and closer to the Big Bang, and the smaller and smaller the things they can see.
The new hadron collider, scientists say, will take physics into a realm of energy and time where the current reigning theories simply do not apply, corresponding to an era when cosmologists think that the universe was still differentiating itself, evolving from a primordial blandness and endless potential into the forces and particles that constitute modern reality.
One prime target is a mysterious particle called the Higgs that is thought to endow other particles with mass, according to the reigning theory of particle physics, known as the Standard Model. That theory will now face its most severe test. Other theories go beyond this model to predict new forms of matter that explain the mysterious dark matter swaddling the cosmos and even new dimensions of space-time.
The guts of the collider are some 1,232 electromagnets, thick as tree trunks, long as boxcars, weighing in at 35 tons apiece, strung together like an endless train stretching around the gentle curve of the CERN tunnel.
In order to bend 7-trillion-electron-volt protons around in such a tight circle, these magnets, known as dipoles, have to produce magnetic fields of 8.36 tesla, more than 100,000 times the Earth’s field, requiring in turn a current of 13,000 amperes through the magnet’s coils. To make this possible the entire ring is bathed in 128 tons of liquid helium to keep it cooled to 1.9 degrees Kelvin, at which temperature the niobium-titanium cables are superconducting and pass the current without resistance.
Running through the core of this train, surrounded by magnets and cold, are two vacuum pipes, one for protons going clockwise, the other counterclockwise. Traveling in tight bunches along the twin beams, the protons will cross each other at four points around the ring, 30 million times a second. During each of these violent crossings, physicists expect that about 20 protons, or the parts thereof—quarks or gluons—will actually collide and spit fire. It is in vast caverns at those intersection points that the detectors, or “sunken cathedrals” in the words of a CERN theorist, Alvaro de Rujula, are placed to capture the holy fire.
The payoff for this investment, physicists say, could be a new understanding of one of the most fundamental aspects of reality, namely the nature of mass. This is where the shadowy particle known as the Higgs boson, a.k.a. the God particle, comes in.
In the Standard Model, a suite of equations describing all the forces but gravity, which has held sway as the
law of the cosmos for the last 35 years, elementary particles are born in the Big Bang without mass.
Some of the particles acquire their heft, so the story goes, by wading through a sort of molasses that pervades all of space. The Higgs process, named after Peter Higgs, the British physicist who first showed how this could work in 1964, has been compared to a cocktail party where particles gather their masses by interaction. The more they interact, the more mass they gain.
The Higgs idea is crucial to a theory that electromagnetism and the weak force are separate manifestations of a single so-called electroweak force. It shows how the massless bits of light called photons could be long-lost brothers to the heavy W and Z bosons, which would gain large masses from such cocktail party interactions as the universe cooled.
The confirmation of the theory by the Nobel-winning work at CERN 20 years ago ignited hopes among physicists that they could eventually unite the rest of the forces of nature.
Moreover, Higgs-like fields have been proposed as the source of an enormous burst of expansion, known as inflation, early in the universe, and, possibly, as the secret of the dark energy that now seems to be speeding up the expansion of the universe. So it is important to know whether the theory works and, if not, to find out what does endow the universe with mass.
But nobody has ever seen a Higgs boson, the particle that personifies this molasses. It should be producible in particle accelerators, but nature has given confusing clues about where to look for it. Measurements of other exotic particles suggest that the Higgs’s mass should be around 90 billion electron volts, the unit of choice in particle physics. But other results, from the LEP collider before it shut down in 2000, indicate that the Higgs must weigh more than 114 billion electron volts. By comparison, an electron is half a million electron volts, and a proton is about 2,000 times heavier.
The new collider was specifically designed to hunt for the Higgs particle, which is key to the Standard Model and to any greater theory that would supersede it. The Tevatron is also searching for the Higgs.
Theorists say the Higgs or something like it has to show up simply because the Standard Model breaks down and calculations using it go kerflooey at energies exceeding one trillion electron volts. If you try to predict what happens when two particles collide, it gives nonsense, explained Dr. Ellis.
Condensed Matter Nuclear and particle physics apply to what occurs within atoms and in isolated subatomic particles but do not explain the behavior of surface interactions, of clusters of small numbers of atoms or molecules, of complex molecular structures such as colloidal solutions or foams, or of electromagnetic phenomena in solids or liquids. Physicists have come to refer to the branch of the science that is concerned with the collective behavior of many particles as “condensed matter” physics. Today condensed-matter physics is one of the most active areas of the science.
Although scientific studies of magnetism and static electricity began in 1600, the first accurate theory of the cause of magnetism was that of the French physicist André-Marie Ampère (1775–1836) in 1825. Michael Faraday (1791–1867) recognized in 1845 that there are several magnetic effects, including diamagnetism (opposition to a magnetic field), paramagnetism (which disappears when a magnetic field is removed), and ferromagnetism (the familiar “permanent” magnetism that can be induced in iron and some other metals). Another major advance occurred in 1907 when the French physicist Pierre-Ernest Weiss (1865–1940) explained ferromagnetism as the effect produced when many small regions, called domains, become aligned by a magnetic field.
Early experimenters with static electricity observed that some substances—notably metals—conduct electricity and others are insulators. But not until 1900 did the German physicist Paul Drude (1863–1906) establish that in conductors some electrons are free to move away from their atoms, carrying negative charge with them. When quantum theory was developed, the German physicist Arnold Sommerfeld (1868–1951) worked out in detail the theory of how electrons behave in a conductor. But mysteries remained: superconductivity was not explained until 1957, and high-temperature superconductivity still lacks a satisfactory explanation.
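Drude’s free-electron picture yields a simple formula for conductivity, σ = ne²τ/m, where n is the density of free electrons and τ the average time between collisions. A sketch with rounded values for copper (the collision time is an illustrative modern estimate, not a figure Drude had):

```python
# Drude model of conduction: sigma = n * e**2 * tau / m, where n is the
# free-electron density and tau the mean time between electron collisions.

e = 1.602e-19      # electron charge, C
m = 9.11e-31       # electron mass, kg
n = 8.5e28         # free electrons per cubic meter in copper
tau = 2.5e-14      # illustrative mean time between collisions, s

sigma = n * e**2 * tau / m
print(f"estimated conductivity of copper: {sigma:.1e} S/m")  # ~6e7 S/m, near measured
```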
Understanding how conductors and insulators work led to a better understanding of semiconductors. This provided the background for the development in 1947 of the transistor and for subsequent applications of semiconductors, including some types of lasers and light-emitting diodes. Today condensed-matter physicists are applying the concept of spin to produce the highly effective disk drives in modern computers, and they look forward to using the electronics of spin, called spintronics, to develop devices that accomplish the tasks of transistors and their variants better and faster.
Physics and Other Disciplines Physics is a fundamental underpinning of most science other than the studies of human beings and some theories concerning living organisms, and sometimes physics becomes completely combined with parts of other sciences. Three notable examples are combinations of physics with astronomy, earth science, and biology.
Astrophysics is the study of stars, gas clouds, and other astronomical bodies, applying the laws of physics to their energy production, composition, and evolution. While a broad view of astrophysics would include virtually all of astronomy, the discipline was originally concerned primarily with energy production and the development of stars from gas clouds through several stages, such as red giants or white dwarfs, to concluding explosions as supernovas or collapse into burned-out cinders or black holes. In recent years, the evolution of the universe as a whole (cosmology) has become a central focus of many astrophysicists; cosmology includes the development of subatomic particles in the early universe and the possible roles of subatomic particles and physical forces in such concepts as dark matter or the unknown energy that is accelerating the expansion of the universe.
Geophysics is the study of the structure of Earth based on the application of physical laws to Earth’s shape, seismology, electromagnetic properties, oceans, and atmosphere. The methods of geophysics have revealed Earth’s layered structure, consisting of inner and outer cores, mantle, and crust, and have provided the theoretical basis of plate tectonics. In recent years, the definition of geophysics has been stretched to include the physical properties of planets other than Earth as well as of the satellites of planets.
Biophysics is the study of such physical processes as transport of materials in living organisms, growth of such organisms, and their structural stability in terms of the laws of physics. Of particular concern are transport of ions across cell membranes and the mechanisms of protein folding along with the physics of such imaging techniques as CT, MRI, and PET scans.
String Theory and Supersymmetry Although quantum theories of particle physics explain many phenomena and allow interactions to be calculated to a high degree of accuracy, some of the mathematics involved has been viewed as questionable. Positive and negative infinities are added in such a way that their difference nearly cancels, leaving a tiny amount that is exactly the amount measured by experiment. Also, physicists since Einstein have hoped to develop a unified theory that would include relativity and quantum mechanics as logical outgrowths. Several developments since 1970 have attempted to resolve the mathematics and unify the various theories. The first was string theory, which replaced the concept of particles with one-dimensional strings whose properties are mathematically tractable, but only in spaces with more than four dimensions. In 1974 this was joined with a theory that every particle has a partner—if one particle represents matter, then its partner represents force, and vice versa. This symmetry, called supersymmetry, called for a wide range of new particles that had not previously been observed. Two years later the recognition that certain strings behave like the graviton, a particle predicted by general relativity theory, led to combining relativity with string theory in a theory called supergravity. By 1984 string theory and supersymmetry had been combined to create superstring theory—strings instead of particles, very massive and unknown partners for every known string, and all in ten- or eleven-dimensional space. The dimensions beyond the three known dimensions of space and the one dimension of time are thought to be curled so tightly that they are too small to observe. In 1995 the American physicist Edward Witten (b. 1951) extended the symmetric theory of supergravity to a theory in which the fundamental entities are membranes in eleven-dimensional space. Variations on this concept, known as M theory or brane theory, remain the most popular candidates for the underlying reality of the universe, the so-called “theory of everything,” among today’s theoretical physicists, although these theories are hampered by the inability of experimenters to prove or disprove them.