Kenneth Snelson (b. 1927), Richard Buckminster “Bucky” Fuller (1895–1983)
The ancient Greek philosopher Heraclitus of Ephesus once wrote that the world is a “harmony of tensions.” One of the most intriguing embodiments of this philosophy is the tensegrity system, which inventor Buckminster Fuller described as “islands of compression in an ocean of tension.”
Imagine a structure composed only of rods and cables. The cables connect the ends of the rods to the ends of other rods. The rigid rods never touch one another, yet the structure is stable against the force of gravity. How can such a flimsy-looking assemblage persist?
The structural integrity of such structures is maintained by a balance of tension forces (e.g., pulling forces exerted by a wire) and compression forces (e.g., forces that tend to compress the rods). As an example of such forces, when we push the two ends of a dangling spring together, we compress it. When we pull apart the two ends, we create more tension in the spring.
In tensegrity systems, compression-bearing rigid struts tend to stretch (or tense) the tension-bearing cables, which in turn compress the struts. An increase in tension in one of the cables results in increased tension throughout the structure, balanced by an increase in compression in the struts. Overall, the forces acting in all directions in a tensegrity structure sum to a zero net force. If this were not the case, the structure might fly away (like an arrow shot from a bow) or collapse.
In 1948, artist Kenneth Snelson produced his kite-like “X-Piece” tensegrity structure. Later, Buckminster Fuller coined the term tensegrity for these kinds of structures. Fuller recognized that the strength and efficiency of his huge geodesic domes were based on a similar kind of structural stabilization that distributes and balances mechanical stresses in space.
To some extent, we are tensegrity systems in which bones are under compression and balanced by the tension-bearing tendons. The cytoskeleton of a microscopic animal cell also resembles a tensegrity system. Tensegrity structures actually mimic some behaviors observed in living cells.
SEE ALSO Truss (2500 B.C.), Arch (1850 B.C.), I-Beams (1844), Leaning Tower of Lire (1955).
A diagram from U.S. Patent 3,695,617, issued in 1972 to G. Mogilner and R. Johnson, for “Tensegrity Structure Puzzle.” Rigid columns are shown in dark green. One objective is to try to remove the inner sphere by sliding the columns.
Hendrik Brugt Gerhard Casimir (1909–2000), Evgeny Mikhailovich Lifshitz (1915–1985)
The Casimir effect often refers to a weird attractive force that appears between two uncharged parallel plates in a vacuum. One possible way to understand the Casimir effect is to imagine the nature of a vacuum in space according to quantum field theory. “Far from being empty,” write physicists Stephen Reucroft and John Swain, “modern physics assumes that a vacuum is full of fluctuating electromagnetic waves that can never be completely eliminated, like an ocean with waves that are always present and can never be stopped. These waves come in all possible wavelengths, and their presence implies that empty space contains a certain amount of energy” called zero-point energy.
If the two parallel plates are brought very close together (e.g. a few nanometers apart), longer waves will not fit between them, and the total amount of vacuum energy between the plates will be smaller than outside of the plates, thus causing the plates to attract each other. One may imagine the plates as prohibiting all of the fluctuations that do not “fit” into the space between the plates. This attraction was first predicted in 1948 by physicist Hendrik Casimir.
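For ideal, perfectly conducting plates, the predicted attraction takes a remarkably simple form: the pressure is P = π²ħc/(240a⁴) for a plate separation a. A minimal Python sketch of this textbook formula shows how steeply the force grows as the gap closes; real plates, with finite conductivity and surface roughness, deviate from these idealized numbers.

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant (J*s)
c = 2.99792458e8        # speed of light (m/s)

def casimir_pressure(gap_m):
    """Attractive pressure (Pa) between ideal, uncharged,
    perfectly conducting parallel plates a distance gap_m apart."""
    return math.pi**2 * hbar * c / (240 * gap_m**4)

# Halving the gap multiplies the pressure by 16 (a 1/a^4 law).
for gap_nm in (1000, 100, 10):
    p = casimir_pressure(gap_nm * 1e-9)
    print(f"gap = {gap_nm:4d} nm -> pressure ≈ {p:.2e} Pa")
```

At a 100-nanometer gap the sketch gives about 13 pascals; at 10 nanometers, roughly an atmosphere of pressure.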
Theoretical applications of the Casimir effect have been proposed, ranging from using its “negative energy density” for propping open traversable wormholes between different regions of space and time to its use for developing levitating devices—after physicist Evgeny Lifshitz theorized that the Casimir effect can give rise to repulsive forces. Researchers working on micromechanical or nanomechanical robotic devices may need to take the Casimir effect into account as they design tiny machines.
In quantum theory, the vacuum is actually a sea of ghostly virtual particles springing in and out of existence. From this viewpoint, one can understand the Casimir effect by realizing that fewer virtual photons exist between the plates because some wavelengths are forbidden. The excess pressure of photons outside the plates squeezes the plates together. Note that Casimir forces can also be interpreted using other approaches without reference to zero-point energy.
SEE ALSO Third Law of Thermodynamics (1905), Wormhole Time Machine (1988), Quantum Resurrection (100 Trillion).
The sphere shown in this scanning electron microscope image is slightly over one tenth of a millimeter in diameter and moves toward a smooth plate (not shown) due to the Casimir Effect. Research on the Casimir Effect helps scientists better predict the functioning of micro-mechanical machine parts. (Photo courtesy of Umar Mohideen.)
Albert Einstein (1879–1955), Kurt Gödel (1906–1978), Kip Stephen Thorne (b. 1940)
What is time? Is time travel possible? For centuries, these questions have intrigued philosophers and scientists. Today, we know for certain that time travel is possible. For example, scientists have demonstrated that objects traveling at high speeds age more slowly than a stationary object sitting in a laboratory frame of reference. If you could travel on a near light-speed rocket into outer space and return, you could travel thousands of years into the Earth’s future. Scientists have verified this time slowing or “dilation” effect in a number of ways. For example, in the 1970s, scientists used atomic clocks on airplanes to show that these clocks ran slightly slow with respect to clocks on the Earth. Time is also significantly slowed near regions of very large masses.
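The size of the effect follows from special relativity’s Lorentz factor, γ = 1/√(1 − v²/c²): a clock moving at speed v ticks γ times more slowly than a laboratory clock. A short illustrative sketch:

```python
import math

c = 2.99792458e8  # speed of light (m/s)

def lorentz_factor(v):
    """Gamma: one second on a clock moving at speed v spans
    gamma seconds in the stationary laboratory frame."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# A ten-year round trip as measured on Earth, at various rocket speeds:
for fraction in (0.5, 0.9, 0.99, 0.9999):
    gamma = lorentz_factor(fraction * c)
    print(f"v = {fraction}c: 10 Earth years ≈ {10 / gamma:.2f} years for the traveler")
```

At 99.99 percent of light speed, ten Earth years pass while the traveler ages less than two months.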
Although seemingly more difficult, numerous ways exist in which time machines for travel to the past can theoretically be built that do not seem to violate any known laws of physics. Most of these methods rely on high gravities or on wormholes (hypothetical “shortcuts” through space and time). To Isaac Newton, time was like a river flowing straight. Nothing could deflect the river. Einstein showed that the river could curve, although it could never circle back on itself, which would be a metaphor for backwards time travel. In 1949, mathematician Kurt Gödel went even further and showed that the river could circle back on itself. In particular, he found a disturbing solution to Einstein’s equations that allows backward time travel in a rotating universe. For the first time in history, backward time travel had been given a mathematical foundation!
Throughout history, physicists have found that if phenomena are not expressly forbidden, they are often eventually found to occur. Today, designs for time travel machines are proliferating in top science labs and include such wild concepts as Thorne Wormhole Time Machines, Gott loops that involve cosmic strings, Gott shells, Tipler and van Stockum cylinders, and Kerr Rings. In the next few hundred years, perhaps our heirs will explore space and time to degrees we cannot currently fathom.
SEE ALSO Special Theory of Relativity (1905), General Theory of Relativity (1915), Tachyons (1967), Wormhole Time Machine (1988), Chronology Protection Conjecture (1992).
If time is like space, might the past, in some sense, still exist “back there” as surely as your home still exists even after you have left it? If you could travel back in time, which genius of the past would you visit?
Willard Frank Libby (1908–1980)
“If you were interested in finding out the age of things, the University of Chicago in the 1940s was the place to be,” writes author Bill Bryson. “Willard Libby was in the process of inventing radiocarbon dating, allowing scientists to get an accurate reading of the age of bones and other organic remains, something they had never been able to do before….”
Radiocarbon dating involves measuring the abundance of the radioactive isotope carbon-14 (14C) in a carbon-containing sample. The method relies on the fact that 14C is created in the atmosphere when cosmic rays strike nitrogen atoms. The 14C is then incorporated into plants, which animals subsequently eat. While an animal is alive, the abundance of 14C in its body roughly matches the atmospheric abundance. 14C continually decays at a known exponential rate, converting to nitrogen-14, and once the animal dies and no longer replenishes its 14C supply from the environment, the animal’s remains slowly lose 14C. By detecting the amount of 14C in a sample, scientists can estimate its age if the sample is not older than about 60,000 years. Older samples generally contain too little 14C to measure accurately. 14C has a half-life of about 5,730 years, which means that every 5,730 years, the amount of 14C in a sample drops by half. Because the amount of atmospheric 14C undergoes slight variations through time, small calibrations are made to improve the accuracy of the dating. Also, atmospheric 14C increased during the 1950s due to atomic bomb tests. Accelerator Mass Spectrometry can be used to detect 14C abundances in milligram samples.
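Because the decay is exponential, the age follows directly from the measured fraction of 14C remaining: t = −5,730 × log₂(fraction). A minimal sketch, ignoring the atmospheric calibrations mentioned above:

```python
import math

HALF_LIFE = 5730.0  # half-life of carbon-14, in years

def age_from_c14(fraction_remaining):
    """Estimate a sample's age (years) from the ratio of its 14C
    abundance to the living, atmospheric abundance."""
    return -HALF_LIFE * math.log2(fraction_remaining)

print(age_from_c14(0.5))    # one half-life:  5730 years
print(age_from_c14(0.25))   # two half-lives: 11460 years
print(age_from_c14(0.001))  # ~57,000 years, near the practical limit
```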
Before radiocarbon dating, it was very difficult to obtain reliable dates before the First Dynasty in Egypt, around 3000 B.C. This was quite frustrating for archeologists who were eager to know, for example, when Cro-Magnon people painted the caves of Lascaux in France or when the last Ice Age finally ended.
SEE ALSO Olmec Compass (1000 B.C.), Hourglass (1338), Radioactivity (1896), Mass Spectrometer (1898), Atomic Clocks (1955).
Because carbon is very common, numerous kinds of materials are potentially useable for radiocarbon investigations, including ancient skeletons found during archeological digs, charcoal, leather, wood, pollen, antlers, and much more.
Enrico Fermi (1901-1954), Frank Drake (b. 1930)
During the Renaissance, rediscovered ancient texts and new knowledge flooded medieval Europe with the light of intellectual transformation, wonder, creativity, exploration, and experimentation. Imagine the consequences of making contact with an alien race. Another, far more profound Renaissance would be fueled by the wealth of alien scientific, technical, and sociological information. Given that our universe is both ancient and vast—there are an estimated 250 billion stars in our Milky Way galaxy alone—the physicist Enrico Fermi asked in 1950, “Why have we not yet been contacted by an extraterrestrial civilization?” Of course, many answers are possible. Advanced alien life could exist, but we may be unaware of its presence. Alternatively, intelligent aliens may be so rare in the universe that we may never make contact with them. The Fermi Paradox, as it is known today, has given rise to scholarly works attempting to address the question in fields ranging from physics and astronomy to biology.
In 1960, astronomer Frank Drake suggested a formula to estimate the number of extraterrestrial civilizations in our galaxy with whom we might come into contact:

N = R* × fp × ne × fl × fi × fc × L
Here, N is the number of alien civilizations in the Milky Way with which communication might be possible; for example, alien technologies may produce detectable radio waves. R* is the average rate of star formation per year in our galaxy. fp is the fraction of those stars that have planets (hundreds of extrasolar planets have been detected). ne is the average number of “Earth-like” planets that can potentially support life per star that has planets. fl is the fraction of these ne planets that actually develop life. fi is the fraction of those life-bearing planets that go on to produce intelligent life. The variable fc represents the fraction of civilizations that develop a technology that releases detectable signs of their existence into outer space. L is the length of time such civilizations release signals into space that we can detect. Because many of the parameters are very difficult to determine, the equation serves more to focus attention on the intricacies of the paradox than to resolve it.
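Evaluating the equation is trivial; choosing its inputs is not. In the sketch below, every parameter value is an illustrative guess, not a measurement, and swapping in different guesses changes N by orders of magnitude:

```python
# Drake equation with purely illustrative parameter values.
R_star = 7.0    # star formation rate in the Milky Way (stars per year)
f_p    = 0.5    # fraction of stars with planets
n_e    = 2.0    # habitable planets per planet-bearing star
f_l    = 0.33   # fraction of habitable planets where life arises
f_i    = 0.01   # fraction of life-bearing planets evolving intelligence
f_c    = 0.01   # fraction of intelligent species releasing detectable signals
L      = 10_000.0  # years such signals remain detectable

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"Estimated communicating civilizations: N ≈ {N:.1f}")  # ≈ 2.3
```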
SEE ALSO Tsiolkovsky Rocket Equation (1903), Time Travel (1949), Dyson Sphere (1960), Anthropic Principle (1961), Living in a Simulation (1967), Chronology Protection Conjecture (1992), Cosmic Isolation (100 Billion).
Given that our universe is both ancient and vast, the physicist Enrico Fermi asked in 1950, “Why have we not yet been contacted by an extraterrestrial civilization?”
Alexandre-Edmond Becquerel (1820–1891), Calvin Souther Fuller (1902–1994)
In 1973, British chemist George Porter said, “I have no doubt that we will be successful in harnessing the sun’s energy…. If sunbeams were weapons of war, we would have had solar energy centuries ago.” Indeed, the quest to efficiently create energy from sunshine has had a long history. Back in 1839, nineteen-year-old French physicist Edmond Becquerel discovered the photovoltaic effect in which certain materials produce small amounts of electric current when exposed to light. However, the most important breakthrough in solar power technology did not take place until 1954 when three scientists from Bell Laboratories—Daryl Chapin, Calvin Fuller, and Gerald Pearson—invented the first practical silicon solar cell for converting sunlight into electrical power. Its efficiency was only around 6% in direct sunlight, but today efficiencies of advanced solar cells can exceed 40%.
You may have seen solar panels on the roofs of buildings or used to power warning signs on highways. Such panels contain solar cells, commonly composed of two layers of silicon. The cell also has an antireflective coating to increase the absorption of sunlight. To ensure that the solar cell creates a useful electrical current, small amounts of phosphorus are added to the top silicon layer, and boron is added to the bottom layer. These additions cause the top layer to contain more electrons and the bottom layer to have fewer electrons. When the two layers are joined, electrons in the top layer move into the bottom layer very close to the junction between layers, thus creating an electric field at the junction. When photons of sunlight hit the cell, they knock loose electrons in both layers. The electric field pushes electrons that have reached the junction toward the top layer. This “push” or “force” can be used to move electrons out of the cell into attached metal conductor strips in order to generate electricity. To power a home, this direct-current electricity is converted to alternating current by a device called an inverter.
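The bottom line for a panel is simple arithmetic: output power is roughly irradiance times area times efficiency. A back-of-envelope sketch, assuming a typical 1.6-square-meter panel and the efficiencies quoted above:

```python
IRRADIANCE = 1000.0  # W/m^2, typical direct sunlight at the surface

def panel_watts(area_m2, efficiency):
    """DC output of a solar panel: irradiance * area * efficiency."""
    return IRRADIANCE * area_m2 * efficiency

print(panel_watts(1.6, 0.06))  # 1954-style 6% cell:  ~96 W
print(panel_watts(1.6, 0.40))  # advanced 40% cell:  ~640 W
```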
Solar panels used to power the equipment of a vineyard.
SEE ALSO Archimedes’ Burning Mirrors (212 B.C.), Battery (1800), Fuel Cell (1839), Photoelectric Effect (1905), Energy from the Nucleus (1942), Tokamak (1956), Dyson Sphere (1960).
Solar panels on the roof of a home.
Martin Gardner (1914-2010)
One day while walking through a library, you notice a stack of books leaning over the edge of a table. You wonder: would it be possible to stagger a stack of many books so that the top book juts far out into the room—say, five feet—while the bottom book still rests on the table? Or would such a stack fall under its own weight? For simplicity, the books are assumed to be identical, and you are only allowed to have one book at each level of the stack; in other words, each book rests on at most one other book.
The problem has puzzled physicists since at least the early 1800s, and in 1955 it was referred to as the Leaning Tower of Lire in the American Journal of Physics. The problem received extra attention in 1964 when Martin Gardner discussed it in Scientific American.
The stack of n books will not fall if the stack has a center of mass that is still above the table. In other words, the center of mass of all books above any book B must lie on a vertical axis that “cuts” through B. Amazingly, there is no limit to how far you can make the stack jut out beyond the table’s edge. Martin Gardner referred to this arbitrarily large overhang as the infinite-offset paradox. For an overhang of just 3 book lengths, you’d need a walloping 227 books! For an overhang of 10 book lengths, you’d need 272,400,600 books. And for an overhang of 50 book lengths, you’d need more than 1.5 × 10⁴³ books. The maximum overhang attainable with n books, in book lengths, is 0.5 × (1 + 1/2 + 1/3 + … + 1/n). This harmonic series diverges very slowly; thus, each modest increase in overhang requires many more books. Additional fascinating work has since been conducted on this problem after removing the constraint of having only one book at each level of the stack.
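A few lines of Python reproduce these counts by accumulating the harmonic series until the desired overhang is reached:

```python
def books_needed(overhang_in_book_lengths):
    """Smallest n with 0.5 * (1 + 1/2 + ... + 1/n) >= the target overhang."""
    harmonic, n = 0.0, 0
    while 0.5 * harmonic < overhang_in_book_lengths:
        n += 1
        harmonic += 1.0 / n
    return n

print(books_needed(1))  # 4 books to clear one full book length
print(books_needed(2))  # 31 books
print(books_needed(3))  # 227 books, as noted above
```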
SEE ALSO Truss (2500 B.C.), Arch (1850 B.C.), Tensegrity (1948).
Would it be possible to stagger a stack of many books so that the top book would be many feet into the room, with the bottom book still resting on the table? Or would such a stack fall under its own weight?
Max Knoll (1897–1969), Ernst August Friedrich Ruska (1906–1988), Erwin Wilhelm Müller (1911–1977), Albert Victor Crewe (1927–2009)
“Dr. Crewe’s research opened a new window into the Lilliputian world of the fundamental building blocks of nature,” writes journalist John Markoff, “giving [us] a powerful new tool to understand the architecture of everything from living tissue to metal alloys.”
The world had never “seen” an atom with an electron microscope before University of Chicago professor Albert Crewe built the first successful scanning transmission electron microscope, or STEM. Although the concept of an elementary particle had been proposed in the fifth century B.C. by the Greek philosopher Democritus, atoms were far too small to be visualized using optical microscopes. In 1970, Crewe published his landmark paper titled “Visibility of Single Atoms” in the journal Science, which presented photographic evidence of atoms of uranium and thorium.
“After attending a conference in England and forgetting to buy a book at the airport for the flight home, he pulled out a pad of paper on the plane and sketched two ways to improve existing microscopes,” writes Markoff. Later, Crewe designed an improved source of electrons (a field emission gun) for scanning the specimen.
Electron microscopes employ a beam of electrons to illuminate a specimen. In the transmission electron microscope, invented around 1933 by Max Knoll and Ernst Ruska, electrons pass through a thin sample and are then focused by a magnetic lens produced by a current-carrying coil. A scanning electron microscope employs electric and magnetic lenses before the sample, allowing electrons to be focused onto a small spot that is then scanned across the surface. The STEM is a hybrid of both approaches.
In 1955, physicist Erwin Müller used a field ion microscope to visualize atoms. The device used a large electric field, applied to a sharp metal tip in a gas. The gas atoms arriving at the tip are ionized and detected. Physicist Peter Nellist writes, “Because this process is more likely to occur at certain places on the surface of the tip, such as at steps in the atomic structure, the resulting image represents the underlying atomic structure of the sample.”
SEE ALSO Von Guericke’s Electrostatic Generator (1660), Micrographia (1665), Atomic Theory (1808), Bragg’s Law of Crystal Diffraction (1912), Quantum Tunneling (1928), Nuclear Magnetic Resonance (1938).
A field ion microscope (FIM) image of a very sharp tungsten needle. The small roundish features are individual atoms. Some of the elongated features are caused by atoms moving during the imaging process (approximately 1 second).
Louis Essen (1908–1997)
Clocks have become more accurate through the centuries. Early mechanical clocks, such as the fourteenth-century Dover Castle clock, varied by several minutes each day. When pendulum clocks came into general use in the 1600s, clocks became accurate enough to record minutes as well as hours. In the 1900s, vibrating quartz crystals were accurate to fractions of a second per day. In the 1980s, cesium atom clocks lost less than a second in 3,000 years, and, in 2009, an atomic clock known as NIST-F1—a cesium fountain atomic clock—was accurate to a second in 60 million years!
Atomic clocks are accurate because they involve the counting of periodic events involving two different energy states of an atom. Atoms of a given isotope (atoms with the same numbers of protons and neutrons) are identical everywhere; thus, clocks can be built and run independently yet measure the same time intervals between events. One common type of atomic clock is the cesium clock, in which a microwave frequency is found that causes the atoms to make a transition from one energy state to another. The cesium atoms begin to fluoresce at a natural resonance frequency of the cesium atom (9,192,631,770 Hz, or cycles per second), which is the frequency used to define the second. Measurements from many cesium clocks throughout the world are combined and averaged to define an international time scale.
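Defining the second this way turns timekeeping into counting: exactly 9,192,631,770 cycles marks off one second. A small sketch of the bookkeeping, including the “one second in 60 million years” figure expressed as a fractional error:

```python
CS_HZ = 9_192_631_770  # cesium-133 hyperfine transition frequency (Hz)

# By definition, counting exactly CS_HZ cycles marks off one second.
cycles_counted = 551_557_906_200
print(f"elapsed time = {cycles_counted / CS_HZ} s")  # 60.0 s

# "One second in 60 million years" as a fractional accuracy:
seconds_per_year = 365.25 * 24 * 3600
print(f"NIST-F1 fractional accuracy ≈ {1 / (60e6 * seconds_per_year):.1e}")  # ~5e-16
```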
One important use of atomic clocks is exemplified by the GPS (global positioning system). This satellite-based system enables users to determine their positions on the ground. To ensure accuracy, the satellites must send out accurately timed radio pulses, which receiving devices use to determine their positions.
English physicist Louis Essen created the first accurate atomic clock in 1955, based on energy transitions of the cesium atom. Clocks based on other atoms and methods are continually being researched in labs worldwide in order to increase accuracy and decrease cost.
SEE ALSO Hourglass (1338), Anniversary Clock (1841), Stokes’ Fluorescence (1852), Time Travel (1949), Radiocarbon Dating (1949).
In 2004, scientists at the National Institute of Standards and Technology (NIST) demonstrated a tiny atomic clock, the inner workings of which were about the size of a grain of rice. The clock included a laser and a cell containing a vapor of cesium atoms.
Hugh Everett III (1930–1982), Max Tegmark (b. 1967)
A number of prominent physicists now suggest that universes exist that are parallel to ours and that might be visualized as layers in a cake, bubbles in a milkshake, or buds on an infinitely branching tree. In some theories of parallel universes, we might actually detect these universes by gravity leaks from one universe to an adjacent universe. For example, light from distant stars may be distorted by the gravity of invisible objects residing in parallel universes only millimeters away. The entire idea of multiple universes is not as far-fetched as it may sound. According to a poll of 72 leading physicists conducted by the American researcher David Raub and published in 1998, 58% of physicists (including Stephen Hawking) believe in some form of multiple universes theory.
Many flavors of parallel-universe theory exist. For example, Hugh Everett III’s 1956 doctoral thesis “The Theory of the Universal Wavefunction” outlines a theory in which the universe continually “branches” into countless parallel worlds. This theory is called the many-worlds interpretation of quantum mechanics and posits that whenever the universe (“world”) is confronted by a choice of paths at the quantum level, it actually follows the various possibilities. If the theory is true, then all kinds of strange worlds may “exist” in some sense. In a number of worlds, Hitler won World War II. Sometimes, the term “multiverse” is used to suggest the idea that the universe that we can readily observe is only part of the reality that comprises the multiverse, the set of possible universes.
If our universe is infinite, then identical copies of our visible universe may exist, with an exact copy of our Earth and of you. According to physicist Max Tegmark, on average, the nearest of these identical copies of our visible universe is about 10 to the 10¹⁰⁰ meters away. Not only are there infinite copies of you, there are infinite copies of variants of you. Chaotic Cosmic Inflation theory also suggests the creation of different universes—with perhaps countless copies of you existing but altered in fantastically beautiful and ugly ways.
SEE ALSO Wave Nature of Light (1801), Schrödinger’s Cat (1935), Anthropic Principle (1961), Living in a Simulation (1967), Cosmic Inflation (1980), Quantum Computers (1981), Quantum Immortality (1987), Chronology Protection Conjecture (1992).
Some interpretations of quantum mechanics posit that whenever the universe is confronted by a choice of paths at the quantum level, it actually follows the various possibilities. Multiverse implies that our observable universe is part of a reality that includes other universes.
Wolfgang Ernst Pauli (1900–1958), Frederick Reines (1918–1998), Clyde Lorrain Cowan, Jr. (1919–1974)
In 1993, physicist Leon Lederman wrote, “Neutrinos are my favorite particles. A neutrino has almost no properties: no mass (or very little), no electric charge … and, adding insult to injury, no strong force acts on it. The euphemism used to describe a neutrino is ‘elusive.’ It is barely a fact, and it can pass through millions of miles of solid lead with only a tiny chance of being involved in a measurable collision.”
In 1930, physicist Wolfgang Pauli predicted the essential properties of the neutrino (no charge, very little mass) to explain the loss of energy during certain forms of radioactive decay. He suggested that the missing energy might be carried away by ghostlike particles that escaped detection. Neutrinos were first detected in 1956 by physicists Frederick Reines and Clyde Cowan in their experiments at a nuclear reactor in South Carolina.
Each second, over 100 billion neutrinos from the Sun pass through every square inch (6.5 square centimeters) of our bodies, and virtually none of them interact with us. According to the Standard Model of particle physics, neutrinos have no mass; however, in 1998, the subterranean Super-Kamiokande neutrino detector in Japan was used to determine that they actually have a minuscule mass. The detector used a large volume of water surrounded by detectors for Cherenkov Radiation emitted from neutrino collisions. Because neutrinos interact so weakly with matter, neutrino detectors must be huge to increase the chances of detection. The detectors also reside beneath the Earth’s surface to shield them from other forms of background radiation such as Cosmic Rays.
Today, we know that there are three types, or flavors, of neutrinos and that neutrinos are able to oscillate among the three flavors as they travel through space. For years, scientists wondered why they detected so few of the expected neutrinos from the Sun’s energy-producing fusion reactions. However, the solar neutrino flux only appears to be low because the other neutrino flavors are not easily observed by some neutrino detectors.
SEE ALSO Radioactivity (1896), Cherenkov Radiation (1934), Standard Model (1961), Quarks (1964).
The Fermi National Accelerator Laboratory, near Chicago, uses protons from an accelerator to produce an intense beam of neutrinos that allow physicists to observe neutrino oscillations at a distant detector. Shown here are “horns” that help focus the particles that decay and produce neutrinos.
Igor Yevgenyevich Tamm (1895-1971), Lev Andreevich Artsimovich (1909-1973), Andrei Dmitrievich Sakharov (1921-1989)
Fusion reactions in the Sun bathe the Earth in light and energy. Can we learn to safely generate energy by fusion here on Earth to provide more direct power for human needs? In the Sun, four hydrogen nuclei (four protons) are fused into a helium nucleus. The resulting helium nucleus is less massive than the hydrogen nuclei that combine to make it, and the missing mass is converted to energy according to Einstein’s E = mc². The huge pressures and temperatures needed for the Sun’s fusion are aided by its crushing gravity.
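The mass bookkeeping can be checked directly. The rough sketch below compares four proton masses with one helium-4 nucleus and converts the difference with E = mc²; it deliberately ignores the intermediate steps (positrons and neutrinos), so the roughly 26 MeV it prints is approximate:

```python
c = 2.99792458e8            # speed of light (m/s)
m_proton = 1.67262192e-27   # proton mass (kg)
m_helium4 = 6.64465775e-27  # helium-4 nucleus mass (kg)

mass_in = 4 * m_proton
delta_m = mass_in - m_helium4   # about 0.7% of the input mass vanishes...
energy_J = delta_m * c**2       # ...reappearing as energy via E = mc^2

print(f"fraction of mass converted: {delta_m / mass_in:.4f}")
print(f"energy per helium nucleus:  {energy_J:.2e} J (~{energy_J / 1.602e-13:.0f} MeV)")
```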
Scientists wish to create nuclear fusion reactions on Earth by generating sufficiently high temperatures and densities so that gases consisting of hydrogen isotopes (deuterium and tritium) become a plasma of free-floating nuclei and electrons—and then the resultant nuclei may fuse to produce helium and neutrons with the release of energy. Unfortunately, no material container can withstand the extremely high temperatures needed for fusion. One possible solution is a device called a tokamak, which employs a complex system of magnetic fields to confine and squeeze the plasmas within a hollow, doughnut-shaped container. This hot plasma may be created by magnetic compression, microwaves, electricity, and neutral particle beams from accelerators. The plasma then circulates around the tokamak without touching its walls. Today, the world’s largest tokamak is ITER, under construction in France.
Researchers continue to perfect their tokamaks with the goal of creating a system that generates more energy than is required for the system’s operation. If such a tokamak can be built, it would have many benefits. First, the small amounts of fuel required are easy to obtain. Second, fusion does not have the high-level radioactive waste problems of current fission reactors, in which the nucleus of an atom, such as uranium, splits into smaller parts with a large release of energy.
The tokamak was invented in the 1950s by Soviet physicist Igor Yevgenyevich Tamm and Andrei Sakharov, and perfected by Lev Artsimovich. Today, scientists also study the possible use of inertial confinement of the hot plasma using laser beams.
SEE ALSO Plasma (1879), E = mc² (1905), Energy from the Nucleus (1942), Stellar Nucleosynthesis (1946), Solar Cells (1954), Dyson Sphere (1960).
A photo of the National Spherical Torus Experiment (NSTX), an innovative magnetic fusion device, based on a spherical tokamak concept. NSTX was constructed by the Princeton Plasma Physics Laboratory (PPPL) in collaboration with the Oak Ridge National Laboratory, Columbia University, and the University of Washington at Seattle.
Jack St. Clair Kilby (1923–2005), Robert Norton Noyce (1927–1990)
“It seems that the integrated circuit was destined to be invented,” writes technology historian Mary Bellis. “Two separate inventors, unaware of each other’s activities, invented almost identical integrated circuits, or ICs, at nearly the same time.”
In electronics, an IC, or microchip, is a miniaturized electronic circuit that relies upon semiconductor devices and is used today in countless examples of electronic equipment, ranging from coffeemakers to fighter jets. The conductivity of a semiconductor material can be controlled by introduction of an electric field. With the invention of the monolithic IC (formed from a single crystal), the traditionally separate transistors, resistors, capacitors, and all wires could now be placed on a single crystal (or chip) made of semiconductor material. Compared with the manual assembly of discrete circuits of individual components, such as resistors and transistors, an IC can be made more efficiently using the process of photolithography, which involves selectively transferring geometric shapes on a mask to the surface of a material such as a silicon wafer. The speed of operations is also higher in ICs because the components are small and tightly packed.
Physicist Jack Kilby invented the IC in 1958. Working independently, physicist Robert Noyce invented the IC six months later. Noyce used silicon for the semiconductor material, and Kilby used germanium. Today, a postage-stamp-sized chip can contain over a billion transistors. The advances in capability and density—and decrease in price—led technologist Gordon Moore to say, “If the auto industry advanced as rapidly as the semiconductor industry, a Rolls Royce would get a half a million miles per gallon, and it would be cheaper to throw it away than to park it.”
Kilby invented the IC as a new employee at Texas Instruments during the company’s late-July vacation time when the halls of his employer were deserted. By September, Kilby had built a working model, and on February 6, 1959, Texas Instruments filed a patent.
SEE ALSO Kirchhoff’s Circuit Laws (1845), Cosmic Rays (1910), Transistor (1947), Quantum Computers (1981).
The exterior packaging of a microchip (e.g., the large rectangular shape at left) houses the integrated circuit, which contains tiny components such as transistors. The housing protects the much smaller integrated circuit and provides a means of connecting the chip to a circuit board.
John Frederick William Herschel (1792–1871), William Alison Anders (b. 1933)
Due to the particular gravitational forces between the Moon and the Earth, the Moon takes just as long to rotate around its own axis as it does to revolve around the Earth; thus, the same side of the Moon always faces the Earth. The “dark side of the moon” is the phrase commonly used for the far side of the Moon that can never be seen from the Earth. In 1870, the famous astronomer Sir John Herschel wrote that the Moon’s far side might contain an ocean of ordinary water. Later, flying saucer buffs speculated that the far side could be harboring a hidden base for extraterrestrials. What secrets did it actually hold?
Finally, in 1959, we had our first glimpse of the far side when it was photographed by the Soviet Luna 3 probe. The first atlas of the far side of the Moon was published by the USSR Academy of Sciences in 1960. Physicists have suggested that the far side might be used for a large radio telescope that is shielded from terrestrial radio interference.
The far side is actually not always dark, and both the side that faces us and the far side receive similar amounts of sunlight. Curiously, the near and far sides have vastly different appearances. In particular, the side toward us contains many large “maria” (relatively smooth areas that looked like seas to ancient astronomers). In contrast, the far side has a blasted appearance with more craters. One reason for this disparity arises from the increased volcanic activity around three billion years ago on the near side, which created the relatively smooth basaltic lavas of the maria. The far side crust may be thicker and thus was able to contain the interior molten material. Scientists still debate the possible causes.
In 1968, humans finally gazed directly on the far side of the moon during America’s Apollo 8 mission. Astronaut William Anders, who traveled to the moon, described the view: “The backside looks like a sand pile my kids have played in for some time. It’s all beat up, no definition, just a lot of bumps and holes.”
SEE ALSO Telescope (1608), Discovery of Saturn’s Rings (1610).
The far side of the moon, with its strangely rough and battered surface, looks very different than the surface facing the Earth. This view was photographed in 1972 by Apollo 16 astronauts while in lunar orbit.
William Olaf Stapledon (1886–1950), Freeman John Dyson (b. 1923)
In 1937, British philosopher and author Olaf Stapledon described an immense artificial structure in his novel Star Maker: “As the eons advanced … many a star without natural planets came to be surrounded by concentric rings of artificial worlds. In some cases the inner rings contained scores, the outer rings thousands of globes adapted to life at some particular distance from the Sun.”
Stimulated by Star Maker, in 1960, physicist Freeman Dyson published a technical paper in the prestigious journal Science on a hypothetical spherical shell that might encompass a star and capture a large percentage of its energy. As technological civilizations advance, such structures would be desirable in order to meet their vast energy needs. Dyson actually had in mind a swarm of artificial objects orbiting a star, but science-fiction authors, physicists, teachers, and students have ever since wondered about the possible properties of a rigid shell with a star at its center, with aliens potentially inhabiting the inner surface of the sphere.
In one embodiment, the Dyson Sphere would have a radius equal to the distance of the Earth to the Sun and would thus have a surface area 550 million times the Earth’s surface area. Interestingly, the central star would exert no net gravitational attraction on the shell, which might dangerously drift relative to the star unless adjustments were made to the shell’s position. Similarly, any creatures or objects on the inner surface of the sphere would not be gravitationally attracted to the sphere. In a related concept, creatures could still reside on a planet, and the shell could be used to capture energy from the star. Dyson had originally estimated that sufficient planetary and other material existed in the solar system to create such a sphere with a 3-meter-thick shell. Dyson also speculated that Earthlings might be able to detect a far-away Dyson Sphere because it would absorb starlight and re-radiate energy in readily definable ways. Researchers have already attempted to find possible evidence of such constructs by searching for their infrared signals.
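The 550-million figure is straightforward geometry, as a quick sketch confirms:

```python
import math

AU = 1.495978707e11  # Earth-Sun distance (m)
R_EARTH = 6.371e6    # mean Earth radius (m)

sphere_area = 4 * math.pi * AU**2
earth_area = 4 * math.pi * R_EARTH**2
print(f"area ratio ≈ {sphere_area / earth_area:.3g}")  # ~5.5e8, i.e., 550 million

# Material budget for Dyson's 3-meter-thick shell:
print(f"shell volume ≈ {sphere_area * 3.0:.3g} m^3")
```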
SEE ALSO Measuring the Solar System (1672), Fermi Paradox (1950), Solar Cells (1954), Tokamak (1956).
Artistic depiction of a Dyson sphere that might encompass a star and capture a large percentage of its energy. The lightning effects depicted here represent the capture of energy at the inner surface of the sphere.
Charles Hard Townes (b. 1915), Theodore Harold “Ted” Maiman (1927–2007)
“Laser technology has become important in a wide range of practical applications,” writes laser expert Jeff Hecht, “ranging from medicine and consumer electronics to telecommunications and military technology. Lasers are also vital tools on the cutting edge of research—18 recipients of the Nobel Prize received the award for laser-related research, including the laser itself, holography, laser cooling, and Bose-Einstein Condensates.”
The word laser stands for light amplification by stimulated emission of radiation, and lasers make use of a subatomic process known as stimulated emission, first considered by Albert Einstein in 1917. In stimulated emission, a photon (a particle of light) of the appropriate energy causes an electron to drop to a lower energy level, which results in the creation of another photon. This second photon is said to be coherent with the first and has the same phase, frequency, polarization, and direction of travel as the first photon. If the photons are reflected so that they repeatedly traverse the same atoms, an amplification can take place and an intense radiation beam is emitted. Lasers can be created so that they emit electromagnetic radiation of various kinds, and thus there are X-ray lasers, ultraviolet lasers, infrared lasers, and so forth. The resultant beam may be highly collimated—NASA scientists have bounced laser beams generated on the Earth off reflectors left on the Moon by astronauts. At the Moon’s surface, this beam is a mile or two wide (about 2.5 kilometers), which is actually a rather small spread when compared to ordinary light from a flashlight!
In 1953, physicist Charles Townes and his students produced the first maser (the microwave predecessor of the laser), but it was not capable of continuous emission. Theodore Maiman created the first practical working laser in 1960, using pulsed operation. Today, the largest applications of lasers include DVD and CD players, fiber-optic communications, bar-code readers, and laser printers. Other uses include bloodless surgery and target-marking for weapons use. Research continues in the use of lasers capable of destroying tanks or airplanes.
SEE ALSO Brewster’s Optics (1815), Hologram (1947), Bose-Einstein Condensate (1995).
An optical engineer studies the interaction of several lasers that will be used aboard a laser weapons system being developed to defend against ballistic missile attacks. The U.S. Directed Energy Directorate conducts research into beam-control technologies.
Joseph William Kittinger II (b. 1928)
Perhaps many of you have heard the gruesome legend of the killer penny. If you were to drop a penny from the Empire State Building in New York City, the penny would gain speed and could kill someone walking on the street below, penetrating the brain.
Fortunately for pedestrians below, the physics of terminal velocity saves them from this ghastly demise. The penny falls about 500 feet (152 meters) before reaching its maximum velocity of about 50 miles per hour (80 kilometers/hour); a bullet travels at ten times this speed. Updrafts also tend to slow the penny, and because the coin is not shaped like a bullet, the odds are that the tumbling penny would barely break the skin. The penny is not likely to kill anyone, thus debunking the legend.
As an object moves through a medium (such as air or water), it encounters a resistive force that slows it down. For objects in free fall through air, the force depends on the square of the speed, the area of the object, and the density of the air. The faster the object falls, the greater the opposing force becomes. As the penny accelerates downward, the resistive force eventually grows to such an extent that the penny falls at a constant speed, known as the terminal velocity. This occurs when the drag force on the object is equal to the force of gravity.
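Setting the quadratic drag force, 0.5ρCdAv², equal to the weight mg gives the terminal velocity v = √(2mg/(ρCdA)). The sketch below uses assumed drag coefficients and areas, chosen to roughly reproduce the figures quoted in this entry:

```python
import math

RHO_AIR = 1.225  # air density at sea level (kg/m^3)
G = 9.81         # gravitational acceleration (m/s^2)

def terminal_velocity(mass_kg, drag_coeff, area_m2):
    """Speed at which drag, 0.5*rho*Cd*A*v^2, balances the weight m*g."""
    return math.sqrt(2 * mass_kg * G / (RHO_AIR * drag_coeff * area_m2))

# Assumed, illustrative parameters for a tumbling penny and a spread-eagled diver:
print(f"penny:    {terminal_velocity(0.0025, 0.3, 2.8e-4):.0f} m/s")  # ~22 m/s (~50 mph)
print(f"skydiver: {terminal_velocity(80.0, 1.0, 0.45):.0f} m/s")      # ~53 m/s (~120 mph)
```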
Skydivers reach a terminal velocity of about 120 miles per hour (190 kilometers/ hour) if they spread their feet and arms. If they assume a heads-down diving position, they reach a velocity of about 150 miles per hour (240 kilometers/hour).
The highest terminal velocity ever to have been reached by a human in free fall was achieved in 1960 by U.S. military officer Joseph Kittinger II, who is estimated to have reached 614 miles per hour (988 kilometers/hour) due to the high altitude (and hence lower air density) of his jump from a balloon. His jump started at 102,800 feet (31,300 meters), and he opened his parachute at 18,000 feet (5,500 meters).
SEE ALSO Acceleration of Falling Objects (1638), Escape Velocity (1728), Bernoulli’s Law of Fluid Dynamics (1738), Baseball Curveball (1870), Super Ball (1965).
Skydivers reach a terminal velocity of about 120 miles per hour (190 kilometers/hour) if they spread their feet and arms.
Robert Henry Dicke (1916–1997), Brandon Carter (b. 1942)
“As our knowledge of the cosmos has increased,” writes physicist James Trefil, “… it has become apparent that, had the universe been put together just a little differently, we could not be here to contemplate it. It is as though the universe has been made for us—a Garden of Eden of the grandest possible design.”
While this statement is subject to continued debate, the anthropic principle fascinates both scientists and laypeople and was first elucidated in detail in a publication by astrophysicist Robert Dicke in 1961 and later developed by physicist Brandon Carter and others. The controversial principle revolves around the observation that at least some physical parameters appear to be tuned to permit life forms to evolve. For example, we owe our very lives to the element carbon, which was first manufactured in stars before the Earth formed. The nuclear reactions that facilitate the production of carbon have the appearance, at least to some researchers, of being “just right” to facilitate carbon production.
If all of the stars in the universe were heavier than three solar masses, they would live for only about 500 million years, and multicellular life would not have time to evolve. If the rate of the universe’s expansion one second after the Big Bang had been smaller by even one part in a hundred thousand million million, the universe would have recollapsed before reaching its present size. On the other hand, the universe might have expanded so rapidly that protons and electrons never united to make hydrogen atoms. An extremely small change in the strength of gravity or of the nuclear weak force could prevent advanced life forms from evolving.
An infinite number of random (non-designed) universes could exist, ours being just one that permits carbon-based life. Some researchers have speculated that child universes are constantly budding off from parent universes and that the child universe inherits a set of physical laws similar to the parent, a process reminiscent of evolution of biological characteristics of life on Earth. Universes with many stars may be long-lived and have the opportunity to have many children with many stars; thus, perhaps our star-filled universe is not quite so unusual after all.
SEE ALSO Fermi Paradox (1950), Parallel Universes (1956), Living in a Simulation (1967).
If values for certain fundamental constants of physics were a little different, intelligent carbon-based life may have had great difficulty evolving. To some religious individuals, this gives the impression that the universe was fine-tuned to permit our existence.
Murray Gell-Mann (b. 1929), Sheldon Lee Glashow (b. 1932), George Zweig (b. 1937)
“Physicists had learned, by the 1930s, to build all matter out of just three kinds of particle: electrons, neutrons, and protons,” author Stephen Battersby writes. “But a procession of unwanted extras had begun to appear—neutrinos, the positron and antiproton, pions and muons, and kaons, lambdas and sigmas—so that by the middle of the 1960s, a hundred supposedly fundamental particles had been detected. It was a mess.”
Through a combination of theory and experiment, a mathematical model called the Standard Model explains most of particle physics observed so far by physicists. According to the model, elementary particles are grouped into two classes: bosons (e.g., particles that often transmit forces) and fermions. Fermions include various kinds of Quarks (three quarks make up both the proton and the Neutron) and leptons (such as the Electron and Neutrino, the latter of which was discovered in 1956). Neutrinos are very difficult to detect because they have a minute (but not zero) mass and pass through ordinary matter almost undisturbed. Today, we know about many of these subatomic particles by smashing apart atoms in particle accelerators and observing the resulting fragments.
As suggested, the Standard Model explains forces as resulting from matter particles exchanging force-mediating boson particles that include photons and gluons. The Higgs particle is the only fundamental particle predicted by the Standard Model that has yet to be observed—and it explains why other elementary particles have masses. The force of gravity is thought to be generated by the exchange of massless gravitons, but these have not yet been experimentally detected. In fact, the Standard Model is incomplete, because it does not include the force of gravity. Some physicists are trying to add gravity to the Standard Model in hopes of producing a theory of everything that unifies all four forces.
In 1964, physicists Murray Gell-Mann and George Zweig proposed the concept of quarks, just a few years after Gell-Mann’s 1961 formulation of a particle classification system known as the Eightfold Way. In 1960, physicist Sheldon Glashow’s unification theories provided an early step toward the Standard Model.
SEE ALSO String Theory (1919), Neutron (1932), Neutrinos (1956), Quarks (1964), God Particle (1964), Supersymmetry (1971), Theory of Everything (1984), Large Hadron Collider (2009).
The Cosmotron. This was the first accelerator in the world to send particles with energies in the billion electron volt, or GeV, region. The Cosmotron synchrotron reached its full design energy of 3.3 GeV in 1953 and was used for studying subatomic particles.
In William R. Forstchen’s best-selling novel One Second After, a high-altitude nuclear bomb explodes, unleashing a catastrophic electromagnetic pulse (EMP) that instantly disables electrical devices such as those in airplanes, heart pacemakers, modern cars, and cell phones, and the U.S. descends into “literal and metaphorical darkness.” Food becomes scarce, society turns violent, and towns burn—all following a scenario that is completely plausible.
EMP usually refers to the burst of electromagnetic radiation, resulting from a nuclear explosion, that disables many forms of electronic devices. In 1962, the U.S. conducted a nuclear test 250 miles (400 kilometers) above the Pacific Ocean. The test, called Starfish Prime, caused electrical damage in Hawaii, about 898 miles (1,445 kilometers) away. Streetlights went out. Burglar alarms went off. The microwave link of a telephone company was damaged. It is estimated today that if a single nuclear bomb were exploded 250 miles above Kansas, the entire continental U.S. would be affected, due to the greater strength of the Earth’s magnetic field over the U.S. Even the water supply would be affected, given that it often relies on electrical pumps.
After nuclear detonation, the EMP starts with a short, intense burst of gamma rays (high-energy electromagnetic radiation). The gamma rays interact with the atoms in air molecules, and electrons are released through a process called the Compton Effect. The electrons ionize the atmosphere and generate a powerful electrical field. The strength and effect of the EMP depends highly on the altitude at which the bomb detonates and the local strength of the Earth’s magnetic field.
Note that it is also possible to produce less-powerful EMPs without nuclear weapons, for example, through explosively pumped flux compression generators, which are essentially normal electrical generators driven by an explosion using conventional fuel.
Electronic equipment can be protected from an EMP by placing it within a Faraday Cage, which is a metallic shield that can divert the electromagnetic energy directly to the ground.
SEE ALSO Compton Effect (1923), Little Boy Atomic Bomb (1945), Gamma-Ray Bursts (1967), HAARP (2007).
A side view of an E-4 advanced airborne command post (AABNCP) on the electromagnetic pulse (EMP) simulator for testing (Kirtland Air Force Base, New Mexico). The plane is designed to survive an EMP with systems intact.
Jules Henri Poincaré (1854–1912), Jacques Salomon Hadamard (1865–1963), Edward Norton Lorenz (1917–2008)
In Babylonian mythology, Tiamat was the goddess that personified the sea and was the frightening representation of primordial chaos. Chaos came to symbolize the unknown and the uncontrollable. Today, chaos theory is an exciting, growing field that involves the study of wide-ranging phenomena exhibiting a sensitive dependence on initial conditions. Although chaotic behavior often seems “random” and unpredictable, it often obeys strict mathematical rules derived from equations that can be formulated and studied. One important research tool to aid in the study of chaos is computer graphics. From chaotic toys with randomly blinking lights to wisps and eddies of cigarette smoke, chaotic behavior is generally irregular and disorderly; other examples include weather patterns, some neurological and cardiac activity, the stock market, and certain electrical networks of computers. Chaos theory has also often been applied to a wide range of visual art.
In science, certain famous and clear examples of chaotic physical systems exist, such as thermal convection in fluids, panel flutter in supersonic aircraft, oscillating chemical reactions, fluid dynamics, population growth, particles impacting on a periodically vibrating wall, various pendula and rotor motions, nonlinear electrical circuits, and buckled beams.
The early roots of chaos theory started around 1900 when mathematicians such as Jacques Hadamard and Henri Poincaré studied the complicated trajectories of moving bodies. In the early 1960s, Edward Lorenz, a research meteorologist at the Massachusetts Institute of Technology, used a system of equations to model convection in the atmosphere. Despite the simplicity of his formulas, he quickly found one of the hallmarks of chaos: extremely minute changes of the initial conditions led to unpredictable and different outcomes. In his 1963 paper, Lorenz explained that a butterfly flapping its wings in one part of the world could later affect the weather thousands of miles away. Today, we call this sensitivity the Butterfly Effect.
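The Butterfly Effect is easy to demonstrate numerically. The sketch below integrates Lorenz’s three convection equations (with his classic parameters σ = 10, ρ = 28, β = 8/3) for two starting points that differ by one part in a million; even a crude forward-Euler integrator is enough to watch the trajectories part company:

```python
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz equations."""
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.000001, 1.0, 1.0)  # differs by one part in a million
for step in range(1, 4001):
    a, b = lorenz_step(*a), lorenz_step(*b)
    if step % 1000 == 0:
        gap = max(abs(p - q) for p, q in zip(a, b))
        print(f"t = {step * 0.01:5.1f}: separation ≈ {gap:.6f}")
```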
According to Babylonian mythology, Tiamat gave birth to dragons and serpents.
SEE ALSO Laplace’s Demon (1814), Self-Organized Criticality (1987), Fastest Tornado Speed (1999).
Chaos theory involves the study of wide-ranging phenomena exhibiting a sensitive dependence on initial conditions. Shown here is a portion of Daniel White’s Mandelbulb, a 3-dimensional analog of the Mandelbrot set, which represents the complicated behavior of a simple mathematical system.
Maarten Schmidt (b. 1929)
“Quasars are among the most baffling objects in the universe because of their small size and prodigious energy output,” write the scientists at hubblesite.org. “Quasars are not much bigger than Earth’s solar system but pour out 100 to 1,000 times as much light as an entire galaxy containing a hundred billion stars.”
Although a mystery for decades, today the majority of scientists believe that quasars are very energetic and distant galaxies with an extremely massive central Black Hole that spews energy as nearby galactic material spirals into the black hole. The first quasars were discovered with radio telescopes (instruments that receive radio waves from space) without any corresponding visible object. In the early 1960s, visually faint objects were finally associated with these strange sources that were termed quasi-stellar radio sources, or quasars for short. The spectrum of these objects, which shows the variation in the intensity of the object’s radiation at different wavelengths, was initially puzzling. However, in 1963, Dutch-born American astronomer Maarten Schmidt made the exciting discovery that the spectral lines were simply coming from hydrogen, but that they were shifted far to the red end of the spectrum. This redshift, due to the expansion of the universe, implied that these quasars were part of galaxies that were extremely far away and ancient (see entries on Hubble’s Law and Doppler Effect).
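The arithmetic behind Schmidt’s insight is simple once the lines are identified: the redshift is z = (observed − rest)/rest. For 3C 273, the quasar he studied, hydrogen’s lines arrive stretched by about 15.8 percent (the observed wavelength below is rounded):

```python
def redshift(observed_nm, rest_nm):
    """z = (observed - rest) / rest wavelength."""
    return (observed_nm - rest_nm) / rest_nm

H_BETA_REST = 486.1  # hydrogen-beta line measured in the lab (nm)
observed = 563.0     # roughly where 3C 273's hydrogen-beta line appears (nm)
print(f"z ≈ {redshift(observed, H_BETA_REST):.3f}")  # ~0.158
```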
More than 200,000 quasars are known today, and most do not have detectable radio emissions. Although quasars appear dim because they are between about 780 million and 28 billion light-years away, they are actually the most luminous and energetic objects known in the universe. It is estimated that quasars can swallow 10 stars per year, or 600 Earths per minute, and then “turn off” when the surrounding gas and dust have been consumed. At this point, the galaxy hosting the quasar becomes an ordinary galaxy. Quasars may have been more common in the early universe because they had not yet had a chance to consume the surrounding material.
SEE ALSO Telescope (1608), Black Holes (1783), Doppler Effect (1842), Hubble’s Law of Cosmic Expansion (1929), Gamma-Ray Bursts (1967).
A quasar, or growing black hole spewing energy, can be seen at the center of a galaxy in this artist’s concept. Astronomers using NASA’s Spitzer and Chandra space telescopes discovered similar quasars within a number of distant galaxies. X-ray emissions are illustrated by the white rays.
Edward Craven Walker (1918–2000)
The Lava Lamp (U.S. Patent 3,387,396) is an ornamental illuminated vessel with floating globules, and it is included in this book because of its ubiquity and for the simple yet important principles it embodies. Many educators have used the Lava Lamp for classroom demonstrations, experiments, and discussions on topics that include thermal radiation, convection, and conduction.
The Lava Lamp was invented in 1963 by Englishman Edward Craven Walker. Author Ben Ikenson writes, “A World War II veteran, Walker adopted the lingo and lifestyle of the flower children. Part Thomas Edison, part Austin Powers, he was a nudist in the psychedelic days of the UK—and possessed some pretty savvy marketing skills to boot. ‘If you buy my lamp, you won’t need to buy drugs,’ he was known to say.”
To make a Lava Lamp, one must find two liquids that are immiscible, meaning that, like oil and water, they will not blend or mix. In one embodiment, the lamp consists of a 40-watt Incandescent Light Bulb at the bottom, which heats a tall tapered glass bottle containing water and globules made of a mixture of wax and carbon tetrachloride. The wax is slightly denser than the water at room temperature. As the base of the lamp is heated, the wax expands more than the water and becomes fluid. As the wax’s specific gravity (density relative to water) decreases, the blobs rise to the top, and then the wax globules cool and sink. A metal coil at the base of the lamp serves to spread the heat and also to break the Surface Tension of the globules so that they may recombine when at the bottom.
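A toy model captures the cycle: a blob rises whenever thermal expansion drops its density below the water’s, then sinks again as it cools. The densities and expansion coefficient below are assumed, illustrative values, not measurements of any real lamp:

```python
WATER = 998.0  # kg/m^3, treated as roughly constant over the lamp's range

def wax_density(temp_c, rho_20c=1010.0, expansion=7e-4):
    """Wax density (kg/m^3) under a simple linear thermal-expansion model."""
    return rho_20c / (1.0 + expansion * (temp_c - 20.0))

for t in (20, 40, 60):
    rho = wax_density(t)
    state = "sinks" if rho > WATER else "rises"
    print(f"{t} °C: wax ≈ {rho:.0f} kg/m^3 -> blob {state}")
```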
The complex and unpredictable motions of the wax blobs within Lava Lamps have been used as a source of random numbers, and such a random-number generator is mentioned in U.S. Patent 5,732,138, issued in 1998.
Sadly, in 2004, a Lava Lamp killed Phillip Quinn when he attempted to heat it on his kitchen stove. The lamp exploded, and a piece of glass pierced his heart.
SEE ALSO Archimedes’ Principle of Buoyancy (250 B.C.), Stokes’ Law of Viscosity (1851), Surface Tension (1866), Incandescent Light Bulb (1878), Black Light (1903), Silly Putty (1943), Drinking Bird (1945).
Lava lamps demonstrate simple yet important physics principles, and many educators have used the Lava Lamp for classroom demonstrations, experiments, and discussions.
Robert Brout (b. 1928), Peter Ware Higgs (b. 1929), François Englert (b. 1932)
“While walking in the Scottish Highlands in 1964,” writes author Joanne Baker, “physicist Peter Higgs thought of a way to give particles their mass. He called this his ‘one big idea.’ Particles seem more massive because they are slowed while swimming through a force field, now known as the Higgs field. It is carried by the Higgs boson, referred to as the ‘God particle’ by Nobel Laureate Leon Lederman.”
Elementary particles are grouped into two classes: bosons (particles that transmit forces) and fermions (particles such as Quarks, Electrons, and Neutrinos that make up matter). The Higgs boson is a particle in the Standard Model that has not yet been observed, and scientists hope that the Large Hadron Collider—a high-energy particle accelerator in Europe—may provide experimental evidence relating to the particle’s existence.
To help us visualize the Higgs field, imagine a lake of viscous honey that adheres to the otherwise massless fundamental particles that travel through the field. The field converts them into particles with mass. In the very early universe, theories suggest that all of the fundamental forces (i.e. strong, electromagnetic, weak, and gravitational) were united in one superforce, but as the universe cooled different forces emerged. Physicists have been able to combine the weak and electromagnetic forces into a unified “electroweak” force, and perhaps all of the forces may one day be unified. Moreover, physicists Peter Higgs, Robert Brout, and François Englert suggested that all particles had no mass soon after the Big Bang. As the Universe cooled, the Higgs boson and its associated field emerged. Some particles, such as massless photons of light, can travel through the sticky Higgs field without picking up mass. Others get bogged down like ants in molasses and become heavy.
The Higgs boson may be more than 100 times as massive as the proton. A large particle collider is required to find this boson because the higher the energy of collision, the more massive the particles in the debris.
SEE ALSO Standard Model (1961), Theory of Everything (1984), Large Hadron Collider (2009).
The Compact Muon Solenoid (CMS) is a particle detector located underground in a large cavern excavated at the site of the Large Hadron Collider. This detector will assist in the search for the Higgs boson and in gaining insight into the nature of Dark Matter.
Murray Gell-Mann (b. 1929), George Zweig (b. 1937)
Welcome to the particle zoo. In the 1960s, theorists realized that patterns in the relationships between various elementary particles, such as protons and neutrons, could be understood if these particles were not actually elementary but rather were composed of smaller particles called quarks.
Six types, or flavors, of quarks exist and are referred to as up, down, charm, strange, top and bottom. Only the up and down quarks are stable, and they are the most common in the universe. The other heavier quarks are produced in high-energy collisions. (Note that another class of particles called leptons, which include Electrons, are not composed of quarks.)
Quarks were independently proposed by physicists Murray Gell-Mann and George Zweig in 1964, and, by 1995, particle-accelerator experiments had yielded evidence for all six quarks. Quarks have fractional electric charge; for example, the up quark has a charge of +2/3, and the down quark has a charge of −1/3. Neutrons (which have no charge) are formed from two down quarks and one up quark, and the proton (which is positively charged) is composed of two up quarks and one down quark. The quarks are tightly bound together by a powerful short-range force called the color force, which is mediated by force-carrying particles called gluons. The theory that describes these strong interactions is called quantum chromodynamics. Gell-Mann coined the word quark for these particles after one of his perusals of the silly line in Finnegans Wake, “Three quarks for Muster Mark.”
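A quick check with exact fractions confirms that these quark assignments reproduce the familiar charges of the proton and neutron; this is a minimal sketch in Python.

```python
from fractions import Fraction

UP, DOWN = Fraction(2, 3), Fraction(-1, 3)   # quark charges, in units of e

proton = 2 * UP + DOWN     # uud
neutron = UP + 2 * DOWN    # udd

print(f"proton charge:  {proton}")    # 1
print(f"neutron charge: {neutron}")   # 0
```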
Right after the Big Bang, the universe was filled with a quark-gluon Plasma, because the temperature was too high for hadrons (i.e. particles like protons and neutrons) to form. Authors Judy Jones and William Wilson write, “Quarks pack a mean intellectual wallop. They imply that nature is three-sided…. Specks of infinity on the one hand, building blocks of the universe on the other, quarks represent science at its most ambitious—also its coyest.”
SEE ALSO Big Bang (13.7 Billion B.C.), Plasma (1879), Electron (1897), Neutron (1932), Quantum Electrodynamics (1948), Standard Model (1961).
Scientists used the photograph (left) of particle trails in a Brookhaven National Laboratory bubble chamber as evidence for the existence of a charmed baryon (a three-quark particle). A neutrino enters the picture from below (dashed line in right figure) and collides with a proton to produce additional particles that leave behind trails.
James Watson Cronin (b. 1931), Val Logsdon Fitch (b. 1923)
You, me, the birds, and the bees are alive today due to CP violation and various laws of physics—and their apparent effect on the ratio of matter to Antimatter during the Big Bang from which our universe evolved. As a result of CP violation, asymmetries are created with respect to certain transformations in the subatomic realm.
Many important ideas in physics manifest themselves as symmetries, for example, in a physical experiment in which some characteristic is conserved, or remains constant. The C portion of CP symmetry suggests that the laws of physics should be the same if a particle were interchanged with its antiparticle, for example by changing the sign of the electric charge and other quantum aspects. (Technically, the C stands for charge conjugation symmetry.) The P, or parity, symmetry refers to a reversal of space coordinates, for example, swapping left and right, or, more accurately, changing all three space dimensions x, y, z to −x, −y, and −z. For instance, parity conservation would mean that the mirror images of a reaction occur at the same rate (e.g. the atomic nucleus emits decay products up as often as down).
In 1964, physicists James Cronin and Val Fitch discovered that certain particles, called neutral kaons, did not obey CP conservation, whereby equal numbers of antiparticles and particles are formed. In short, they showed that nuclear reactions mediated by the weak force (which governs the radioactive decay of elements) violated the CP symmetry combination. In this case, neutral kaons can transform into their antiparticles (in which each quark is replaced with its corresponding antiquark) and vice versa, but with different probabilities.
During the Big Bang, CP violation and other as-yet unknown physical interactions at high energies played a role in the observed dominance of matter over antimatter in the universe. Without these kinds of interactions, nearly equal numbers of protons and antiprotons might have been created, annihilating each other and leaving no net creation of matter.
SEE ALSO Big Bang (13.7 Billion B.C.), Radioactivity (1896), Antimatter (1932), Quarks (1964), Theory of Everything (1984).
In the early 1960s, a beam from the Alternating Gradient Synchrotron at Brookhaven National Laboratory and the detectors shown here were used to prove the violation of conjugation (C) and parity (P)—winning the Nobel Prize in physics for James Cronin and Val Fitch.
John Stewart Bell (1928–1990)
In the entry EPR Paradox, we discussed quantum entanglement, which refers to an intimate connection between quantum particles, such as between two electrons or two photons. Neither particle has a definite spin before measurement. Once the pair of particles is entangled, a certain kind of change to one of them is reflected instantly in the other, even if, for example, one member of the pair is on the Earth while the other has traveled to the Moon. This entanglement is so counterintuitive that Albert Einstein thought it showed a flaw in quantum theory. One possibility considered was that such phenomena relied on some unknown “local hidden variables” outside of traditional quantum mechanical theory—and a particle was, in reality, still only influenced directly by its immediate surroundings. In short, Einstein did not accept that distant events could have an instantaneous or faster-than-light effect on local ones.
However, in 1964, physicist John Bell showed that no physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics. In fact, the nonlocality of our physical world appears to follow from both Bell’s Theorem and experimental results obtained since the early 1980s. In essence, Bell asks us first to suppose that the Earth particle and the Moon particle, in our example, have determinate values. Could such particles reproduce the results predicted by quantum mechanics for various ways scientists on the Earth and Moon might measure their particles? Bell proved mathematically that such particles would produce a statistical distribution of results that disagrees with the one predicted by quantum mechanics. Thus, the particles cannot carry predetermined values. This contradicts Einstein’s conclusions, and the assumption that the universe is “local” is wrong.
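For the numerically inclined, the violation can be seen in the CHSH form of Bell’s inequality, a standard concrete version of the theorem not spelled out in this entry. Quantum mechanics predicts a correlation E(a, b) = −cos(a − b) between spin measurements along angles a and b for the entangled singlet state; any local-hidden-variable theory requires |S| ≤ 2, while the standard measurement angles below push S to 2√2 in magnitude.

```python
import math

def E(a, b):
    """Quantum correlation between spin measurements along angles a and b
    (radians) for two particles in the singlet state."""
    return -math.cos(a - b)

a1, a2 = 0.0, math.pi / 2               # settings for the Earth-side detector
b1, b2 = math.pi / 4, 3 * math.pi / 4   # settings for the Moon-side detector

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(f"S = {S:.4f}; local hidden variables require |S| <= 2")   # S = -2.8284
```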
Philosophers, physicists, and mystics have made extensive use of Bell’s Theorem. Fritjof Capra writes, “Bell’s theorem dealt a shattering blow to Einstein’s position by showing that the conception of reality as consisting of separate parts, joined by local connections, is incompatible with quantum theory…. Bell’s theorem demonstrates that the universe is fundamentally interconnected, interdependent, and inseparable.”
SEE ALSO Complementarity Principle (1927), EPR Paradox (1935), Schrödinger’s Cat (1935), Quantum Computers (1981).
Philosophers, physicists, and mystics have made extensive use of Bell’s Theorem, which seemed to show Einstein was wrong and that the cosmos is fundamentally “interconnected, interdependent, and inseparable.”
“Bang, thump, bonk!” exclaimed the December 3, 1965 issue of Life magazine. “Willy-nilly the ball caroms down the hall as if it had a life of its own. This is Super Ball, surely the bouncingest spheroid ever, which has lept like a berserk grasshopper to the top of whatever charts psychologists may keep on U.S. fads.”
In 1965, California chemist Norman Stingley, along with the Wham-O Manufacturing Company, developed the amazing Super Ball made from an elastic compound called Zectron. If dropped from shoulder height, it could rebound to nearly 90% of that height and could continue bouncing for a minute on a hard surface (a tennis ball’s bouncing lasts only ten seconds). In the language of physics, the Super Ball’s coefficient of restitution, e, defined as the ratio of its speed after a collision to its speed before the collision, ranges from 0.8 to 0.9.
Released to the public in the early summer of 1965, over six million Super Balls were bouncing around America by fall. United States National Security Advisor McGeorge Bundy had five dozen shipped to the White House to amuse the staff.
The secret of the Super Ball, often referred to as a bouncy ball, is polybutadiene, a rubber-like compound composed of long elastic chains of carbon atoms. When polybutadiene is heated at high pressure in the presence of sulfur, a chemical process called vulcanization converts these long chains to more durable material. Because the tiny sulfur bridges limit how much the Super Ball flexes, much of the bounce energy is returned to its motion. Other chemicals, such as di-ortho-tolylguanidine (DOTG), were added to increase the cross-linking of chains.
What would happen if one were to drop a Super Ball from the top of the Empire State Building? After the ball has dropped about 328 feet (around 100 meters, or 25-30 stories), it reaches a Terminal Velocity of about 70 miles per hour (113 kilometers/hour) for a ball with a radius of one inch (2.5 centimeters). Assuming e = 0.85, the rebound velocity will be about 60 miles per hour (97 kilometers/hour), corresponding to a rebound height of about 80 feet (24 meters, or 7 stories).
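A back-of-the-envelope Python sketch of this drop appears below. The no-drag rebound height it computes is only an upper bound, since air resistance continues to act on the way up, reducing the true figure toward the roughly 80 feet quoted above.

```python
G = 9.81               # gravitational acceleration, m/s^2
MPH_PER_MS = 2.23694   # miles per hour per meter/second

v_terminal = 70 / MPH_PER_MS   # ~70 mph terminal velocity, in m/s
e = 0.85                       # coefficient of restitution

v_rebound = e * v_terminal
h_no_drag = v_rebound**2 / (2 * G)   # from v^2 = 2gh, ignoring drag

print(f"rebound speed: {v_rebound * MPH_PER_MS:.0f} mph")          # ~60 mph
print(f"no-drag rebound height: {h_no_drag:.0f} m "
      f"({h_no_drag * 3.281:.0f} ft, an upper bound)")
```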
SEE ALSO Baseball Curveball (1870), Golf Ball Dimples (1905), Terminal Velocity (1960).
If dropped from shoulder height, a Super Ball could bounce nearly 90 percent of that height and continue bouncing for a minute on a hard surface. Its coefficient of restitution ranges from 0.8 to 0.9.
Arno Allan Penzias (b. 1933), Robert Woodrow Wilson (b. 1936)
The cosmic microwave background (CMB) is electromagnetic radiation filling the universe, a remnant of the dazzling “explosion” from which our universe evolved during the Big Bang 13.7 billion years ago. As the universe cooled and expanded, the wavelengths of high-energy photons (such as those in the gamma-ray and X-ray portions of the Electromagnetic Spectrum) were stretched, shifting them to lower-energy microwaves.
Around 1948, cosmologist George Gamow and colleagues suggested that this microwave background radiation might be detectable, and in 1965 physicists Arno Penzias and Robert Wilson of the Bell Telephone Laboratories in New Jersey measured a mysterious excess microwave noise associated with a thermal radiation field at a temperature of about −454 °F (3 K). After checking for various possible causes of this background “noise,” including pigeon droppings in their large outdoor detector, they determined that they were really observing the most ancient radiation in the universe, providing evidence for the Big Bang model. Note that because photons take time to reach the Earth from distant parts of the universe, whenever we look outward in space, we are also looking back in time.
More precise measurements were made by the COBE (Cosmic Background Explorer) satellite, launched in 1989, which determined a temperature of −454.47 °F (2.735 K). COBE also allowed researchers to measure small fluctuations in the intensity of the background radiation, which corresponded to the beginning of structures, such as galaxies, in the universe.
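Wien’s displacement law connects COBE’s temperature to a peak wavelength and shows why the relic radiation lands in the microwave band; a one-line calculation suffices.

```python
WIEN_B = 2.898e-3   # Wien's displacement constant, m*K

T_cmb = 2.735       # COBE's temperature for the CMB, in kelvins
peak = WIEN_B / T_cmb

print(f"peak wavelength: {peak * 1000:.2f} mm")   # ~1.06 mm: microwaves
```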
Luck matters for scientific discoveries. Author Bill Bryson writes, “Although Penzias and Wilson had not been looking for the cosmic background radiation, didn’t know what it was when they had found it, and hadn’t described or interpreted its character in any paper, they received the 1978 Nobel Prize in physics.” Connect an antenna to an analog TV; make sure it’s not tuned to a TV broadcast and “about 1 percent of the dancing static you see is accounted for by this ancient remnant of the Big Bang. The next time you complain that there is nothing on, remember you can always watch the birth of the universe.”
SEE ALSO Big Bang (13.7 Billion B.C.), Telescope (1608), Electromagnetic Spectrum (1864), X-rays (1895), Hubble’s Law of Cosmic Expansion (1929), Gamma-Ray Bursts (1967), Cosmic Inflation (1980).
The Horn reflector antenna at Bell Telephone Laboratories in Holmdel, New Jersey, was built in 1959 for pioneering work related to communication satellites. Penzias and Wilson discovered the cosmic microwave background using this instrument.
Paul Ulrich Villard (1860–1934)
Gamma-ray bursts (GRBs) are sudden, intense bursts of gamma rays, which are an extremely energetic form of light. “If you could see gamma rays with your eyes,” write authors Peter Ward and Donald Brownlee, “you would see the sky light up about once a night, but these distant events go unnoticed by our natural senses.” However, if a GRB would ever flash closer to the Earth, then “one minute you exist, and the next you are either dead or dying from radiation poisoning.” In fact, researchers have suggested that the mass extinction of life 440 million years ago in the late Ordovician period was caused by a GRB.
Until recently, GRBs were one of the biggest enigmas in high-energy astronomy. They were discovered accidentally in 1967 by U.S. military satellites that scanned for Soviet nuclear tests in violation of the atmospheric nuclear test-ban treaty. After the typically few-second burst, the initial event is usually followed by a longer-lived afterglow at longer wavelengths. Today, physicists believe that most GRBs come from a narrow beam of intense radiation released during a supernova explosion, as a rotating high-mass star collapses to form a black hole. So far, all observed GRBs appear to originate outside our Milky Way galaxy.
Scientists are not certain as to the precise mechanism that could cause the release of as much energy in a few seconds as the Sun produces in its entire lifetime. Scientists at NASA suggest that when a star collapses, an explosion sends a blast wave that moves through the star at close to the speed of light. The gamma rays are created when the blast wave collides with material still inside the star.
In 1900, chemist Paul Villard discovered gamma rays while studying the Radioactivity of radium. In 2009, astronomers detected a GRB from an exploding megastar that existed a mere 630 million years after the Big Bang kicked the Universe into operation some 13.7 billion years ago, making this GRB the most distant object ever seen and an inhabitant of a relatively unexplored epoch of our universe.
SEE ALSO Big Bang (13.7 Billion B.C.), Electromagnetic Spectrum (1864), Radioactivity (1896), Cosmic Rays (1910), Quasars (1963).
Hubble Space Telescope image of Wolf-Rayet star WR-124 and its surrounding nebula. These massive stars rapidly lose mass via strong stellar winds and may be generators of long-duration GRBs.
Konrad Zuse (1910–1995), Edward Fredkin (b. 1934), Stephen Wolfram (b. 1959), Max Tegmark (b. 1967)
As we learn more about the universe and are able to simulate complex worlds using computers, even serious scientists begin to question the nature of reality. Could we be living in a computer simulation?
In our own small pocket of the universe, we have already developed computers with the ability to simulate lifelike behaviors using software and mathematical rules. One day, we may create thinking beings that live in simulated spaces as complex and vibrant as a rain forest. Perhaps we’ll be able to simulate reality itself, and it is possible that more advanced beings are already doing this elsewhere in the universe.
What if the number of these simulations is larger than the number of universes? Astronomer Martin Rees suggests that if the simulations outnumber the universes, “as they would if one universe contained many computers making many simulations,” then it is likely that we are artificial life. Rees writes, “Once you accept the idea of the multiverse …, it’s a logical consequence that in some of those universes there will be the potential to simulate parts of themselves, and you may get a sort of infinite regress, so we don’t know where reality stops …, and we don’t know what our place is in this grand ensemble of universes and simulated universes.”
Astronomer Paul Davies has also noted, “Eventually, entire virtual worlds will be created inside computers, their conscious inhabitants unaware that they are the simulated products of somebody else’s technology. For every original world, there will be a stupendous number of available virtual worlds—some of which would even include machines simulating virtual worlds of their own, and so on ad infinitum.”
Other researchers, such as Konrad Zuse, Ed Fredkin, Stephen Wolfram, and Max Tegmark, have suggested that the physical universe may be running on a cellular automaton or discrete computing machinery—or be a purely mathematical construct. The hypothesis that the universe is a digital computer was pioneered by German engineer Zuse in 1967.
SEE ALSO Fermi Paradox (1950), Parallel Universes (1956), Anthropic Principle (1961).
As computers become more powerful, perhaps someday we will be able to simulate entire worlds and reality itself, and it is possible that more advanced beings are already doing this elsewhere in the universe.
Gerald Feinberg (1933–1992)
Tachyons are hypothetical subatomic particles that travel faster than the speed of light (FTL). “Although most physicists today place the probability of the existence of tachyons only slightly higher than the existence of unicorns,” writes physicist Nick Herbert, “research into the properties of these hypothetical FTL particles has not been entirely fruitless.” Because such particles might travel backward in time, author Paul Nahin humorously writes, “If tachyons are one day discovered, the day before the momentous occasion a notice from the discoverers should appear in newspapers announcing ‘Tachyons have been discovered tomorrow’.”
Albert Einstein’s theory of relativity doesn’t preclude objects from going faster than light speed; rather it says that nothing traveling slower than the speed of light (SL) can ever travel faster than 186,000 miles per second (about 299,000 kilometers/second), which is the speed of light in a vacuum. However, FTL objects may exist so long as they have never traveled slower than light. Using this framework of thought, we might place all things in the universe into three classes: those always traveling less than SL, those traveling exactly at SL (photons), and those always traveling faster than SL. In 1967, the American physicist Gerald Feinberg coined the word tachyon for such hypothetical FTL particles, from the Greek word tachys for fast.
One reason that objects cannot start at a speed less than light and then exceed SL is that Special Relativity states that an object’s mass would become infinite in the process. This relativistic mass increase is a phenomenon well tested by high-energy physicists. Tachyons don’t produce this contradiction because they never existed at sublight speeds.
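The divergence is easy to see numerically: the relativistic factor gamma = 1/√(1 − v²/c²), which scales a particle’s energy and effective mass, blows up as v approaches c. A short sketch:

```python
import math

C = 299_792_458.0   # speed of light in vacuum, m/s

def gamma(v):
    """Relativistic factor for a particle moving at speed v < c."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

for f in (0.5, 0.9, 0.99, 0.9999):
    print(f"v = {f:.4f}c -> gamma = {gamma(f * C):9.1f}")

# For v > c, the quantity under the square root is negative, which is why
# tachyons are often said to require an "imaginary" rest mass.
```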
Perhaps tachyons were created at the moment of the Big Bang from which our Universe evolved. However, in minutes these tachyons would have plunged backward in time to the universe’s origin and been lost again in its primordial chaos. If tachyons are being created today, physicists feel they might detect them in Cosmic Ray showers or in records of particle collisions in the lab.
SEE ALSO Lorentz Transformation (1904), Special Theory of Relativity (1905), Cosmic Rays (1910), Time Travel (1949).
Tachyons are used in science fiction. If an alien, made of tachyons, approached you from his ship, you might see him arrive before you saw him leave his ship. The image of him leaving his ship would take longer to reach you than his actual FTL body.
Edme Mariotte (c. 1620–1684), Willem Gravesande (1688–1742), Simon Prebble (b. 1942)
Newton’s Cradle has fascinated physics teachers and students ever since it became well known in the late 1960s. Designed by English actor Simon Prebble, who coined the name Newton’s Cradle for the wooden-framed version sold by his company in 1967, the most common versions available today consist of five or seven metal balls suspended by wires so that they oscillate along a single plane of motion. The balls are the same size and just touch when at rest. If one ball is pulled away and released, it collides with the stationary balls, stops, and a single ball at the other end swings upward. The motions conserve both momentum and energy, although a detailed analysis involves more complicated considerations of the ball interactions.
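In the idealized one-dimensional picture, conserving momentum and kinetic energy in a head-on elastic collision yields the standard formulas below; for equal masses, the incoming ball stops dead and hands its entire velocity to the struck ball, which is the cradle’s signature behavior. This is only a sketch of that textbook result, not of the more complicated interactions noted above.

```python
def elastic_collision(m1, v1, m2, v2):
    """Final velocities after a 1-D elastic collision (momentum and kinetic
    energy both conserved)."""
    v1_final = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2_final = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1_final, v2_final

# Equal masses: the moving ball stops and the struck ball takes its velocity.
print(elastic_collision(1.0, 1.0, 1.0, 0.0))   # (0.0, 1.0)
```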
When the released ball makes an impact with the other balls, a shock wave is produced that propagates through the balls. These kinds of impacts were demonstrated in the seventeenth century by French physicist Edme Mariotte. Dutch philosopher and mathematician Willem Gravesande also performed collision experiments with devices similar to Newton’s Cradle.
Today, discussions of Newton’s Cradle span a range of sizes. For example, one of the largest cradles ever made holds 20 bowling balls (15 pounds [6.9 kilograms] each), suspended using cables with a length of 20 feet (6.1 meters). The other end of the size scale is described in a 2006 paper published in Nature titled “A Quantum Newton’s Cradle,” by physicists from Pennsylvania State University who constructed a quantum version of Newton’s Cradle. The authors write, “Generalization of Newton’s cradle to quantum mechanical particles lends it a ghostly air. Rather than just reflecting off each other, colliding particles can also transmit through each other.”
Numerous references to Newton’s Cradle appear in the American Journal of Physics, a journal aimed at physics teachers, suggesting that the cradle continues to be of interest for teaching purposes.
SEE ALSO Conservation of Momentum (1644), Newton’s Laws of Motion and Gravitation (1687), Conservation of Energy (1843), Foucault’s Pendulum (1851).
The motions of the spheres in Newton’s Cradle conserve both momentum and energy, although a detailed analysis involves more complicated considerations of the ball interactions.
Victor Georgievich Veselago (b. 1929)
Will scientists ever be able to create an invisibility cloak, such as the one used by the alien Romulans to render their warships invisible in Star Trek? Some of the earliest steps have already been taken toward this difficult goal with metamaterials, artificial materials with small-scale structures and patterns that are designed to manipulate electromagnetic waves in unusual ways.
Until the year 2001, all known materials had a positive index of refraction, the quantity that controls the bending of light. However, in 2001, scientists from the University of California at San Diego described an unusual composite material with a negative index, essentially reversing Snell’s Law. This odd material was a mix of fiberglass, copper rings, and wires capable of focusing light in novel ways. Early tests revealed that microwaves emerged from the material in the exact opposite direction from that predicted by Snell’s Law. More than a physical curiosity, these materials may one day lead to the development of new kinds of antennas and other electromagnetic devices. In theory, a sheet of negative-index material could act as a super-lens to create images of exceptional detail.
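Numerically, negative refraction is just Snell’s law, n₁ sin θ₁ = n₂ sin θ₂, evaluated with a negative n₂, which flips the sign of the refraction angle so the ray emerges on the same side of the surface normal as the incoming ray. The indices below are illustrative values, not measurements of any particular metamaterial.

```python
import math

def refraction_angle(n1, theta1_deg, n2):
    """Refraction angle in degrees from Snell's law: n1 sin(t1) = n2 sin(t2)."""
    return math.degrees(math.asin(n1 * math.sin(math.radians(theta1_deg)) / n2))

print(refraction_angle(1.0, 30.0, 1.5))    # ordinary glass: ~ +19.47 degrees
print(refraction_angle(1.0, 30.0, -1.5))   # negative index: ~ -19.47 degrees
```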
Although most early experiments were performed with microwaves, in 2007 a team led by physicist Henri Lezec achieved negative refraction for visible light. In order to create an object that acted as if it were made of negatively refracting material, Lezec’s team built a prism of layered metals perforated by a maze of nanoscale channels. This was the first time that physicists had devised a way to make visible light travel in a direction opposite from the way it traditionally bends when passing from one material to another. Some physicists suggest that the phenomenon may someday lead to optical microscopes for imaging objects as small as molecules and for creating cloaking devices that render objects invisible. Metamaterials were first theorized by Soviet physicist Victor Veselago in 1967. In 2008, scientists described a fishnet structure that had a negative refractive index for near-infrared light.
SEE ALSO Snell’s Law of Refraction (1621), Newton’s Prism (1672), Explaining the Rainbow (1304), Blackest Black (2008).
Artistic rendition of light-bending metamaterials developed by researchers working with the National Science Foundation. A layered material can cause light to refract, or bend, in a manner not seen with natural materials.
Ernst Gabor Straus (1922–1983), Victor L. Klee, Jr. (1925–2007), George Tokarsky (b. 1946)
American novelist Edith Wharton once wrote, “There are two ways of spreading light: to be the candle or the mirror that reflects it.” In physics, the law of reflection states that for mirror-like reflections, the angle at which the wave is incident on a surface is equal to the angle at which it is reflected. Imagine that we are in a dark room with flat walls covered with mirrors. The room has several turns and side-passages. If I light a candle somewhere in the room, would you be able to see it no matter where you stand in the room, and no matter what the room shape or in which side-passage you stand? Stated in terms of billiard balls, must there be a pool shot between any two points on a polygonal pool table?
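The bookkeeping behind every such light path is the mirror rule, which in vector form reads d′ = d − 2(d·n)n for a ray direction d striking a wall with unit normal n; here is a minimal sketch, with an illustrative ray and wall.

```python
def reflect(d, n):
    """Reflect a 2-D ray direction d off a wall with unit normal n."""
    dot = d[0] * n[0] + d[1] * n[1]
    return (d[0] - 2 * dot * n[0], d[1] - 2 * dot * n[1])

# A ray traveling down and to the right bounces off a horizontal mirror
# (normal pointing up); the outgoing angle equals the incoming angle.
print(reflect((1.0, -1.0), (0.0, 1.0)))   # (1.0, 1.0)
```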
If we happen to be trapped in an L-shaped room, you’d be able to see my candle no matter where you and I stand because the light ray can bounce off various walls to get to your eye. But can we imagine a perplexing polygonal room that is so complicated that a point exists that light never reaches? (For our problem, we consider a person and candle to be transparent and the candle to be a point source.)
This conundrum was first presented in print by mathematician Victor Klee in 1969, although it dates back to the 1950s when mathematician Ernst Straus pondered such problems. It seems incredible that no one knew the answer until 1995, when mathematician George Tokarsky of the University of Alberta discovered such a room that is not completely illuminable. His published floor plan of the room had 26 sides. Subsequently, Tokarsky found an example with 24 sides, and this strange room is the least-sided unilluminable polygonal room currently known. Physicists and mathematicians do not know if unilluminable polygonal rooms with fewer sides actually exist.
Other similar light-reflection problems exist. In 1958, mathematical physicist Roger Penrose and his colleague showed that unlit regions can exist in certain rooms with curved sides.
SEE ALSO Archimedes’ Burning Mirrors (212 B.C.), Snell’s Law of Refraction (1621), Brewster’s Optics (1815).
In 1995, mathematician George Tokarsky discovered this unilluminable 26-sided polygonal room. The room contains a location at which a candle can be held that leaves another point in the room in the dark.
Bruno Zumino (b. 1923), Bunji Sakita (1930–2002), Julius Wess (1934–2007)
“Physicists have conjured a theory about matter that sounds as if it were straight out of a Star Trek plot,” writes journalist Charles Seife. “It proposes that every particle has an as-yet-undiscovered doppelganger, a shadowy twin superpartner that has vastly different properties from the particles we know…. If supersymmetry is correct, these … particles are probably the source of exotic Dark Matter … that makes up almost all of the mass in the cosmos.”
According to the theory of supersymmetry (SUSY), every particle in the Standard Model has a supersymmetric heavier twin. For example, Quarks (the tiny particles that combine to form other subatomic particles such as protons and neutrons) would have a heavier partner particle called a squark, which is short for supersymmetric quark. The supersymmetric partner of an electron is called a selectron. SUSY pioneers include physicists B. Sakita, J. Wess, and B. Zumino.
Part of the motivation of SUSY is the sheer aesthetics of the theory, since it adds a satisfying symmetry with respect to the properties of the known particles. If SUSY did not exist, writes Brian Greene, “It would be as if Bach, after developing numerous intertwining voices to fill out an ingenious pattern of musical symmetry, left out the final resolving measure.” SUSY is also an important feature of String Theory in which some of the most basic particles, like quarks and electrons, can be modeled by inconceivably tiny, essentially one-dimensional entities called strings. Science-journalist Anil Ananthaswamy writes, “The key to the theory is that in the high-energy soup of the early universe, particles and their super-partners were indistinguishable. Each pair co-existed as single massless entities. As the universe expanded and cooled, though, this supersymmetry broke down. Partners and super-partners went their separate ways, becoming individual particles with a distinct mass all their own.”
Seife concludes, “If these shadowy partners stay undetectable, then the theory of supersymmetry would be merely a mathematical toy. Like the Ptolemaic universe, it would appear to explain the workings of the cosmos, yet it would not reflect reality.”
SEE ALSO String Theory (1919), Standard Model (1961), Dark Matter (1933), Large Hadron Collider (2009).
According to the theory of supersymmetry (SUSY), every particle in the Standard Model has a massive “shadow” particle partner. In the high-energy conditions of the early universe, particles and their super-partners were indistinguishable.
Alan Harvey Guth (b. 1947)
The Big Bang theory states that our universe was in an extremely dense and hot state 13.7 billion years ago, and space has been expanding ever since. However, the theory is incomplete because it does not explain several observed features of the universe. In 1980, physicist Alan Guth proposed that 10^−35 seconds (a hundred-billion-trillion-trillionth of a second) after the Big Bang, the universe expanded (or inflated) in a mere 10^−32 seconds from a size smaller than a proton to the size of a grapefruit, an increase in size of 50 orders of magnitude. Today, the observed temperature of the background radiation of the universe seems relatively constant, even though the distant parts of our visible universe are so far apart that they do not appear to have been connected, unless we invoke Inflation, which explains how these regions were originally in close proximity (and had reached the same temperature) before separating faster than the speed of light.
Additionally, Inflation explains why the universe appears to be, on the whole, quite “flat”—in essence why parallel light rays remain parallel, except for deviations near bodies with high gravitation. Any curvature in the early universe would have been smoothed away, like stretching the surface of a ball until it is flat. Inflation ended 10^−30 seconds after the Big Bang, allowing the universe to continue its expansion at a more leisurely rate.
Quantum fluctuations in the microscopic inflationary realm, magnified to cosmic size, become the seeds for larger structures in the universe. Science-journalist George Musser writes, “The process of Inflation never ceases to amaze cosmologists. It implies that giant bodies such as galaxies originated in teensy-weensy random fluctuations. Telescopes become microscopes, letting physicists see down to the roots of nature by looking up into the heavens.” Alan Guth writes that inflationary theory allows us to “consider such fascinating questions as whether other big bangs are continuing to happen far away, and whether it is possible in principle for a super-advanced civilization to recreate the big bang.”
SEE ALSO Big Bang (13.7 Billion B.C.), Cosmic Microwave Background (1965), Hubble’s Law of Cosmic Expansion (1929), Parallel Universes (1956), Dark Energy (1998), Cosmological Big Rip (36 Billion), Cosmic Isolation (100 Billion).
A map produced by the Wilkinson Microwave Anisotropy Probe (WMAP) showing a relatively uniform distribution of cosmic background radiation, produced by an early universe more than 13 billion years ago. Inflation theory suggests that the irregularities seen here are the seeds that became galaxies.
Richard Phillips Feynman (1918–1988), David Elieser Deutsch (b. 1953)
One of the first scientists to consider the possibility of a quantum computer was physicist Richard Feynman who, in 1981, wondered just how small computers could become. He knew that when computers finally reached the size of sets of atoms, the computer would be making use of the strange laws of quantum mechanics. Physicist David Deutsch in 1985 envisioned how such a computer would actually work, and he realized that calculations that took virtually an infinite time on a traditional computer could be performed quickly on a quantum computer.
Instead of using the usual binary code, which represents information as either a 0 or 1, a quantum computer uses qubits, which essentially are simultaneously both 0 and 1. Qubits are formed by the quantum states of particles, for example, the spin state of individual electrons. This superposition of states allows a quantum computer to effectively test every possible combination of qubits at the same time. A thousand-qubit system could test 2^1,000 potential solutions in the blink of an eye, thus vastly outperforming a conventional computer. To get a sense for the magnitude of 2^1,000 (which is approximately 10^301), note that there are only about 10^80 atoms in the visible universe.
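The scaling is easy to demonstrate: an n-qubit register is described by 2^n complex amplitudes, so even a uniform superposition quickly outgrows any classical memory. A small numpy sketch:

```python
import numpy as np

n = 10                                 # a modest register of 10 qubits
amps = np.ones(2**n) / np.sqrt(2**n)   # uniform superposition over all states

print(f"{n} qubits -> {amps.size} amplitudes")        # 1024
print(f"total probability: {np.sum(amps**2):.4f}")    # 1.0000

# A 1,000-qubit register would need 2^1,000 amplitudes, far more numbers
# than there are atoms in the visible universe.
print(f"2**1000 has {len(str(2**1000))} digits")      # 302 digits ~ 10^301
```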
Physicists Michael Nielsen and Isaac Chuang write, “It is tempting to dismiss quantum computation as yet another technological fad in the evolution of the computer that will pass in time…. This is a mistake, since quantum computation is an abstract paradigm for information processing that may have many different implementations in technology.”
Of course, many challenges still exist for creating a practical quantum computer. The slightest interaction or impurity from the surroundings of the computer could disrupt its operation. “These quantum engineers … will have to get information into the system in the first place,” writes author Brian Clegg, “then trigger the operation of the computer, and, finally, get the result out. None of these stages is trivial…. It’s as if you were trying to do a complex jigsaw puzzle in the dark with your hands tied behind your back.”
SEE ALSO Complementarity Principle (1927), EPR Paradox (1935), Parallel Universes (1956), Integrated Circuit (1958), Bell’s Theorem (1964), Quantum Teleportation (1993).
In 2009, physicists at the National Institute of Standards and Technology demonstrated reliable quantum information processing in the ion trap at the left center of this photograph. The ions are trapped inside the dark slit. By altering the voltages applied to each of the gold electrodes, scientists can move the ions between the six zones of the trap.
Sir Roger Penrose (b. 1931), Dan Shechtman (b. 1941)
I am often reminded of exotic quasicrystals when I read the Biblical description from Ezekiel 1:22 of an “awe-inspiring” or “terrible” crystal that is spread over the heads of living creatures. In the 1980s, quasicrystals shocked physicists with a surprising mixture of order and nonperiodicity, which means that they lack translational symmetry, and a shifted copy of the pattern will never match with its original.
Our story begins with Penrose tiles—two simple geometric shapes that, when put side by side, can cover a plane in a pattern with no gaps or overlaps, and the pattern does not repeat periodically like the simple hexagonal tile patterns on some bathroom floors. Penrose tilings, named after mathematical physicist Roger Penrose, have fivefold rotational symmetry, the same kind of symmetry exhibited by a five-pointed star. If you rotate the entire tile pattern by 72 degrees, it looks the same as the original. Author Martin Gardner writes, “Although it is possible to construct Penrose patterns with a high degree of symmetry …, most patterns, like the universe, are a mystifying mixture of order and unexpected deviations from order. As the patterns expand, they seem to be always striving to repeat themselves but never quite managing it.”
Before Penrose’s discovery, most scientists believed that crystals based on fivefold symmetry would be impossible to construct, but quasicrystals resembling Penrose tile patterns have since been discovered, and they have remarkable properties. For example, metal quasicrystals are poor conductors of heat, and quasicrystals can be used as slippery nonstick coatings.
In the early 1980s, scientists had speculated about the possibility that the atomic structure of some crystals might be based on a nonperiodic lattice. In 1982, materials scientist Dan Shechtman discovered a nonperiodic structure in the electron micrographs of an aluminum-manganese alloy with an obvious fivefold symmetry reminiscent of a Penrose tiling. At the time, this finding was so startling that some said it was as shocking as finding a five-sided snowflake.
The hexagonal symmetry of a beehive is periodic.
A generalization of Penrose tiling and a possible model for quasicrystals based on an icosahedral tiling made from two rhombohedra. (Courtesy of Edmund Harriss.)
SEE ALSO Kepler’s “Six-Cornered Snowflake” (1611), Bragg’s Law of Crystal Diffraction (1912).
Penrose tiling with two simple geometric shapes that, when put side-by-side, can cover a plane in a pattern with no gaps or overlaps and that does not repeat periodically.
Michael Boris Green (b. 1946), John Henry Schwarz (b. 1941)
“My ambition is to live to see all of physics reduced to a formula so elegant and simple that it will fit easily on the front of a T-shirt,” wrote physicist Leon Lederman. “For the first time in the history of physics,” writes physicist Brian Greene, we “have a framework with the capacity to explain every fundamental feature upon which the universe is constructed [and that may] explain the properties of the fundamental particles and the properties of the forces by which they interact and influence one another.”
The theory of everything (TOE) would conceptually unite the four fundamental forces of nature, which are, in decreasing order of strength: 1) the strong nuclear force—which holds the nucleus of the atom together, binds quarks into elementary particles, and makes the stars shine, 2) the electromagnetic force—between electric charges and between magnets, 3) the weak nuclear force—which governs the radioactive decay of elements, and 4) the gravitational force—which holds the Earth to the Sun. Around 1967, physicists showed how the electromagnetic and weak forces could be unified as the electroweak force.
Although not without controversy, one candidate for a possible TOE is M-theory, which postulates that the universe has ten dimensions of space and one of time. The notion of extra dimensions also may help resolve the hierarchy problem concerning why gravity is so much weaker than the other forces. One solution is that gravity leaks away into dimensions beyond our ordinary three spatial dimensions. If humanity did find the TOE, summarizing the four forces in a short equation, it would help physicists determine whether time machines are possible and what happens at the center of black holes, and, as astrophysicist Stephen Hawking said, it would give us the ability to “read the mind of God.”
The entry is arbitrarily dated as 1984, the date of an important breakthrough in superstring theory by physicists Michael Green and John Schwarz. M-theory, an extension of String Theory, was developed in the 1990s.
SEE ALSO Big Bang (13.7 Billion B.C.), Maxwell’s Equations (1861), String Theory (1919), Randall-Sundrum Branes (1999), Standard Model (1961), Quantum Electrodynamics (1948), God Particle (1964).
Particle accelerators provide information on subatomic particles to help physicists develop a Theory of Everything. Shown here is the Cockcroft-Walton generator, once used at Brookhaven National Laboratory to provide the initial acceleration to protons prior to injection into a linear accelerator and then a synchrotron.
Richard Buckminster “Bucky” Fuller (1895–1983), Robert Floyd Curl, Jr. (b. 1933), Harold (Harry) Walter Kroto (b. 1939), Richard Errett Smalley (1943–2005)
Whenever I think about buckyballs, I humorously imagine a team of microscopic soccer players kicking these rugged soccer-ball-shaped carbon molecules and scoring goals in a range of scientific fields. Buckminsterfullerene (or buckyball, or C60, for short) is composed of 60 carbon atoms and was made in 1985 by chemists Robert Curl, Harold Kroto, and Richard Smalley. Every carbon atom is at the corner of one pentagon and two hexagons. The name derives from inventor Buckminster Fuller, who created cage-like structures, like the geodesic dome, that reminded the C60 discoverers of the buckyball. C60 was subsequently discovered in everything from candle soot to meteorites, and researchers have been able to place selected atoms within the C60 structure, like a bird in a cage. Because C60 readily accepts and donates electrons, it may one day be used in batteries and electronic devices. The first cylindrical nanotubes made of carbon were obtained in 1991. These tubes are quite sturdy and may one day serve as molecular-scale electrical wires.
Buckyballs always seem to be in the news. Researchers have studied C60 derivatives for drug delivery and for inhibiting HIV (human immunodeficiency virus). C60 is of interest theoretically for various quantum mechanical and superconducting characteristics. In 2009, chemist Junfeng Geng and colleagues discovered convenient ways to form buckywires on an industrial scale by joining buckyballs like a string of pearls. According to Technology Review, “Buckywires ought to be handy for all kinds of biological, electrical, optical, and magnetic applications…. These buckywires look as if they could be hugely efficient light harvesters because of their great surface area and the way they can conduct photon-liberated electrons. [They may have] electronic applications in wiring up molecular circuit boards.”
Also in 2009, researchers developed a new highly conductive material consisting of a crystalline network of negatively charged buckyballs with positively charged lithium ions moving through the structure. Experiments continue on these and related structures to determine if they may one day serve as “superionic” materials for batteries of the future.
SEE ALSO Battery (1800), De Broglie Relation (1924), Transistor (1947).
Buckminsterfullerene (or buckyball, or C60, for short) is composed of 60 carbon atoms. Every carbon atom is at the corner of one pentagon and two hexagons.
Hans Moravec (b. 1948), Max Tegmark (b. 1967)
The mind-boggling concept of quantum immortality, and related concepts discussed by technologist Hans Moravec in 1987 and later by physicist Max Tegmark, relies on the many-worlds interpretation (MWI) of quantum mechanics discussed in the entry on Parallel Universes. This theory holds that whenever the universe (“world”) is confronted by a choice of paths at the quantum level, it actually follows the various possibilities, splitting into multiple universes.
According to proponents of quantum immortality, the MWI implies that we may be able to live virtually forever. For example, suppose you are in an electric chair. In almost all parallel universes, the electric chair will kill you. However, there is a small set of alternate universes in which you somehow survive—for example an electrical component may fail when the executioner pulls the switch. You are alive in, and thus able to experience, one of the universes in which the electric chair malfunctions. From your own point of view, you live virtually forever.
Consider a thought experiment. Don’t try this at home, but imagine that you are in your basement next to a hammer that is triggered or not triggered based on the decay of a radioactive atom. With each run of the experiment, a 50-50 chance exists that the hammer will smash your skull, and you will die. If the MWI is correct, then each time you conduct the experiment, you will be split into one universe in which the hammer smashes and kills you and another universe in which the hammer does not move. Perform the experiment a thousand times, and you may find yourself to be surprisingly alive. In the universe in which the hammer falls, you are dead. However, from the point of view of the living version of you, the hammer experiment will continue running and you will be alive, because at each branch in the multiverse there exists a version of you that survives. If the MWI is correct, you may slowly begin to notice that you never seem to die!
SEE ALSO Schrödinger’s Cat (1935), Parallel Universes (1956), Quantum Resurrection (100 Trillion).
According to proponents of quantum immortality, we can avoid the lurking specter of death virtually forever. Perhaps there is a small set of alternate universes in which you continue to survive, and thus from your own point of view, you live for an eternity.
Per Bak (1948–2002)
“Consider a collection of electrons, or a pile of sand grains, a bucket of fluid, an elastic network of springs, an ecosystem, or the community of stock-market dealers,” writes mathematical physicist Henrik Jensen. “Each of these systems consists of many components that interact through some kind of exchange of forces or information…. Is there some simplifying mechanism that produces a typical behavior shared by large classes of systems …?”
In 1987, physicists Per Bak, Chao Tang, and Kurt Wiesenfeld published their concept of self-organized criticality (SOC), partly in response to this kind of question. SOC is often illustrated with avalanches in a pile of sand grains. One by one, grains are dropped onto a pile until the pile reaches a stationary critical state in which its slope fluctuates about a constant angle. At this point, each new grain is capable of inducing a sudden avalanche at any of various size scales. Although some numerical models of sand piles exhibit SOC, the behavior of real sand piles has sometimes been ambiguous. In the famous 1995 Oslo Rice-pile Experiment, performed at the University of Oslo in Norway, piles of rice grains with a large aspect ratio exhibited SOC; however, for less-elongated grains, SOC was not found. Thus, SOC may be sensitive to the details of the system. When Sara Grumbacher and colleagues used tiny iron and glass spheres to study avalanche models, SOC was found in all cases.
SOC has been looked for in fields ranging from geophysics to evolutionary biology, economics, and cosmology, and may link many complex phenomena in which small changes result in sudden chain-reactions through the system. One key element of SOC involves power-law distributions. For a sandpile, this would imply that there will be far fewer large avalanches than small avalanches. For example, we might expect one avalanche a day involving 1000 grains but 100 avalanches involving 10 grains, and so on. In a wide variety of contexts, apparently complex structures or behaviors emerge in systems that can be characterized by simple rules.
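The sandpile picture can be captured in a few dozen lines of Python. The sketch below implements the Bak-Tang-Wiesenfeld rules named in this entry: grains drop one at a time, any site holding four or more grains topples and passes one grain to each neighbor, and grains fall off the edges. The grid size and run length are arbitrary choices for illustration.

```python
import random

N = 20
grid = [[0] * N for _ in range(N)]

def drop_grain():
    """Drop one grain at a random site; return the resulting avalanche size."""
    grid[random.randrange(N)][random.randrange(N)] += 1
    topplings = 0
    unstable = True
    while unstable:
        unstable = False
        for i in range(N):
            for j in range(N):
                if grid[i][j] >= 4:          # site topples
                    grid[i][j] -= 4
                    topplings += 1
                    unstable = True
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        if 0 <= i + di < N and 0 <= j + dj < N:
                            grid[i + di][j + dj] += 1   # neighbor gains a grain
    return topplings

sizes = [drop_grain() for _ in range(10000)][2000:]  # skip the build-up phase
small = sum(1 for s in sizes if 0 < s <= 10)
large = sum(1 for s in sizes if s > 100)
print(f"small avalanches (1-10 topplings): {small}")
print(f"large avalanches (>100 topplings): {large}")
```

Consistent with the power-law behavior described above, the tally of small avalanches dwarfs the tally of large ones.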
Past research in SOC has involved the stability of rice-pile formations.
SEE ALSO Rogue Waves (1826), Soliton (1834), Chaos Theory (1963).
Studies have shown that snow avalanches may exhibit self-organized criticality. The relationships between frequency and size of avalanches may be helpful for quantifying the risk of avalanches.
Kip Stephen Thorne (b. 1940)
As discussed in the entry on Time Travel, Kurt Gödel’s time machine, proposed in 1949, worked on huge size scales—the entire universe had to rotate to make it function. At the other extreme of time-travel devices are cosmic wormholes created from subatomic quantum foam as proposed by Kip Thorne and colleagues in 1988 in their prestigious Physical Review Letters article. In their paper, they describe a wormhole connecting two regions that exist in different time periods. Thus, the wormhole may connect the past to the present. Since travel through the wormhole is nearly instantaneous, one could use the wormhole for backward time travel. Unlike the time machine in H. G. Wells’ The Time Machine, the Thorne machine requires vast amounts of energy to use—energy that our civilization cannot possibly produce for many years to come. Nevertheless, Thorne optimistically writes in his paper: “From a single wormhole, an arbitrarily advanced civilization can construct a machine for backward time travel.”
The Thorne traversable wormhole might be created by enlarging submicroscopic wormholes that exist in the quantum foam that pervades all of space. Once enlarged, one end of the wormhole is accelerated to extremely high speeds and then returned. Another approach involves placing a wormhole mouth near a very high gravity body and then returning it. In both cases, time dilation (slowing) causes the end of the wormhole that has been moved to have aged less than the end that has not moved with respect to your laboratory. For example, a clock on the accelerated end might read 2012 while a clock on the stationary end could read 2020. If you leaped into the 2020 end, you would arrive back in the year 2012. However, you could not go back in time to a date before the wormhole time machine was created. One difficulty for creating the wormhole time machine is that in order to keep the throat of the wormhole open, a significant amount of negative energy (e.g. associated with so-called exotic matter) would be required—something not technologically feasible to create today.
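The time-dilation bookkeeping here is straightforward special relativity: a wormhole mouth carried at speed v for laboratory time t ages only t√(1 − v²/c²). The speed in the sketch below is an illustrative assumption, chosen so the numbers reproduce the eight-year 2012/2020 offset in the example above.

```python
import math

def aged_years(lab_years, v_over_c):
    """Proper time elapsed on a clock moving at speed v for lab_years."""
    return lab_years * math.sqrt(1.0 - v_over_c**2)

lab_years = 10.0   # time elapsed at the stationary mouth (assumed)
v = 0.98           # traveling mouth's speed as a fraction of c (assumed)

moving = aged_years(lab_years, v)
print(f"stationary mouth ages {lab_years:.0f} years")
print(f"traveling mouth ages {moving:.0f} years "
      f"(offset: {lab_years - moving:.0f} years)")   # ~8-year offset
```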
SEE ALSO Time Travel (1949), Casimir Effect (1948), Chronology Protection Conjecture (1992).
Artistic depiction of a wormhole in space. The wormhole may function as both a shortcut through space and a time machine. The two mouths (openings) of the wormhole are the yellow and blue regions.
Lyman Strong Spitzer, Jr. (1914–1997)
“Since the earliest days of astronomy,” write the folks at the Space Telescope Science Institute, “since the time of Galileo, astronomers have shared a single goal—to see more, see farther, see deeper. The Hubble Space Telescope’s launch in 1990 sped humanity to one of its greatest advances in that journey.” Unfortunately, ground-based telescope observations are distorted by the Earth’s atmosphere, which makes stars seem to twinkle and partially absorbs a range of electromagnetic radiation. Because the Hubble Space Telescope (HST) orbits outside of the atmosphere, it can capture high-quality images.
Incoming light from the heavens is reflected by the telescope’s concave main mirror (7.8 feet [2.4 meters] in diameter) onto a smaller mirror that then focuses the light through a hole in the center of the main mirror. The light then travels toward various scientific instruments for recording visible, ultraviolet, and infrared light. Deployed by NASA using a space shuttle, the HST is the size of a Greyhound bus, powered by solar arrays, and uses Gyroscopes to stabilize its orbit and point at targets in space.
Numerous HST observations have led to breakthroughs in astrophysics. Using the HST, scientists determined the age of the universe much more accurately than ever before by carefully measuring the distances to Cepheid variable stars. The HST has revealed protoplanetary disks that are likely to be the birthplaces of new planets, galaxies in various stages of evolution, optical counterparts of Gamma-Ray Bursts in distant galaxies, the identity of Quasars, the occurrence of extrasolar planets around other stars, and the existence of Dark Energy that appears to be causing the universe to expand at an accelerating rate. HST data established the prevalence of giant black holes at the centers of galaxies and the fact that the masses of these black holes are correlated with other galactic properties.
In 1946, American astrophysicist Lyman Spitzer, Jr. justified and promoted the idea of a space observatory. His dreams were realized in his lifetime.
SEE ALSO Big Bang (13.7 Billion B.C.), Telescope (1608), Nebular Hypothesis (1796), Gyroscope (1852), Cepheid Variables Measure the Universe (1912), Hubble’s Law of Cosmic Expansion (1929), Quasars (1963), Gamma-Ray Bursts (1967), Dark Energy (1998).
Astronauts Steven L. Smith and John M. Grunsfeld appear as small figures as they replace gyroscopes inside the Hubble Space Telescope (1999).
Chronology Protection Conjecture
Stephen William Hawking (b. 1942)
If time travel to the past is possible, how can various paradoxes be avoided, such as your traveling back in time and killing your grandmother, thus preventing your birth in the first place? Travel to the past may not be ruled out by known physical laws and may be permitted by hypothetical techniques that employ wormholes (shortcuts through space and time) or high gravities (see Time Travel). If time travel is possible, why don’t we see evidence of such time travelers? Novelist Robert Silverberg eloquently stated the potential problem of time-traveling tourists: “Taken to its ultimate, the cumulative audience paradox yields us the picture of an audience of billions of time-travelers piled up in the past to witness the Crucifixion, filling all the holy land and spreading out into Turkey, into Arabia, even to India and Iran…. Yet at the original occurrence of that event, no such hordes were present…. A time is coming when we will throng the past to the choking point. We will fill all our yesterdays with ourselves and crowd out our own ancestors.”
Partly because we have never seen a time traveler from the future, physicist Stephen Hawking formulated the Chronology Protection Conjecture, which proposes that the laws of physics prevent the creation of a time machine, particularly on macroscopic size scales. Today, debate continues as to the precise nature of the conjecture, or whether it is actually valid. Could paradoxes be avoided simply through a string of coincidences that prevented you from killing your grandmother even if you could go back in time—or would backward time travel be prohibited by some fundamental law of nature, such as a law concerning quantum mechanical aspects of gravity?
Perhaps if backward time travel were possible, our past would not be altered, because the traveler would enter a parallel universe the instant the past was entered. The original universe would remain intact, but the new one would include whatever acts the time traveler performed.
SEE ALSO Time Travel (1949), Fermi Paradox (1950), Parallel Universes (1956), Wormhole Time Machine (1988), Stephen Hawking on Star Trek (1993).
Stephen Hawking formulated the Chronology Protection Conjecture, which proposes that the laws of physics prevent the creation of a time machine, particularly on macroscopic size scales. Today, debate continues as to the precise nature of the conjecture.
Charles H. Bennett (b. 1943)
In Star Trek, when the captain had to escape from a dangerous situation on a planet, he asked a transporter engineer on the starship to “beam me up.” In seconds, the captain would disappear from the planet and reappear on the ship. Until recently, teleportation of matter was pure speculation.
In 1993, computer scientist Charles Bennett and colleagues proposed an approach in which a particle’s quantum state might be transmitted over a distance using quantum entanglement (discussed in EPR Paradox). Once a pair of particles (like photons) is entangled, a certain kind of change to one of them is reflected instantly in the other, and it doesn’t matter if the pair is separated by inches or by interplanetary distances. Bennett proposed a method for scanning and transmitting part of the information of a particle’s quantum state to its distant partner. The partner’s state is subsequently modified using the scanned information so that it ends up in the state of the original particle, while the first particle is no longer in its original state. Although only a particle’s state is transferred, we can think of this as if the original particle magically jumped to the new location, because two particles of the same kind that share identical quantum properties are indistinguishable. Because this method of teleportation includes a step in which information is sent to the receiver by conventional means (such as a laser beam), teleportation does not occur faster than the speed of light.
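For readers who enjoy tinkering, the protocol can be simulated in a few dozen lines. The sketch below is my own illustration of the standard teleportation circuit, not code or notation from Bennett’s paper; the qubit ordering and helper functions are assumptions made for clarity.

```python
import numpy as np

# Qubit 0 holds the unknown state; qubits 1 and 2 are the entangled pair
# shared by the sender and the distant receiver.
I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)   # bit flip
Z = np.array([[1, 0], [0, -1]], dtype=complex)  # phase flip
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def kron(*ops):
    """Tensor product of single-qubit operators, qubit 0 first."""
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def cnot(control, target, n=3):
    """CNOT on n qubits as an explicit permutation matrix."""
    dim = 2 ** n
    U = np.zeros((dim, dim), dtype=complex)
    for i in range(dim):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control]:
            bits[target] ^= 1
        j = sum(b << (n - 1 - k) for k, b in enumerate(bits))
        U[j, i] = 1
    return U

rng = np.random.default_rng(0)

# A random, unknown qubit state a|0> + b|1> to teleport.
a, b = rng.normal(size=2) + 1j * rng.normal(size=2)
norm = np.sqrt(abs(a) ** 2 + abs(b) ** 2)
a, b = a / norm, b / norm

# Attach the Bell pair (|00> + |11>)/sqrt(2) on qubits 1 and 2.
bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)
state = np.kron(np.array([a, b]), bell)

# Sender entangles her qubit with her half of the pair.
state = cnot(0, 1) @ state
state = kron(H, I, I) @ state

# Sender measures qubits 0 and 1, obtaining two classical bits.
probs = np.abs(state) ** 2
outcome = rng.choice(8, p=probs / probs.sum())
m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1

# Collapse the state onto the measured branch and renormalize.
mask = np.array([((i >> 2) & 1) == m0 and ((i >> 1) & 1) == m1
                 for i in range(8)])
state = np.where(mask, state, 0)
state /= np.linalg.norm(state)

# Receiver applies corrections chosen by the two classical bits. Sending
# those bits is the conventional, slower-than-light step.
if m1:
    state = kron(I, I, X) @ state
if m0:
    state = kron(I, I, Z) @ state

# Read off qubit 2: the amplitudes where qubits 0 and 1 equal (m0, m1).
base = (m0 << 2) | (m1 << 1)
received = np.array([state[base], state[base | 1]])
print("fidelity:", abs(np.vdot(received, [a, b])) ** 2)  # ~1.0
```

Note how the two measured classical bits must reach the receiver before the corrections can be applied; this is the conventional communication step mentioned above that keeps teleportation below light speed.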
In 1997, researchers teleported a photon, and, in 2009, teleported the state of one ytterbium ion to another ytterbium ion in unconnected enclosures a meter apart. Currently, it is far beyond our technical capacity to perform quantum teleportation for people or even viruses.
Quantum teleportation may one day facilitate long-range quantum communication in quantum computers, which could perform certain tasks, such as encryption calculations and information searches, much faster than traditional computers. Such computers may use quantum bits that exist in a superposition of states, like a coin that is simultaneously both tails and heads.
SEE ALSO Schrödinger’s Cat (1935), EPR Paradox (1935), Bell’s Theorem (1964), Quantum Computers (1981).
Researchers have been able to teleport a photon and also teleport information between two separate atoms (ytterbium ions) in unconnected enclosures. Centuries from now, will humans be teleported?
Stephen William Hawking (b. 1942)
According to surveys, astrophysicist Stephen Hawking is considered to be “the most famous scientist” at the start of the twenty-first century. Because of his inspiration, he is included in this book as a special entry. Like Einstein, Hawking also crossed over into popular culture, and he has appeared on many TV shows as himself, including Star Trek: The Next Generation. Because it is extremely rare for a top scientist to become a cultural icon, the title of this entry celebrates this aspect of his importance.
Many principles that concern Black Holes have been attributed to Stephen Hawking. Consider, for example, that the rate of evaporation of a Schwarzschild black hole of mass M can be formulated as dM/dt = −C/M², where C is a constant and t is time. Another of Hawking’s results states that the temperature of a black hole is inversely proportional to its mass. Physicist Lee Smolin writes, “A black hole the mass of Mount Everest would be no larger than a single atomic nucleus, but would glow with a temperature greater than the center of a star.”
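Integrating this evaporation law is a standard exercise that shows why small black holes die quickly; the steps below assume C remains constant over the hole’s lifetime.

```latex
% Separating variables in dM/dt = -C/M^2 and integrating from the
% initial mass M_0 down to zero:
M^2 \, dM = -C \, dt
\quad\Longrightarrow\quad
\int_{M_0}^{0} M^2 \, dM = -C \int_{0}^{t_{\mathrm{evap}}} dt
\quad\Longrightarrow\quad
t_{\mathrm{evap}} = \frac{M_0^3}{3C}
```

The lifetime grows as the cube of the initial mass, which is consistent with the temperature rising as the mass shrinks: small holes are hot and radiate themselves away fiercely.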
In 1974, Hawking determined that black holes should thermally create and emit subatomic particles, a process known as Hawking radiation, and in the same year he was elected one of the youngest fellows of the Royal Society in London. Black holes emit this radiation and eventually evaporate and disappear. From 1979 until 2009, Hawking was the Lucasian Professor of Mathematics at the University of Cambridge, a post once held by Sir Isaac Newton. Hawking has also conjectured that the universe has no edge or boundary in imaginary time, which suggests that “the way the universe began was completely determined by the laws of science.” Hawking wrote in the October 17, 1988, Der Spiegel that because “it is possible for the way the universe began to be determined by the laws of science …, it would not be necessary to appeal to God to decide how the universe began. This doesn’t prove that there is no God, only that God is not necessary.”
On Star Trek: The Next Generation, Stephen Hawking played poker with holographic representations of Isaac Newton and Albert Einstein.
SEE ALSO Newton as Inspiration (1687), Black Holes (1783), Einstein as Inspiration (1921), Chronology Protection Conjecture (1992).
U.S. President Barack Obama talks with Stephen Hawking in the White House before a ceremony presenting Hawking with the Presidential Medal of Freedom (2009). Hawking has a motor neuron disease that leaves him almost completely paralyzed.
Satyendra Nath Bose (1894–1974), Albert Einstein (1879–1955), Eric Allin Cornell (b. 1961), Carl Edwin Wieman (b. 1951)
The cold matter in a Bose-Einstein condensate (BEC) exhibits an exotic property in which atoms lose their identity and merge into a mysterious collective. To help visualize the process, imagine an ant colony with 100 ants. You lower the temperature to a frigid 170 billionths of a kelvin—colder than the deep reaches of interstellar space—and each ant morphs into an eerie cloud that spreads through the colony. Each ant cloud overlaps with every other one, so the colony is filled with a single dense cloud. No longer can you see individual insects; however, if you raise the temperature, the ant cloud differentiates and returns to the 100 individuals who continue to go about their ant business as if nothing untoward has happened.
A BEC is a state of matter of a very cold gas composed of bosons, particles that can occupy the same quantum state. At low temperatures, their wave functions can overlap, and interesting quantum effects can be observed on much larger size scales. First predicted by physicists Satyendra Nath Bose and Albert Einstein around 1925, BECs were not created in the laboratory until 1995, when physicists Eric Cornell and Carl Wieman used a gas of rubidium-87 atoms (which are bosons) cooled to near absolute zero. Driven by the Heisenberg Uncertainty Principle—which dictates that as the velocity of the gas atoms decreases, their positions become more uncertain—the atoms condense into one giant “superatom” behaving as a single entity, a quantum ice cube of sorts. Unlike an actual ice cube, the BEC is very fragile and easily disrupts to form a normal gas. Despite this, the BEC is being increasingly studied in numerous areas of physics, including quantum theory, superfluidity, the slowing of light pulses, and even the modeling of Black Holes.
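The transition temperature for an ideal gas of bosons can be estimated from a standard textbook formula. The short calculation below is my own illustration; the atomic density is an assumed, representative value, not a number from the text.

```python
import math

# Ideal-gas BEC transition temperature:
#   T_c = (2*pi*hbar^2 / (m*k_B)) * (n / zeta(3/2))**(2/3)
hbar = 1.0546e-34        # J*s
k_B = 1.3807e-23         # J/K
m = 87 * 1.6605e-27      # mass of one rubidium-87 atom, kg
n = 2.5e19               # atoms per cubic meter (assumed, illustrative)
zeta_3_2 = 2.6124        # Riemann zeta(3/2)

T_c = (2 * math.pi * hbar**2 / (m * k_B)) * (n / zeta_3_2) ** (2 / 3)
print(f"T_c ~ {T_c * 1e9:.0f} nanokelvin")   # ~160 nK for this density
```

For this assumed density the result lands close to the 170-billionths-of-a-kelvin chill described in the ant-colony analogy above.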
Researchers can create such ultracold temperatures using Lasers and magnetic fields to slow and trap atoms. The laser beam can actually exert pressure against the atoms, slowing and cooling them at the same time.
SEE ALSO Heisenberg Uncertainty Principle (1927), Superfluids (1937), Laser (1960).
In the July 14, 1995, issue of Science magazine, researchers from JILA (formerly known as the Joint Institute for Laboratory Astrophysics) reported the creation of a BEC. The graphic shows successive representations of the condensation (represented as a blue peak). JILA is operated by NIST (National Institute of Standards and Technology) and the University of Colorado at Boulder.
“A strange thing happened to the universe five billion years ago,” writes science journalist Dennis Overbye. “As if God had turned on an antigravity machine, the expansion of the cosmos speeded up, and galaxies began moving away from one another at an ever faster pace.” The cause appears to be dark energy—a form of energy that may permeate all of space and that is causing the universe to accelerate its expansion. Dark energy is so abundant that it accounts for nearly three-quarters of the total mass-energy of the universe. According to astrophysicist Neil deGrasse Tyson and astronomer Donald Goldsmith, “If cosmologists could only explain where the dark energy comes from … they could claim to have uncovered a fundamental secret of the universe.”
Evidence of the existence of dark energy came in 1998, during astrophysical observations of certain kinds of distant supernovae (exploding stars) that are receding from us at an accelerating rate. In the same year, American cosmologist Michael Turner coined the term dark energy.
If the acceleration of the universe continues, galaxies outside our local supercluster of galaxies will no longer be visible, because their recessional velocity will be greater than the speed of light. According to some scenarios, dark energy may eventually exterminate the universe in a Cosmological Big Rip as matter (in forms that range from atoms to planets) is torn apart. However, even without a Big Rip, the universe may become a lonely place (see Cosmic Isolation). Tyson writes, “Dark energy … will, in the end, undermine the ability of later generations to comprehend their universe. Unless contemporary astrophysicists across the galaxy keep remarkable records … future astrophysicists will know nothing of external galaxies…. Dark energy will deny them access to entire chapters from the book of the universe…. [Today] are we, too, missing some basic pieces of the universe that once was, [thus] leaving us groping for answers we may never find?”
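To see how expansion can carry galaxies out of view, consider Hubble’s law, v = H₀d, which already implies a distance at which the recession speed formally reaches the speed of light. The numbers below are my own illustration, using a commonly quoted present-day value of the Hubble constant rather than a figure from the text.

```python
# Distance at which Hubble's law gives a recession speed equal to c.
c = 2.998e5              # speed of light, km/s
H0 = 70                  # Hubble constant, km/s per Mpc (assumed value)

d_Mpc = c / H0                       # ~4,300 megaparsecs
d_Gly = d_Mpc * 3.262e6 / 1e9        # megaparsecs -> billions of light-years
print(f"v = c at roughly {d_Gly:.0f} billion light-years")   # ~14
```

An accelerating expansion sweeps ever more galaxies beyond this kind of horizon, which is why future astronomers may find the extragalactic sky gradually going dark.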
SEE ALSO Hubble’s Law of Cosmic Expansion (1929), Dark Matter (1933), Cosmic Microwave Background (1965), Cosmic Inflation (1980), Cosmological Big Rip (36 Billion), Cosmic Isolation (100 Billion).
SNAP (which stands for Supernova Acceleration Probe, a cooperative venture between NASA and the U.S. Department of Energy) is a proposed space observatory for measuring the expansion of the Universe and for elucidating the nature of dark energy.
Lisa Randall (b. 1962), Raman Sundrum (b. 1964)
The Randall-Sundrum (RS) brane theory attempts to address the hierarchy problem in physics, which concerns why the force of gravity appears to be so much weaker than other fundamental forces, such as the electromagnetic force and the strong and weak nuclear forces. Although gravity may seem strong, just remember that the electrostatic forces on a rubbed balloon are sufficient to hold it to a wall and defeat the gravity of the entire planet. According to the RS theory, gravity may be weak because it is concentrated in another dimension.
As evidence of the worldwide interest in the 1999 paper “A Large Mass Hierarchy from a Small Extra Dimension” by physicists Lisa Randall and Raman Sundrum, note that from 1999 to 2004, Dr. Randall was the most-cited theoretical physicist in the world for this and other works. Randall is also notable as the first tenured woman in the Princeton University physics department. One way to visualize the RS theory is to imagine that our ordinary world, with its three obvious dimensions of space and one dimension of time, is like a vast shower curtain, which physicists call a brane. You and I are like water droplets, spending our lives attached to the curtain, unaware that another brane may reside a short distance away in another spatial dimension. It is on this other, hidden brane that gravitons, the elementary particles that give rise to gravity, may primarily reside. The other kinds of particles in the Standard Model, like Electrons and protons, are on the visible brane in which our visible universe resides. Gravity is actually as strong as the other forces, but it is diluted as it “leaks” into our visible brane. Photons, which are responsible for our eyesight, are stuck to the visible brane, and thus we are not able to see the hidden brane.
So far, no one has actually detected a graviton. However, high-energy particle accelerators may allow scientists to identify this particle, which could also provide some evidence for the existence of additional dimensions.
SEE ALSO General Theory of Relativity (1915), String Theory (1919), Dark Matter (1933), Parallel Universes (1956), Standard Model (1961), Theory of Everything (1984), Large Hadron Collider (2009).
ATLAS is a particle detector at the site of the Large Hadron Collider. ATLAS is being used to search for possible evidence related to the origins of mass and the existence of extra dimensions.
Joshua Michael Aaron Ryder Wurman (b. 1960)
Dorothy’s fictional journey in The Wizard of Oz was not pure fantasy. Tornadoes are one of nature’s most destructive forces. When the early American pioneers traveled to the Central Plains and encountered tornadoes for the first time, some witnessed adult buffaloes being carried away into the air. The relatively low pressure within a tornado’s swirling vortex causes cooling and condensation, making the storm visible as a funnel.
On May 3, 1999, scientists recorded the fastest tornado wind speed near the ground—roughly 318 miles (512 kilometers) per hour. Led by atmospheric scientist Joshua Wurman, a team began to follow a developing supercell thunderstorm—that is, a thunderstorm with an accompanying mesocyclone, which is a deep, continuously rotating updraft located a few miles up in the atmosphere. Using truck-mounted Doppler radar equipment, Wurman fired pulses of microwaves toward the Oklahoma storm. The waves bounced off rain and other particles, changing their frequency and providing the researchers with an accurate estimate of wind speed at about 100 feet (30 meters) above the ground.
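The frequency change is tiny but measurable. As a back-of-the-envelope sketch (my own illustration; the transmitted frequency is an assumed, typical mobile-radar value, not one given in the text), the shift works out as follows.

```python
# Doppler shift for waves reflected from particles moving at speed v:
# delta_f = 2 * v * f / c (the factor 2 covers the trip out and back).
c = 2.998e8                    # speed of light, m/s
f_radar = 9.4e9                # transmitted frequency, Hz (assumed X-band)
v_wind = 512 * 1000 / 3600     # 512 km/h converted to m/s

delta_f = 2 * v_wind * f_radar / c
print(f"Doppler shift: {delta_f:.0f} Hz")   # ~8.9 kHz
```

A shift of a few kilohertz out of 9.4 billion hertz is about one part in a million, which is why precision radar electronics are needed to read wind speeds this way.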
Thunderstorms are generally characterized by rising air, called updrafts. Scientists continue to study why these updrafts become twisting whirlwinds in some thunderstorms but not others. Updraft air rises from the ground and interacts with higher-altitude winds blowing from a different direction. The tornado funnel that forms is associated with the low-pressure region as air and dust rush into the vortex. Although air is rising in a tornado, the funnel itself starts at the storm cloud and grows downward to the ground as a tornado forms.
Most tornadoes occur in Tornado Alley of the middle United States. Tornadoes may be created by heated air close to the ground trapped under localized colder air higher in the atmosphere. The heavier colder air spills around the warmer air region, and the warmer, lighter area rises rapidly to replace the cold air. Tornadoes sometimes form in the U.S. when warm, moist air from the Gulf of Mexico collides with cool, dry air from the Rocky Mountains.
SEE ALSO Barometer (1643), Rogue Waves (1826), Doppler Effect (1842), Buys-Ballot’s Weather Law (1857).
A tornado observed by the VORTEX-99 team on May 3, 1999, in central Oklahoma.
Oliver Heaviside (1850–1925), Arthur Edwin Kennelly (1861–1939), Marchese Guglielmo Marconi (1874–1937)
If one follows the writings of conspiracy theorists, the High Frequency Active Auroral Research Program, or HAARP, is the ultimate secret missile-defense tool, a means for disrupting the weather and communications around the world, or a method for controlling the minds of millions of people. However, the truth is somewhat less frightening but nonetheless fascinating.
HAARP is an experimental project funded, in part, by the U.S. Air Force, U.S. Navy, and the Defense Advanced Research Projects Agency (DARPA). Its purpose is to facilitate study of the ionosphere, one of the outermost layers of the atmosphere. Its 180-antenna array, located on a 35-acre (142,000-square-meter) plot in Alaska, became fully operational in 2007. HAARP employs a high-frequency transmitter system that beams 3.6 million watts of radio waves into the ionosphere, which starts about 50 miles (80 kilometers) above ground. The effects of heating the ionosphere can then be studied with sensitive instruments on the ground at the HAARP facility.
Scientists are interested in studying the ionosphere because of its effect on both civilian and military communications systems. In this region of the atmosphere, sunlight creates charged particles (see Plasma). The Alaska location was chosen partly because it exhibits a wide variety of ionosphere conditions for study, including aurora emissions (see Aurora Borealis). Scientists can adjust the signal of HAARP to stimulate reactions in the lower ionosphere, causing auroral currents to radiate low-frequency waves back to the Earth. Such waves reach deep into the ocean and might be used by the Navy to direct its submarine fleet—no matter how deeply submerged the sub.
In 1901, Guglielmo Marconi demonstrated transatlantic communication, and people wondered precisely how radio waves were able to bend around the Earth’s curvature. In 1902, engineers Oliver Heaviside and Arthur Kennelly independently suggested that a conducting layer existed in the upper atmosphere that would reflect radio waves back to the Earth. Today, the ionosphere facilitates long-range communications, and it can also lead to communication blackouts arising from the effect of a solar flare on the ionosphere.
The HAARP high-frequency antenna array.
SEE ALSO Aurora Borealis (1621), Plasma (1879), Green Flash (1882), Electromagnetic Pulse (1962).
HAARP research may lead to improved methods for allowing the U.S. Navy to more easily communicate with submarines deep beneath the ocean’s surface.
All manmade materials, even asphalt and charcoal, reflect some amount of light—but this has not prevented futurists from dreaming of a perfect black material that absorbs all the colors of light while reflecting nothing back. In 2008, reports began to circulate about a group of U.S. scientists who had made the “blackest black,” a superblack—the “darkest ever” substance known to science. The exotic material was created from carbon nanotubes that resemble sheets of carbon, only an atom thick, curled into a cylindrical shape. Theoretically, a perfect black material would absorb light of any wavelength shined on it at all angles.
Researchers at Rensselaer Polytechnic Institute and Rice University had constructed and studied a microscopic carpet of the nanotubes. In some sense, we can think of the “roughness” of this carpet as being adjusted to minimize the reflectance of light.
The black carpet contained tiny nanotubes that reflected only 0.045 percent of all light shined upon the substance. This black is more than 100 times darker than black paint! This “ultimate black” may one day be used to more efficiently capture energy from the Sun or to design more sensitive optical instruments. To limit reflection of light shining upon the superblack material, the researchers made the surface of the nanotube carpet irregular and rough. A significant portion of light is “trapped” in the tiny gaps between the loosely packed carpet strands.
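The “100 times darker” comparison is simple arithmetic, sketched below; the carpet’s reflectance comes from the text, while the black-paint figure is an assumed typical value rather than one the text supplies.

```python
# Comparing the nanotube carpet to ordinary black paint.
carpet_reflectance = 0.045 / 100   # 0.045 percent, from the text
paint_reflectance = 0.05           # ~5 percent, an assumed typical value

ratio = paint_reflectance / carpet_reflectance
print(f"the carpet reflects ~{ratio:.0f}x less light than black paint")  # ~111
```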
Early tests of the superblack material were conducted using visible light. However, materials that block or highly absorb other wavelengths of electromagnetic radiation may one day be used in defense applications for which the military seeks to make objects difficult to detect.
The quest to produce the blackest black never ends. In 2009, researchers from Leiden University demonstrated that a thin layer of niobium nitride (NbN) is ultra-absorbent, with a light absorption of almost 100% at certain viewing angles. Also in 2009, Japanese researchers described a sheet of carbon nanotubes that absorbed nearly every photon of a wide range of tested wavelengths.
SEE ALSO Electromagnetic Spectrum (1864), Metamaterials (1967), Buckyballs (1985).
In 2008, scientists created the darkest material known at that time, a carpet of carbon nanotubes more than 100 times darker than the paint on a black sports car. The quest for the blackest black continues.
According to Britain’s The Guardian newspaper, “Particle physics is the unbelievable in pursuit of the unimaginable. To pinpoint the smallest fragments of the universe you have to build the biggest machine in the world. To recreate the first millionths of a second of creation you have to focus energy on an awesome scale.” Author Bill Bryson writes, “Particle physicists divine the secrets of the Universe in a startlingly straightforward way: by flinging particles together with violence and seeing what flies off. The process has been likened to firing two Swiss watches into each other and deducing how they work by examining their debris.”
Built by the European Organization for Nuclear Research (usually referred to as CERN), the Large Hadron Collider (LHC) is the world’s largest and highest-energy particle accelerator, designed primarily to create collisions between opposing beams of protons (which are one kind of hadron). The beams circulate around the circular LHC ring inside a continuous vacuum, guided by powerful electromagnets, and the particles gain energy with every lap. The magnets exhibit Superconductivity and are cooled by a large liquid-helium cooling system. In their superconducting state, the wiring and joints conduct current with very little resistance.
The LHC resides within a tunnel 17 miles (27 kilometers) in circumference beneath the Franco-Swiss border and may allow physicists to gain a better understanding of the Higgs boson (also called the God Particle), a hypothetical particle that may explain why particles have mass. The LHC may also be used to find particles predicted by Supersymmetry, which suggests the existence of heavier partner particles for elementary particles (for example, selectrons are the predicted partners of electrons). Additionally, the LHC may be able to provide evidence for the existence of spatial dimensions beyond the three obvious ones. In some sense, by colliding the two beams, the LHC re-creates some of the conditions present just after the Big Bang. Teams of physicists analyze the particles created in the collisions using special detectors. In 2009, the first proton–proton collisions were recorded at the LHC.
SEE ALSO Superconductivity (1911), String Theory (1919), Cyclotron (1929), Standard Model (1961), God Particle (1964), Supersymmetry (1971), Randall-Sundrum Branes (1999).
Installing the ATLAS calorimeter for the LHC. The eight toroid magnets can be seen surrounding the calorimeter that is subsequently moved into the middle of the detector. This calorimeter measures the energies of particles produced when protons collide in the center of the detector.
Robert R. Caldwell (b. 1965)
The final fate of the universe is determined by many factors, including the degree to which Dark Energy drives the expansion of the universe. One possibility is that the acceleration will proceed at a constant rate, like a car that moves 1 mile per hour faster with each mile traveled. All galaxies will eventually recede from one another at speeds that can be greater than the speed of light, leaving each galaxy alone in a dark universe (see Cosmic Isolation). Eventually the stars all go out, like candles slowly burning away on a birthday cake. However, in other scenarios, the candles on the cake are ripped apart, as dark energy eventually destroys the universe in a “Big Rip” in which matter—ranging from subatomic particles to planets and stars—is torn apart. If the repulsive effect of dark energy were to somehow turn off, gravity would predominate in the cosmos, and the universe would collapse into a Big Crunch.
Physicist Robert Caldwell of Dartmouth College and colleagues first published the Big Rip hypothesis, in which the universe expands at an ever-increasing rate, in 2003. At the same time, the size of our observable universe shrinks and eventually becomes subatomic in size. Although the precise date of this cosmic demise is uncertain, one example in Caldwell’s paper is worked out for a universe that ends roughly 22 billion years from now.
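The 22-billion-year figure can be roughly reproduced from an approximate rip-time expression given in Caldwell and colleagues’ paper; the sketch below is my own, and the parameter values are illustrative choices commonly used with it.

```python
import math

# Approximate time remaining before the Big Rip:
#   t_rip - t_now ~ (2/3) / (|1 + w| * H0 * sqrt(1 - Omega_m)),
# where w < -1 describes "phantom" dark energy.
w = -1.5                 # dark-energy equation-of-state parameter (example)
omega_m = 0.3            # matter fraction of the universe (assumed)
H0 = 70                  # Hubble constant, km/s/Mpc (assumed)

# Convert H0 to inverse gigayears (1 Mpc ~ 3.0857e19 km, 1 Gyr ~ 3.156e16 s).
H0_per_Gyr = H0 / 3.0857e19 * 3.156e16

t_rip = (2 / 3) / (abs(1 + w) * H0_per_Gyr * math.sqrt(1 - omega_m))
print(f"time to Big Rip: {t_rip:.0f} billion years")   # ~22
```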
If the Big Rip eventually occurs, then about 60 million years before the end of the universe, gravity would become too weak to hold individual galaxies together. Roughly three months before the final rip, the Solar System would be gravitationally unbound. The Earth would explode 30 minutes before the end, and atoms would tear apart 10⁻¹⁹ seconds before everything ends, when the nuclear force that binds the Quarks in Neutrons and protons is finally overcome.
Note that in 1917, Albert Einstein actually suggested the idea of an anti-gravitational repulsion in the form of a cosmological constant to explain why the gravity of bodies in the universe did not cause the universe to contract.
SEE ALSO Big Bang (13.7 Billion B.C.), Hubble’s Law of Cosmic Expansion (1929), Cosmic Inflation (1980), Dark Energy (1998), Cosmic Isolation (100 Billion).
During the Big Rip, planets, stars, and all matter are torn apart.
Clive Staples “Jack” Lewis (1898–1963), Gerrit L. Verschuur (b. 1937), Lawrence M. Krauss (b. 1954)
The chances of an extraterrestrial race making physical contact with us may be quite small. Astronomer Gerrit Verschuur believes that if extraterrestrial civilizations are, like ours, in their infancy, then no more than 10 or 20 of them exist at this moment in our visible universe, and each such civilization is a lonely 2,000 light-years from its nearest neighbor. “We are,” says Verschuur, “effectively alone in the Galaxy.” In fact, C. S. Lewis, the Anglican lay theologian, proposed that the great distances separating intelligent life in the universe are a form of divine quarantine to “prevent the spiritual infection of a fallen species from spreading.”
Contact with other galaxies will be even more difficult in the future. Even if the Cosmological Big Rip does not occur, the expansion of our universe may pull galaxies away from each other faster than the speed of light, causing them to become invisible to us. Our descendants will observe that they live in a blob of stars, which results from gravity pulling a few nearby galaxies together into one supergalaxy. This blob may then sit in an endless and seemingly static blackness. The sky will not be totally black, because the stars in this supergalaxy will be visible, but Telescopes that peer beyond will see nothing. Physicists Lawrence Krauss and Robert Scherrer write that in 100 billion years a dead Earth may “float forlornly” through the supergalaxy, an “island of stars embedded in a vast emptiness.” Eventually, the supergalaxy itself disappears as it collapses into a Black Hole.
If we never encounter alien visitors, perhaps space-faring life is extremely rare and interstellar flight is extremely difficult. Another possibility is that there are signs of alien life all around us of which we are unaware. In 1973, radio astronomer John A. Ball proposed the zoo hypothesis, writing that “the perfect zoo (or wilderness area or sanctuary) would be one in which the fauna do not interact with, and are unaware of, their zoo-keepers.”
SEE ALSO Black Eye Galaxy (1779), Black Holes (1783), Dark Matter (1933), Fermi Paradox (1950), Dark Energy (1998), Cosmological Big Rip (36 Billion), Universe Fades (100 Trillion).
This Hubble Telescope image of the Antennae Galaxies beautifully illustrates a pair of galaxies undergoing a collision. Our descendants may observe that they live in a blob of stars, which results from gravity pulling together a few nearby galaxies into one supergalaxy.
Fred Adams (b. 1961), Stephen William Hawking (b. 1942)
The poet Robert Frost wrote, “Some say the world will end in fire, some say in ice.” The ultimate destiny of our universe depends on its geometrical shape, the behavior of Dark Energy, the amount of matter, and other factors. Astrophysicists Fred Adams and Gregory Laughlin have described the dark ending as our current star-filled cosmos eventually evolves to a vast sea of subatomic particles while stars, galaxies, and even Black Holes fade.
In one scenario, the death of the universe unfolds in several acts. In our current era, the energy generated by stars drives astrophysical processes. Even though our universe is about 13.7 billion years old, the vast majority of stars have barely begun to shine. Alas, all stars will die after 100 trillion years, and star formation will be halted because galaxies will have run out of gas—the raw material for making new stars. At this point, the stelliferous, or star-filled, era draws to a close.
During the second era, the universe continues to expand while energy reserves and galaxies shrink—and material clusters at galactic centers. Brown dwarfs, objects that don’t have sufficient mass to shine as stars do, linger on. By this point in time, gravity will have already drawn together the burned-out remains of dead stars, and these shrunken objects will have formed super-dense objects such as White Dwarfs, Neutron Stars, and black holes. Eventually even these white dwarfs and neutron stars disintegrate due to the decay of protons.
The third era—the era of black holes—is one in which gravity has turned entire galaxies into invisible, supermassive black holes. Through a process of energy radiation described by astrophysicist Stephen Hawking in the 1970s, black holes eventually dissipate their tremendous mass. This means a black hole with the mass of a large galaxy will evaporate completely in 10⁹⁸ to 10¹⁰⁰ years.
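That staggering timescale can be checked against the standard Hawking-lifetime formula; the calculation below is my own sketch, and the galaxy mass is an assumed, illustrative value.

```python
import math

# Standard Hawking evaporation lifetime: t ~ 5120*pi*G^2*M^3 / (hbar*c^4).
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8              # speed of light, m/s
hbar = 1.0546e-34        # J*s
M_sun = 1.989e30         # solar mass, kg
M = 1e11 * M_sun         # a large galaxy's mass in one hole (assumed)

t_sec = 5120 * math.pi * G**2 * M**3 / (hbar * c**4)
t_years = t_sec / 3.156e7
print(f"evaporation time ~ 10^{math.log10(t_years):.0f} years")  # ~10^100
```

The cubic dependence on mass is what stretches the answer to such an absurd number of digits: doubling the mass multiplies the lifetime by eight.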
What is left as the curtain closes on the black hole era? What fills the lonely cosmic void? Could any creatures survive? In the end, our universe may consist of a diffuse sea of Electrons.
SEE ALSO Black Holes (1783), Stephen Hawking on Star Trek (1993), Dark Energy (1998), Cosmological Big Rip (36 Billion).
Artistic view of gravitationally linked brown dwarfs, discovered in 2006.
Ludwig Eduard Boltzmann (1844–1906)
As discussed in the preceding few entries, the fate of the universe is unknown, and some theories posit the continual creation of universes that “bud” from our own. However, let’s focus on our own universe. One possibility is that our universe will continue to expand forever, and particles will become increasingly sparse. This seems like a sad end, doesn’t it? However, even in this empty universe, quantum mechanics tells us that residual energy fields will have random fluctuations. Particles will spring out of the vacuum as if out of nowhere. Usually, this activity is small, and large fluctuations are rare. But particles do emerge, and given a long amount of time, something big is bound to appear, for example, a hydrogen atom, or even a small molecule like ethylene, H₂C=CH₂. This may seem unimpressive, but if our future is infinite, we can wait a long time, and almost anything could pop into existence. Most of the gunk that emerges will be an amorphous mess, but every now and then, a tiny number of ants, planets, people, or Jupiter-sized brains made from gold will emerge. Given an infinite amount of time, you will reappear, according to physicist Katherine Freese. Quantum resurrection may await all of us. Be happy.
Today, serious researchers even contemplate the universe being overrun by Boltzmann Brains—naked, free-floating brains in outer space. Of course, Boltzmann Brains are highly improbable objects, and there is virtually no chance that one has appeared in the 13.7 billion years our universe has existed. According to one calculation by physicist Tom Banks, the probability of thermal fluctuations producing a brain is e raised to the power of −10²⁵. However, given an infinitely large space existing for an infinitely long time, these spooky conscious observers spring into existence. Today, there is a growing literature on the implications of Boltzmann Brains, kick-started by a 2002 publication by researchers Lisa Dyson, Matthew Kleban, and Leonard Susskind that seemed to imply that the typical intelligent observer may arise through thermal fluctuations, rather than cosmology and evolution.
SEE ALSO Casimir Effect (1948), Quantum Immortality (1987).
Boltzmann Brains, or thermally produced disembodied intelligences, may someday dominate our universe and outnumber all the naturally evolved intelligences that had ever existed before them.