• 16 •
Big troubles, imagined and real

Frank Wilczek

Modern physics suggests several exotic ways in which things could go terribly wrong on a very large scale. Most, but not all, are highly speculative, unlikely, or remote. Rare catastrophes might well have decisive influences on the evolution of life in the universe. So also might slow but inexorable changes in the cosmic environment in the future.

16.1 Why look for trouble?

Only a twisted mind will find joy in contemplating exotic ways to shower doom on the world as we know it. Putting aside that hedonistic motivation, there are several good reasons for physicists to investigate doomsday scenarios, including the following:

Looking before leaping: Experimental physics often aims to produce extreme conditions that do not occur naturally on Earth (or perhaps elsewhere in the universe). Modern high-energy accelerators are one example; nuclear weapons labs are another. With new conditions come new possibilities, including – perhaps – the possibility of large-scale catastrophe. Also, new technologies enabled by advances in physics and kindred engineering disciplines might trigger social or ecological instabilities. The wisdom of ‘Look before you leap’ is one important motivation for considering worst-case scenarios.

Preparing to prepare: Other drastic changes and challenges must be anticipated, even if we forego daring leaps. Such changes and challenges include exhaustion of energy supplies, possible asteroid or cometary impacts, orbital evolution and precessional instability of Earth, evolution of the Sun, and – in the very long run – some form of ‘heat death of the universe’. Many of these are long-term problems, but tough ones that, if neglected, will only loom larger. So we should prepare, or at least prepare to prepare, well in advance of crises.

Wondering: Catastrophes might leave a mark on cosmic evolution, in both the physical and (exo)biological senses. Certainly, recent work has established a major role for catastrophes in sculpting terrestrial evolution (see http://www.answers.com/topic/timeline-of-evolution). So to understand the universe, we must take into account their possible occurrence. In particular, serious consideration of Fermi’s question ‘Where are they?’, or logical pursuit of anthropic reasoning, cannot be separated from thinking about how things could go drastically wrong.

This will be a very unbalanced essay. The most urgent and realistic catastrophe scenarios, I think, arise from well-known and much-discussed dangers: the possible use of nuclear weapons and the alteration of global climate. Here those dangers will be mentioned only in passing. The focus instead will be scenarios for catastrophe that are not-so-urgent and/or highly speculative, but involve interesting issues in fundamental physics and cosmology. Thinking about these exotic scenarios needs no apology; but I do want to make it clear that I in no way want to exaggerate their relative importance, or to minimize the importance of plainer, more imminent dangers.

16.2 Looking before leaping

16.2.1 Accelerator disasters

Some accelerators are designed to be dangerous. Those accelerators are the colliders that bring together solid uranium or plutonium ‘beams’ to produce a fission reaction – in other words, nuclear weapons. Famously, the physicists of the Manhattan project made remarkably accurate estimates of the unprecedented amount of energy their ‘accelerator’ would release (Rhodes, 1986). Before the Alamogordo test, Enrico Fermi seriously considered the possibility that they might be producing a doomsday weapon that would ignite the atmosphere. He concluded, correctly, that it would not. (Later calculations that an all-out nuclear exchange between the United States and Soviet Union might produce a world-wide firestorm and/or inject enough dust into the atmosphere to produce nuclear winter were not universally accepted; fortunately, they have not been put to the test. Lesser, but still staggering catastrophe is certain [see http://www.sciencedaily.com/releases/2006/12/061211090729.htm].)

So physicists, for better or worse, got that one right. What about accelerators that are designed not as weapons, but as tools for research? Might they be dangerous?

When we are dealing with well-understood physics, we can do conventional safety engineering. Such engineering is not foolproof – bridges do collapse, astronauts do perish – but at least we foresee the scope of potential problems. In contrast, the whole point of great accelerator projects like the Brookhaven Relativistic Heavy Ion Collider (RHIC) or the Conseil Européen pour la Recherche Nucléaire (CERN) Large Hadron Collider (LHC) is to produce extreme conditions that take us beyond what is well understood. In that context, safety engineering enters the domain of theoretical physics.

In discussing possible dangers associated with frontier research accelerators, the first thing to say is that while these machines are designed to produce unprecedented density of energy, that density is packed within such a minuscule volume of space that the total energy is, by most standards, tiny. Thus a proton-proton collision at the LHC involves about 40 erg of energy – less energy than a dried pea acquires in falling through 1 centimetre. Were that energy to be converted into mass, it would amount to roughly 10⁻²⁰ grams. Furthermore, the high energy density is maintained only very briefly, roughly for 10⁻²⁴ seconds.
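These magnitudes are easy to verify with a back-of-envelope calculation. The sketch below assumes a 14 TeV proton-proton collision (the nominal LHC design energy) and a half-gram pea falling through 1 centimetre; both are illustrative round numbers rather than values taken from the text.

```python
# Back-of-envelope comparison of LHC collision energy scales.
E_COLLISION_TEV = 14.0        # assumed proton-proton collision energy
TEV_TO_JOULE = 1.602e-7       # 1 TeV expressed in joules
ERG_PER_JOULE = 1e7
C = 2.998e8                   # speed of light, m/s
G_ACCEL = 9.81                # gravitational acceleration, m/s^2

E_joule = E_COLLISION_TEV * TEV_TO_JOULE   # total collision energy in joules
E_erg = E_joule * ERG_PER_JOULE            # ... and in ergs (a few tens)

# Energy an (assumed) 0.5 g pea gains falling through 1 cm
pea_mass_kg = 0.5e-3
E_pea = pea_mass_kg * G_ACCEL * 0.01

# Mass equivalent of the collision energy, m = E / c^2, in grams
m_grams = E_joule / C**2 * 1e3

print(f"collision energy: {E_erg:.0f} erg ({E_joule:.1e} J)")
print(f"pea falling 1 cm: {E_pea:.1e} J")
print(f"mass equivalent:  {m_grams:.1e} g")
```

The collision energy comes out at a few tens of ergs, below what the falling pea acquires, and the mass equivalent is of order 10⁻²⁰ grams.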

To envision significant dangers that might be triggered with such limited input, we have to exercise considerable imagination. We have to imagine that a tiny seed disturbance will grow vast, by tapping into hidden instabilities. Yet the example of nuclear weapons should give pause. Nuclear weapons tap into instabilities that were totally unsuspected just five decades before their design. Both ultraheavy (for fission) and ultralight (for fusion) nuclei can release energy by cooking toward the more stable nuclei of intermediate size.

Three possibilities have dominated the discussion of disaster scenarios at research accelerators. I will now discuss each one briefly. Much more extensive, authoritative technical discussions are available (Jaffe et al., 2000).

Black holes: The effect of gravity is extraordinarily feeble in accelerator environments, according to both conventional theory and experiment. (That is to say, the results of precision experiments to investigate delicate properties of the other fundamental interactions agree with theoretical calculations that predict gravity is negligible, and therefore ignore it.) Conventional theory suggests that the relative strength of the gravitational compared to the electromagnetic interactions is, by dimensional analysis, approximately

GE²/α    (1)

where G is the Newton constant, α is the fine-structure constant, and we adopt units with ℏ = c = 1. Even for LHC energies E ~ 10 TeV, this is such a tiny ratio that more refined estimates are gratuitous.
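Equation (1) can be evaluated numerically. A minimal sketch, assuming the conventional value E_Planck ≈ 1.22 × 10¹⁶ TeV (so that G = 1/E_Planck² in natural units) and α ≈ 1/137:

```python
# Dimensional-analysis estimate of the gravity-to-electromagnetism
# strength ratio G*E^2/alpha, in natural units (hbar = c = 1).
E_PLANCK_TEV = 1.22e16          # assumed Planck energy in TeV
ALPHA = 1.0 / 137.036           # fine-structure constant

def gravity_em_ratio(E_tev):
    """G E^2 / alpha, with G = 1 / E_Planck^2 in natural units."""
    return (E_tev / E_PLANCK_TEV) ** 2 / ALPHA

ratio_lhc = gravity_em_ratio(10.0)            # LHC-scale energies, E ~ 10 TeV
ratio_planck = gravity_em_ratio(E_PLANCK_TEV) # the Planck scale itself

print(f"E ~ 10 TeV:   ratio ~ {ratio_lhc:.0e}")    # vanishingly small
print(f"E ~ E_Planck: ratio ~ {ratio_planck:.0f}") # order unity
```

At LHC energies the ratio is of order 10⁻²⁸, while at the Planck energy it rises to order unity, illustrating why that scale is associated with unification.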

But what if, within a future accelerator, the behaviour of gravity is drastically modified? Is there any reason to think it might? At present, there is no empirical evidence for deviations from general relativity, but speculation that drastic changes in gravity might set in starting at E ~ 1 TeV has been popular recently in parts of the theoretical physics community (Antoniadis et al., 1998; Arkani-Hamed et al., 1998; Randall and Sundrum, 1999). There are two broad motivations for such speculation:

Precocious unification? Physicists seek to unify their description of the different interactions. We have compelling ideas about how to unify our description of the strong, electromagnetic, and weak interactions. But the tiny ratio Equation (1) makes it challenging to put gravity on the same footing. One line of thought is that unification takes place only at extraordinarily high energies, namely, E ~ 10¹⁵ TeV, the Planck energy. At this energy, which also corresponds to an extraordinarily small distance of approximately 10⁻³³ cm, the coupling ratio is near unity. Nature has supplied a tantalizing hint, from the other interactions, that this is indeed the scale at which unification becomes manifest (Dimopoulos et al., 1981, 1991). A competing line of thought has it that unification could take place at lower energies. That could happen if Equation (1) fails drastically, in such a way that the ratio increases much more rapidly with energy. Then the deepest unity of physics would be revealed directly at energies that we might hope to access – an exciting prospect.

Extra dimensions: One way this could happen is if there are extra, curled-up spatial dimensions, as suggested by superstring theory. The short-distance behaviour of gravity will then be drastically modified at lengths below the size of the extra dimensions. Schematic world-models implementing these ideas have been proposed. While existing models appear highly contrived, at least to my eye, they can be fashioned so as to avoid blatant contradiction with established facts. They provide a concrete framework in which the idea that gravity becomes strong at accessible energies can be realized.

If gravity becomes strong at E ~ 1–10² TeV, then particle collisions at those energies could produce tiny black holes. As the black holes encounter and swallow up ordinary matter, they become bigger black holes … and we have ourselves a disaster scenario! Fortunately, a more careful look is reassuring. While the words ‘black hole’ conjure up the image of a great gaping maw, the (highly) conjectural black holes that might be produced at an accelerator are not like that. They would weigh roughly 10⁻²⁰ grams, with a Compton radius of 10⁻¹⁸ cm and, formally, a Schwarzschild radius of 10⁻⁴⁷ cm. (The fact that the Compton radius, associated with the irreducible quantum-mechanical uncertainty in position, is larger than the Schwarzschild radius, that is, the nominal radius inside which light is trapped, emphasizes the quantum-mechanical character of these ‘black holes’.) Accordingly, their capture zone is extremely small, and they would be very slow eaters. If, that is, these mini-black holes did not spontaneously decay. Small black holes are subject to the Hawking (1974) radiation process, and very small ones are predicted to decay very rapidly, on timescales of order 10⁻¹⁸ seconds or less. This is not enough time for a particle moving at the speed of light to encounter more than a few atoms. (And the probability of a hit is in any case minuscule, as mentioned above.) Recent theoretical work even suggests that there is an alternative, dual description of the higher-dimension gravity theory in terms of a four-dimensional strongly interacting quantum field theory, analogous to quantum chromodynamics (QCD) (Maldacena, 1998, 2005). In that description, the short-lived mini-black holes appear only as subtle features in the distribution of particles emerging from collisions; they are similar to the highly unstable resonances of QCD.
One might choose to question both the Hawking process and the dual description of strong gravity, both of which are theoretical conceptions with no direct empirical support. But these ideas are inseparable from, and less speculative than, the theories that motivate the mini-black hole hypothesis; so denying the former erodes the foundation of the latter.
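The size hierarchy quoted above follows from textbook formulas. A rough numerical check, assuming a hypothetical mini black hole carrying the full ~10 TeV collision energy:

```python
# Compton vs Schwarzschild radius for a hypothetical ~10 TeV mini black hole.
# Illustrates why such an object is quantum-mechanical, not a classical maw.
HBAR = 1.055e-34      # J s
C = 2.998e8           # m/s
G = 6.674e-11         # m^3 kg^-1 s^-2
TEV_TO_JOULE = 1.602e-7

m = 10.0 * TEV_TO_JOULE / C**2     # mass of a 10 TeV object in kg

r_compton = HBAR / (m * C)         # quantum 'size' of the object
r_schwarz = 2 * G * m / C**2       # nominal light-trapping radius

print(f"mass:                 {m * 1e3:.1e} g")
print(f"Compton radius:       {r_compton * 100:.1e} cm")
print(f"Schwarzschild radius: {r_schwarz * 100:.1e} cm")
```

The Compton radius comes out near 10⁻¹⁸ cm, dwarfing the formal Schwarzschild radius by roughly thirty orders of magnitude.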

Strangelets: From gravity, the feeblest force in the world of elementary particles, we turn to QCD, the strongest, to confront our next speculative disaster scenario. For non-experts, a few words of review are in order. QCD is our theory of the so-called strong interaction (Close, 2006). The ingredients of QCD are elementary particles called quarks and gluons. We have precise, well-tested equations that describe the behaviour of quarks and gluons. There are six different flavours of quarks. The flavours are denoted u, d, s, c, b, t for up, down, strange, charm, bottom, top. The heavy quarks c, b, and t are highly unstable. Though they are of great interest to physicists, they play no significant role in the present-day natural world, and they have not been implicated in any even remotely plausible disaster scenario. The lightest quarks, u and d, together with gluons, are the primary building blocks of protons and neutrons, and thus of ordinary atomic nuclei. Crudely speaking, protons are composites uud of two up quarks and a down quark, and neutrons are composites udd of one up quark and two down quarks. (More accurately, protons and neutrons are complex objects that contain quark-antiquark pairs and gluons in addition to those three ‘valence’ quarks.) The mass-energy of the (u, d) quarks is ~(5, 10) MeV, respectively, which is very small compared to the mass-energy of a proton or neutron of approximately 940 MeV. Almost all the mass of the nucleons – that is, protons and neutrons – arises from the energy of quarks and gluons inside, according to m = E/c².

Strange quarks occupy an intermediate position. This is because their intrinsic mass-energy, approximately 100 MeV, is comparable to the energies associated with interquark interactions. Strange quarks are known to be constituents of so-called hyperons. The lightest hyperon is the Λ, with a mass of approximately 1116 MeV. The internal structure of the Λ resembles that of nucleons, but it is built from uds rather than uud or udd.

Under ordinary conditions, hyperons are unstable, with lifetimes of the order 10⁻¹⁰ seconds or less. The Λ hyperon decays into a nucleon and a π meson, for example. This process involves conversion of an s quark into a u or d quark, and so it cannot proceed through the strong QCD interactions, which do not change quark flavours. (For comparison, a typical lifetime for particles – ‘resonances’ – that decay by strong interactions is ~10⁻²⁴ seconds.) Hyperons are not so extremely heavy or unstable that they play no role whatsoever in the natural world. They are calculated to be present with small but not insignificant density during supernova explosions and within neutron stars.

The reason for the presence of hyperons in neutron stars is closely related to the concept of ‘strangelets’, so let us briefly review it. It is connected to the Pauli exclusion principle. According to that principle, no two fermions can occupy the same quantum state. Neutrons (and protons) are fermions, so the exclusion principle applies to them. In a neutron star’s interior, very high pressures – and therefore very high densities – are achieved, due to the weight of the overlying layers. In order to obey the Pauli exclusion principle, then, nucleons must squeeze into additional quantum states, with higher energy. Eventually, the extra energy gets so high that it becomes economical to trade a high-energy nucleon for a hyperon. Although the hyperon has larger mass, the marginal cost of that additional mass-energy is less than the cost of the nucleon’s exclusion principle-energy.
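The trade-off just described can be estimated in the crudest possible approximation: treat the neutrons as a free Fermi gas, ignore all interactions, and ask when the neutron Fermi energy reaches the Λ mass. The particle masses below are standard; the free-gas model itself is only an illustrative sketch, not a realistic neutron-star calculation.

```python
# Rough threshold for Lambda hyperons appearing in neutron-star matter:
# Lambdas become economical once the neutron Fermi energy reaches m_Lambda.
import math

M_N = 939.6          # neutron mass, MeV
M_LAMBDA = 1115.7    # Lambda hyperon mass, MeV
HBARC = 197.327      # hbar*c in MeV fm

# Threshold condition: sqrt(pF^2 + m_n^2) = m_Lambda
p_fermi = math.sqrt(M_LAMBDA**2 - M_N**2)     # Fermi momentum, MeV/c

# Neutron number density at that Fermi momentum (two spin states):
# n = pF^3 / (3 pi^2), with pF in units of hbar*c
n_threshold = (p_fermi / HBARC) ** 3 / (3 * math.pi**2)   # fm^-3

NUCLEAR_DENSITY = 0.16   # fm^-3, ordinary nuclear matter

print(f"threshold Fermi momentum: {p_fermi:.0f} MeV/c")
print(f"threshold density: {n_threshold:.2f} fm^-3 "
      f"(~{n_threshold / NUCLEAR_DENSITY:.0f}x nuclear density)")
```

Even this crude estimate lands at several times nuclear density, consistent with hyperons appearing only deep inside neutron stars.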

At even more extreme densities, the boundaries between individual nucleons and hyperons break down, and it becomes more appropriate to describe matter directly in terms of quarks. Then we speak of quark matter. In quark matter, a story very similar to what we just discussed again applies, now with the lighter u and d quarks in place of nucleons and the s quarks in place of hyperons. There is a quantitative difference, however, because the s quark mass is less significant than the hyperon-nucleon mass difference. Quark matter is therefore expected to be rich in strange quarks, and is sometimes referred to as strange matter. Thus there are excellent reasons to think that under high pressure, hadronic – that is, quark-based – matter undergoes a qualitative change, in that it comes to contain a significant fraction of strange quarks. Bodmer and Witten (1984) posed an interesting question: Might this new kind of matter, with higher density and significant strangeness, which theory tells us is surely produced at high pressure, remain stable at zero pressure? If so, then the lowest energy state of a collection of quarks would not be the familiar nuclear matter, based on protons and neutrons, but a bit of strange matter – a strangelet. In a (hypothetical) strangelet, extra strange quarks permit higher density to be achieved, without severe penalty from the exclusion principle. If there are attractive interquark forces, gains in interaction energy might compensate for the costs of additional strange quark mass.

At first hearing, the answer to the question posed in the preceding paragraph seems obvious: No, on empirical grounds. For, if ordinary nuclear matter is not the most energetically favourable form, why is it the form we find around us (and, of course, in us)? Or, to put it another way, if ordinary matter could decay into matter based on strangelets, why has it not done so already? On reflection, however, the issue is not so clear. If only sufficiently large strangelets are favourable – that is, if only large strangelets have lower energy than ordinary matter containing the same net number of quarks – ordinary matter would have a very difficult time converting into them. Specifically, the conversion would require many simultaneous conversions of u or d quarks into strange quarks. Since each such quark conversion is a weak interaction process, the rate for multiple simultaneous conversions is incredibly small.

We know that for small numbers of quarks, ordinary nuclear matter is the most favourable form, that is, that small strangelets do not exist. If a denser, differently organized version of the Λ existed, for example, nucleons would decay into it rapidly, for that decay requires only one weak conversion. Experiments searching for an alternative ΛΛ – the so-called ‘H particle’ – have come up empty-handed, indicating that such a particle could not be much lighter than two separate Λ particles, let alone light enough to be stable (Borer et al., 1994).

After all this preparation, we are ready to describe the strangelet disaster scenario. A strangelet large enough to be stable is produced at an accelerator. It then grows by swallowing up ordinary nuclei, liberating energy. And there is nothing to stop it from continuing to grow until it produces a catastrophic explosion (and then, having burped, resumes its meal), or eats up a big chunk of Earth, or both.

For this scenario to occur, four conditions must be met:

1. Strange matter must be absolutely stable in bulk.

2. Strangelets would have to be at least metastable for modest numbers of quarks, because only objects containing small numbers of strange quarks might conceivably be produced in an accelerator collision.

3. Assuming that small metastable strangelets exist, it must be possible to produce them at an accelerator.

4. The stable configuration of a strangelet must be negatively charged (see below).

Only the last condition is not self-explanatory. A positively charged strangelet would resemble an ordinary atomic nucleus (though, to be sure, with an unusually small ratio of charge to mass). Like an ordinary atomic nucleus, it would surround itself with electrons, forming an exotic sort of atom. It would not eat other ordinary atoms, for the same reasons that ordinary atoms do not spontaneously eat one another – no cold fusion! – namely, the Coulomb barrier. As discussed in detail in Jaffe et al. (2000), there is no evidence that any of these conditions is met. Indeed, there is substantial theoretical evidence that none is met, and direct experimental evidence that neither condition (2) nor (3) can be met. Here are the summary conclusions of that report:

1. At present, despite vigorous searches, there is no evidence whatsoever for the existence of stable strange matter anywhere in the Universe.

2. On rather general grounds, theory suggests that strange matter becomes unstable in small lumps due to surface effects. Strangelets small enough to be produced in heavy ion collisions are not expected to be stable enough to be dangerous.

3. Theory suggests that heavy ion collisions (and hadron-hadron collisions in general) are not a good place to produce strangelets. Furthermore, it suggests that the production probability is lower at RHIC than at lower energy heavy ion facilities like the Alternating Gradient Synchrotron (AGS) and CERN. Models and data from lower energy heavy ion colliders indicate that the probability of producing a strangelet decreases very rapidly with the strangelet’s atomic mass.

4. It is overwhelmingly likely that the most stable configuration of strange matter has positive electric charge.

It is not appropriate to review all the detailed and rather technical arguments supporting these conclusions here, but two simple qualitative points, that suggest conclusions (3) and (4) above, are easy to appreciate.

Conclusion 3: To produce a strangelet at an accelerator, the crucial condition is that one produces a region where there are many strange quarks (and few strange antiquarks) and not too much excess energy. Too much energy density is disadvantageous, because it will cause the quarks to fly apart: when things are hot you get steam, not ice cubes. Although higher energy at an accelerator will make it easier to produce strange-antistrange quark pairs, higher energy also makes it harder to segregate quarks from antiquarks, and to suppress extraneous background (i.e., extra light quarks and antiquarks, and gluons). Thus conditions for production of strangelets are less favourable at frontier, ultra-high accelerators than at older, lower-energy accelerators – for which, of course, the (null) results are already in. For similar reasons, one does not expect that strangelets will be produced as cosmological relics of the big bang, even if they are stable in isolation.

Conclusion 4: The maximum leeway for avoiding Pauli exclusion, and the best case for minimizing other known interaction energies, occurs with equal numbers of u, d, and s quarks. This leads to electrical neutrality, since the charges of those quarks are +⅔, −⅓, and −⅓ times the charge of the proton, respectively. Since the s quark, being significantly heavier than the others, is more expensive, one expects that there will be fewer s quarks than in this otherwise ideal balance (and nearly equal numbers of u and d quarks, since both their masses are tiny). This leads to an overall positive charge.
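The charge counting behind Conclusion 4 can be made explicit. A small sketch, with arbitrary illustrative quark numbers:

```python
# Net electric charge of a quark lump, in units of the proton charge:
# u carries +2/3, d and s each carry -1/3, so Q = (2*n_u - n_d - n_s) / 3.
def lump_charge(n_u, n_d, n_s):
    """Charge of a lump from its quark content, in proton-charge units."""
    return (2 * n_u - n_d - n_s) / 3

# Ideal balance: equal numbers of u, d, s -> exactly neutral
print(lump_charge(100, 100, 100))   # 0.0

# Fewer (expensive) strange quarks, with u ~ d -> net positive charge
print(lump_charge(100, 100, 80))    # positive

# Sanity check on ordinary nucleons: proton (uud) and neutron (udd)
print(lump_charge(2, 1, 0), lump_charge(1, 2, 0))
```

Depleting the strange-quark fraction from the neutral balance, while keeping u and d nearly equal, always pushes the charge positive.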

The strangelet disaster scenario, though ultimately unrealistic, is not silly. It brings in subtle and interesting physical questions, that require serious thought, calculation, and experiment to address in a satisfactory way. Indeed, if the strange quark were significantly lighter than it is in our world, then big strangelets would be stable, and small ones at least metastable. In such an alternative universe, life in anything like the form we know it, based on ordinary nuclear matter, might be precarious or impossible.

Vacuum instability: In the equations of modern physics, the entity we perceive as empty space, and call vacuum, is a highly structured medium full of spontaneous activity and a variety of fields. The spontaneous activity is variously called quantum fluctuations, zero-point motion, or virtual particles. It is directly responsible for several famous phenomena in quantum physics, including Casimir forces, the Lamb shift, and asymptotic freedom. In a more abstract sense, within the framework of quantum field theory, all forces can be traced to the interaction of real with virtual particles (Feynman, 1988; Wilczek, 1999; Zee, 2003).

The space-filling fields can also be viewed as material condensates, just as an electromagnetic field can be considered as a condensate of photons. One such condensation is understood deeply. It is the quark-antiquark condensate that plays an important role in strong interaction theory. A field of quark-antiquark pairs of opposite helicity fills space-time.1 That quark-antiquark field affects the behaviour of particles that move through it. That is one way we know it is there! Another is by direct solution of the well-established equations of QCD. Low-energy π mesons can be modeled as disturbances in the quark-antiquark field; many properties of π mesons are successfully predicted using that model.

Another condensate plays a central role in our well-established theory of electroweak interactions, though its composition is presently unknown. This is the so-called Higgs condensate. The equations of the established electroweak theory indicate that the entity we perceive as empty space is in reality an exotic sort of superconductor. Conventional superconductors are super(b) conductors of electric currents, the currents that photons care about. Empty space, we learn in electroweak physics, is a super(b) conductor of other currents: specifically, the currents that W and Z bosons care about. Ordinary superconductivity is mediated by the flow of paired electrons – Cooper pairs – in a metal. Cosmic superconductivity is mediated by the flow of something else. No presently known form of matter has the right properties to do the job; for that purpose, we must postulate the existence of new form(s) of matter. The simplest hypothesis, at least in the sense that it introduces the fewest new particles, is the so-called minimal standard model. In the minimal standard model, we introduce just one new particle, the so-called Higgs particle. According to this model, cosmic superconductivity is due to a condensation of Higgs particles. More complex hypotheses, notably including low-energy supersymmetry, predict several contributors to the electroweak condensate, and with them a complex of several ‘Higgs particles’, not just one. A major goal of ongoing research at the Fermilab Tevatron and the CERN LHC is to find the Higgs particle, or particles.

Since ‘empty’ space is richly structured, it is natural to consider whether that structure might change. Other materials exist in different forms – might empty space? To put it another way, could empty space exist in different phases, supporting in effect different laws of physics?

There is every reason to think the answer is ‘Yes’. We can calculate, for example, that at sufficiently high temperature the quark-antiquark condensate of QCD will boil away. And although the details are much less clear, essentially all models of electroweak symmetry breaking likewise predict that at sufficiently high temperatures the Higgs condensate will boil away. Thus in the early moments of the big bang, empty space went through several different phases, with qualitatively different laws of physics. (For example, when the Higgs condensate melts, the W and Z bosons become massless particles, on the same footing as photons. So then the weak interactions are no longer so weak!) Somewhat more speculatively, the central idea of inflationary cosmology is that in the very early universe, empty space was in a different phase, in which it had non-zero energy density and negative pressure.

The empirical success of inflationary cosmology therefore provides circumstantial evidence that empty space once existed in a different phase.

More generally, the structure of our basic framework for understanding fundamental physics, relativistic quantum field theory, comfortably supports theories in which there are alternative phases of empty space. The different phases correspond to different configurations of fields (condensates) filling space. For example, attractive ideas about unification of the apparently different forces of Nature postulate that these forces appear on the same footing in the primary equations of physics, but that in their solution, the symmetry is spoiled by space-filling fields. Superstring theory, in particular, supports vast numbers of such solutions, and postulates that our world is described by one of them: for certainly, our world exhibits much less symmetry than the primary equations of superstring theory.

Given, then, that empty space can exist in different phases, it is natural to ask: Might our phase, that is, the form of physical laws that we presently observe, be suboptimal? Might, in other words, our vacuum be only metastable? If so, we can envisage a terminal ecological catastrophe, when the field configuration of empty space changes, and with it the effective laws of physics, instantly and utterly destabilizing matter and life in the form we know it.

How could such a transition occur? The theory of empty space transitions is entirely analogous to the established theory of other, more conventional first-order phase transitions. Since our present-day field configuration is (at least) metastable, any more favourable configuration would have to be significantly different, and to be separated from ours by intermediate configurations that are less favourable than ours (i.e., that have higher energy density). It is most likely that a transition to the more favourable phase would begin with the emergence of a rather small bubble of the new phase, so that the required rearrangement of fields is not too drastic and the energetic cost of intermediate configurations is not prohibitive. On the other hand the bubble cannot be too small, for the volume energy gained in the interior must compensate unfavourable surface energy (since between the new phase and the old metastable phase one has unfavourable intermediate configurations).
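The surface-versus-volume competition just described is the standard thin-wall bookkeeping for first-order phase transitions. A sketch with arbitrary illustrative values of the surface tension σ and the volume energy-density gain ε (not physical numbers for any real vacuum):

```python
# Thin-wall sketch of vacuum-bubble energetics: a bubble of radius R pays
# surface energy 4*pi*sigma*R^2 and gains volume energy (4/3)*pi*eps*R^3.
import math

def bubble_energy(R, sigma, eps):
    """Net energy cost of a bubble of new phase with radius R."""
    return 4 * math.pi * sigma * R**2 - (4 / 3) * math.pi * eps * R**3

def critical_radius(sigma, eps):
    """Radius where dE/dR = 0: larger bubbles grow, smaller ones recollapse."""
    return 2 * sigma / eps

sigma, eps = 1.0, 0.5              # arbitrary illustrative units
R_c = critical_radius(sigma, eps)  # here, 4.0

print(f"critical radius: {R_c}")
# Below R_c the surface term dominates, so the cost still rises with R;
# well above R_c the volume term wins and growth is energetically downhill.
print(f"E at 0.5*R_c: {bubble_energy(0.5 * R_c, sigma, eps):.1f}")
print(f"E at 2.0*R_c: {bubble_energy(2.0 * R_c, sigma, eps):.1f}")
```

Sub-critical bubbles shrink away; only a bubble that nucleates above the critical radius can expand without limit.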

Once a sufficiently large bubble is formed, it could expand. Energy liberated in the bulk transition between old and new vacuum goes into accelerating the wall separating them, which quickly attains near-light speed. Thus the victims of the catastrophe receive little warning: by the time they can see the approaching bubble, it is upon them.

How might the initial bubble form? It might form spontaneously, as a quantum fluctuation. Or it might be nucleated by some physical event, such as – perhaps? – the deposition of lots of energy into a small volume at an accelerator.

There is not much we can do about quantum fluctuations, it seems, but it would be prudent to refrain from activity that might trigger a terminal ecological catastrophe. While the general ideas of modern physics support speculation about alternative vacuum phases, at present there is no concrete candidate for a dangerous field whose instability we might trigger. We are surely in the most stable state of QCD. The Higgs field or fields involved in electroweak symmetry breaking might have instabilities – we do not yet know enough about them to be sure. But the difficulty of producing even individual Higgs particles is already a crude indication that triggering instabilities which require coordinated condensation of many such particles at an accelerator would be prohibitively difficult. In fact there seems to be no reliable calculation of rates of this sort – that is, rates for nucleating phase transitions from particle collisions – even in model field theories. It is an interesting problem of theoretical physics. Fortunately, the considerations of the following paragraph assure us that it is not a practical problem for safety engineering.

As the matching bookend to our initial considerations on size, energy, and mass, let us conclude our discussion of speculative accelerator disaster scenarios with another simple and general consideration, almost independent of detailed theoretical considerations, which makes it implausible that any of these scenarios apply to reality. It is that Nature has, in effect, been doing accelerator experiments on a grand scale for a very long time (Hut, 1984; Hut and Rees, 1984). For cosmic rays achieve energies that even the most advanced terrestrial accelerators will not match any time soon. (For experts: Even by the criterion of center-of-mass energy, collisions of the highest energy cosmic rays with stationary targets beat top-of-the-line accelerators.) In the history of the universe, many collisions have occurred over a very wide spectrum of energies and ambient conditions (Jaffe et al., 2000). Yet in the history of astronomy, no candidate unexplained catastrophe has ever been observed. And many such cosmic rays have impacted Earth, yet Earth abides and we are here. This is reassuring (Bostrom and Tegmark, 2005).
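The point for experts can be made quantitative with the fixed-target formula √s ≈ √(2Emₚ), valid for an ultra-relativistic projectile. The sketch below assumes a ~10²⁰ eV cosmic-ray proton, roughly the highest energy observed, striking a proton at rest:

```python
# Centre-of-mass energy for a cosmic-ray proton on a stationary proton,
# sqrt(s) ~ sqrt(2 * E * m_p) in natural units, valid for E >> m_p.
import math

M_P_TEV = 0.938e-3               # proton rest energy, TeV

def cm_energy_fixed_target(E_tev):
    """sqrt(s) for an ultra-relativistic proton striking a proton at rest."""
    return math.sqrt(2 * E_tev * M_P_TEV)

E_cosmic_tev = 1e8               # ~1e20 eV: roughly the highest-energy cosmic rays
sqrt_s = cm_energy_fixed_target(E_cosmic_tev)

print(f"cosmic ray on a stationary proton: sqrt(s) ~ {sqrt_s:.0f} TeV")
print("LHC proton-proton collisions:      sqrt(s) ~ 14 TeV")
```

Even after the fixed-target penalty, the highest-energy cosmic rays reach centre-of-mass energies hundreds of times beyond the LHC.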

16.2.2 Runaway technologies

Neither general source of reassurance – neither minuscule scale nor natural precedent – necessarily applies to other emergent technologies.

Technologies that are desirable in themselves can get out of control, leading to catastrophic exhaustion of resources or accumulation of externalities. Jared Diamond has argued that history presents several examples of this phenomenon (Diamond, 2005), on scales ranging from small island cultures to major civilizations. The power and agricultural technologies of modern industrial civilization appear to have brought us to the cusp of severe challenges of both these sorts, as water resources, not to speak of oil supplies, come under increasing strain, and carbon dioxide, together with other pollutants, accumulates in the biosphere. Here it is not a question of whether dangerous technologies will be employed – they already are – but on what scale, how rapidly, and how we can manage the consequences.

As we have already discussed in the context of fundamental physics at accelerators, runaway instabilities could also be triggered by inadequately considered research projects. In that particular case, the dangers seem farfetched. But it need not always be so. Vonnegut’s ‘Ice 9’ was a fictional example (Vonnegut, 1963), very much along the lines of the runaway strangelet scenario – a new form of water that converts the old. An artificial protein that turned out to catalyse crystallization of natural proteins – an artificial ‘prion’ – would be another example of the same concept, from yet a different realm of science.

Perhaps more plausibly, runaway technological instabilities could be triggered as an unintended byproduct of applications (as in the introduction of cane toads to Australia) or sloppy practices (as in the Chernobyl disaster); or by deliberate pranksterism (as in computer virus hacking), warfare, or terrorism.

Two technologies presently entering the horizon of possibility have, by their nature, especially marked potential to lead to runaways:

Autonomous, capable robots: As robots become more capable and autonomous, and as their goals are specified more broadly and abstractly, they could become formidable antagonists. The danger potential of robots developed for military applications is especially evident. This theme has been much explored in science fiction, notably in the writings of Isaac Asimov (1950) and in the Star Wars movies.

Self-reproducing machines, including artificial organisms: The danger posed by sudden introduction of new organisms into unprepared populations is exemplified by the devastation of New World populations by smallpox from the Old World, among several other catastrophes that have had a major influence on human history. This is documented in William McNeill’s (1976) marvelous Plagues and Peoples. Natural organisms that have been re-engineered, or ‘machines’ of any sort capable of self-reproduction, are by their nature poised on the brink of exponential spread. Again, this theme has been much explored in science fiction, notably in Greg Bear’s Blood Music (Bear, 1985). The chain reactions of nuclear technology also belong, in a broad conceptual sense, to this class – though they involve exceedingly primitive ‘machines’, that is, self-reproducing nuclear reactions.

16.3 Preparing to prepare

Runaway technologies: The problem of runaway technologies is multi-faceted. We have already mentioned several quite distinct potential instabilities, involving different technologies, that have little in common. Each deserves separate, careful attention, and perhaps there is not much useful that can be said in general. I will make just one general comment. The great majority of people – and of scientists and engineers – are well-intentioned; they would much prefer not to be involved in any catastrophe, technological or otherwise. Broad-based democratic institutions and open exchange of information can coalesce this distributed good intention into an effective instrument of action.

Impacts: We have discussed some exotic – and, it turns out, unrealistic – physical processes that could cause global catastrophes. The possibility that asteroids or other cosmic debris might impact Earth, and cause massive devastation, is not academic – it has happened repeatedly in the past. We now have the means to address this danger, and certainly should do so (http://impact.arc.nasa.gov/intro.cfm).

Astronomical instabilities: Besides impacts, there are other astronomical effects that will cause Earth to become much less hospitable on long time scales. Ice ages can result from small changes in Earth’s obliquity, the eccentricity of its orbit, and the alignment of its axis with the eccentricity (which varies as the axis precesses) (see http://www.aip.org/history/climate/cycles.htm). These changes occur on time scales of tens of thousands of years. At present the obliquity oscillates within the range 22.1–24.5°. However, as the day lengthens and the Moon recedes, over time scales of a billion years or so, the obliquity enters a chaotic zone, and much larger changes occur (Laskar et al., 1993). Presumably, this leads to climate changes that are both extreme and highly variable. Finally, over yet longer time scales, our Sun evolves, gradually becoming hotter and eventually entering a red giant phase.

These adverse and at least broadly predictable changes in the global environment obviously pose great challenges for the continuation of human civilization. Possible responses include moving (underground, underwater, or into space), re-engineering our physiology to be more tolerant (either through bio-engineering, or through man-machine hybridization), or some combination thereof.

Heat death: Over still longer time scales, some version of the ‘heat death of the universe’ seems inevitable. This exotic catastrophe is the ultimate challenge facing the mind in the universe.

Stars will burn out, the material for making new ones will be exhausted, the universe will continue to expand – it now appears, at an accelerating rate – and, in general, useful energy will become a scarce commodity. The ultimate renewable technology is likely to be pure thought, as I will now describe.

It is reasonable to suppose that the goal of a future-mind will be to optimize a mathematical measure of its wellbeing or achievement, based on its internal state. (Economists speak of ‘maximizing utility’, normal people of ‘finding happiness’.) The future-mind could discover, by its powerful introspective abilities or through experience, its best possible state – the Magic Moment – or several excellent ones. It could build up a library of favourite states. That would be like a library of favourite movies, but more vivid, since to recreate magic moments accurately would be equivalent to living through them. Since the joys of discovery, triumph, and fulfillment require novelty, to re-live a magic moment properly, the future-mind would have to suppress memory of that moment’s previous realizations.

A future-mind focused upon magic moments is well matched to the limitations of reversible computers, which expend no energy. Reversible computers cannot store new memories, and they are as likely to run backwards as forwards. Those limitations bar adaptation and evolution, but invite eternal cycling through magic moments. Since energy becomes a scarce quantity in an expanding universe, that scenario might well describe the long-term future of mind in the cosmos.

16.4 Wondering

A famous paradox led Enrico Fermi to ask, with genuine puzzlement, ‘Where are they?’ He was referring to advanced technological civilizations in our Galaxy, which he reckoned ought to be visible to us.

Simple considerations strongly suggest that technological civilizations whose works are readily visible throughout our Galaxy (that is, visible given observational techniques we have available now, or soon will) ought to be common. But they are not. Like the famous dog that did not bark in the night-time, the absence of such advanced technological civilizations speaks through silence.

Main-sequence stars like our Sun provide energy at a stable rate for several billion years. There are billions of such stars in our Galaxy. Although our census of planets around other stars is still in its infancy, it seems likely that many millions of these stars host, within their so-called habitable zones, Earth-like planets. Such bodies meet the minimal requirements for life in something close to the form we know it, notably including the possibility of liquid water.
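The scale of this estimate can be made concrete with a Drake-style product. All three factors below are illustrative assumptions chosen only to show how 'many millions' arises; none comes from the text.

```python
# Illustrative Drake-style estimate (every factor is an assumption):
n_sunlike_stars = 1e10    # rough count of Sun-like stars in the Galaxy
frac_with_planets = 0.5   # assumed fraction hosting planetary systems
frac_habitable = 0.02     # assumed fraction with an Earth-like planet
                          # inside the habitable zone

n_candidates = n_sunlike_stars * frac_with_planets * frac_habitable
print(f"rough count of candidate Earth-like planets: {n_candidates:.0e}")
```

Even with factors this conservative, the product runs to a hundred million candidates – comfortably 'many millions'.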

On Earth, a species capable of technological civilization first appeared about one hundred thousand years ago. We can argue about defining the precise time when technological civilization itself emerged. Was it with the beginning of agriculture, of written language, or of modern science? But whatever definition we choose, its age will be significantly less than one hundred thousand years.

In any case, for Fermi’s question, the most relevant time is not one hundred thousand years, but more nearly one hundred years. This marks the period of technological ‘breakout’, when our civilization began to release energies and radiations on a scale that may be visible throughout our Galaxy. Exactly what that visibility requires is an interesting and complicated question, whose answer depends on the hypothetical observers. We might already be visible, through our radio broadcasts or our effects on the atmosphere, to a sophisticated extraterrestrial version of SETI. The precise answer hardly matters, however, if anything like the current trend of technological growth continues. Whether we are barely visible to sophisticated though distant observers today, or not quite, after another thousand years of technological expansion at anything like the prevailing pace, we should be easily visible. For, to maintain even modest growth in energy consumption, we will need to operate on astrophysical scales.
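The final claim follows from simple compounding. The sketch below assumes, purely for illustration, a 2% annual growth rate in energy use, a current global power consumption of roughly 2 × 10^13 W, and the standard figure of about 1.7 × 10^17 W for the total solar power intercepted by Earth; none of these numbers appears in the text.

```python
import math

# Illustrative extrapolation (all inputs are rough assumptions):
growth_rate = 0.02            # assumed annual growth in energy use
years = 1000
world_power_W = 2e13          # rough current global power consumption, W
sunlight_on_earth_W = 1.7e17  # total solar power intercepted by Earth, W

factor = math.exp(growth_rate * years)   # compounding over a millennium
future_power_W = world_power_W * factor

print(f"growth factor over {years} yr: {factor:.1e}")
print(f"projected demand:             {future_power_W:.1e} W")
print(f"all sunlight striking Earth:  {sunlight_on_earth_W:.1e} W")
```

Under these assumptions demand outstrips all sunlight reaching Earth by several orders of magnitude – a civilization sustaining such growth must, almost by definition, operate on astrophysical scales, and so become visible from afar.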

A thousand years is just one millionth of the billion-year span over which complex life has been evolving on Earth. The exact placement of breakout within the multi-billion year timescale of evolution depends on historical accidents. With a different sequence of the impact events that led to mass extinctions, or earlier occurrence of lucky symbioses and chromosome doublings, Earth’s breakout might have occurred one billion years ago, instead of 100 years ago.

The same considerations apply to those other Earth-like planets. Indeed, many such planets, orbiting older stars, came out of the starting gate billions of years before we did. Among the millions of experiments in evolution in our Galaxy, we should expect that many achieved breakout much earlier, and thus became visible long ago. So: Where are they?

Several answers to that paradoxical question have been proposed. Perhaps this simple estimate of the number of life-friendly planets is for some subtle reason wildly over-optimistic. For example, our Moon plays a crucial role in stabilizing the Earth’s obliquity, and thus its climate; probably, such large moons are rare (ours is believed to have been formed as a consequence of an unusual, giant impact), and plausibly extreme, rapidly variable climate is enough to inhibit the evolution of intelligent life. Perhaps on Earth the critical symbioses and chromosome doublings were unusually lucky, and the impacts extraordinarily well-timed. Perhaps, for these reasons or others, even if life of some kind is widespread, technologically capable species are extremely rare, and we happen to be the first in our neighbourhood.

Or, in the spirit of this essay, perhaps breakout technology inevitably leads to catastrophic runaway technology, so that the period when it is visible is sharply limited. Or – an optimistic variant of this – perhaps a sophisticated, mature society avoids that danger by turning inward, foregoing power engineering in favour of information engineering. In effect, it thus chooses to become invisible from afar. Personally, I find these answers to Fermi’s question to be the most plausible. In any case, they are plausible enough to put us on notice.

Suggestions for further reading

Jaffe, R., Busza, W., Sandweiss, J., and Wilczek, F. (2000). Review of speculative ‘disaster scenarios’ at RHIC. Rev. Mod. Phys., 72, 1125–1140, available on the web at arXiv:hep-ph/9910333. A major report on accelerator disaster scenarios, written at the request of the director of Brookhaven National Laboratory, J. Marburger, before the commissioning of the RHIC. It includes a non-technical summary together with technical appendices containing quantitative discussions of relevant physics issues, including cosmic ray rates. The discussion of strangelets is especially complete.

Rhodes, R. (1986). The Making of the Atomic Bomb (Simon & Schuster). A rich history of the one realistic ‘accelerator catastrophe’. It is simply one of the greatest books ever written. It includes a great deal of physics, as well as history and high politics. Many of the issues that first arose with the making of the atom bomb remain, of course, very much alive today.

Kurzweil, R. (2005). The Singularity Is Near (Viking Penguin). Makes a case that runaway technologies are endemic – and that is a good thing! It is thought-provoking, if not entirely convincing.

References

Antoniadis, I., Arkani-Hamed, N., Dimopoulos, S., and Dvali, G. (1998). Phys. Lett. B, 436, 257.

Arkani-Hamed, N., Dimopoulos, S., and Dvali, G. (1998). Phys. Lett. B, 429, 263.

Asimov, I. (1950). I, Robot (New York: Gnome Press).

Bear, G. (1985). Blood Music (New York: Arbor House).

Borer, K., Dittus, F., Frei, D., Hugentobler, E., Klingenberg, R., Moser, U., Pretzl, K., Schacher, J., Stoffel, F., Volken, W., Elsener, K., Lohmann, K.D., Baglin, C., Bussière, A., Guillaud, J.P., Appelquist, G., Bohm, C., Hovander, B., Selldèn, B., and Zhang, Q.P. (1994). Strangelet search in S-W collisions at 200A GeV/c. Phys. Rev. Lett., 72, 1415–1418.

Bostrom, N. and Tegmark, M. (2005). How unlikely is a doomsday catastrophe? Nature, 438, 754–756. http://www.arxiv.org/astro-ph/0512204v2

Close, F. (2006). The New Cosmic Onion (New York and London: Taylor & Francis).

Diamond, J. (2005). Collapse: How Societies Choose to Fail or Succeed (New York: Viking).

Dimopoulos, S., Raby, S., and Wilczek, F. (1981). Supersymmetry and the scale of unification. Phys. Rev., D24, 1681–1683.

Dimopoulos, S., Raby, S., and Wilczek, F. (1991). Unification of couplings. Physics Today, 44 (October), 25–33.

Feynman, R. (1988). QED: The Strange Theory of Light and Matter (Princeton, NJ: Princeton University Press).

Hawking, S.W. (1974). Black hole explosions? Nature, 248, 30–31.

Hut, P. (1984). Is it safe to distribute the vacuum? Nucl. Phys., A418, 301C.

Hut, P. and Rees, M.J. (1984). How stable is our vacuum? Report-83-0042 (Princeton: IAS).

Jaffe, R., Busza, W., Sandweiss, J., and Wilczek, F. (2000). Review of speculative ‘disaster scenarios’ at RHIC. Rev. Mod. Phys., 72, 1125–1140.

Laskar, J., Joutel, F., and Robutel, P. (1993). Stabilization of the Earth’s obliquity by the Moon. Nature, 361, 615–617.

Maldacena, J. (1998). The large-N limit of superconformal field theories and supergravity. Adv. Theor. Math. Phys., 2, 231–252.

Maldacena, J. (2005). The illusion of gravity. Scientific American, November, 56–63.

McNeill, W. (1976). Plagues and Peoples (New York: Bantam).

Randall, L. and Sundrum, R. (1999). Large mass hierarchy from a small extra dimension. Phys. Rev. Lett., 83, 3370–3373.

Rhodes, R. (1986). The Making of the Atomic Bomb (New York: Simon & Schuster).

Schroder, P., Smith, R., and Apps, K. (2001). Solar evolution and the distant future of Earth. Astron. Geophys., 42(6), 26–32.

Vonnegut, K. (1963). Cat’s Cradle (New York: Holt, Rinehart and Winston).

Wilczek, F. (1999). Quantum field theory. Rev. Mod. Phys., 71, S85–S95.

Witten, E. (1984). Cosmic separation of phases. Phys. Rev., D30, 272–285.

Zee, A. (2003). Quantum Field Theory in a Nutshell (Princeton, NJ: Princeton University Press).