6
ASCENDING TO THE ULTIMATE MULTIVERSE
We’ve got to think of infinities of infinities. Indeed, there’s perhaps even a higher hierarchy of infinities.
MARTIN REES, “IN THE MATRIX”
Worse still, there is no end to the hierarchy of levels … gods and worlds, creators and creatures, in an infinite regress, embedded within each other.
PAUL DAVIES, “UNIVERSES GALORE: WHERE WILL IT ALL END?”
The Quantum Multiverse
“What If the Wave Function Never Collapses?”
The most widely known and popularly dramatized multiverse scenario—the one that sends Family Guy’s Stewie and Brian through an endless series of alternative Quahogs—stems from the Many-Worlds Interpretation of quantum mechanics, often abbreviated MWI. Initially posited in 1957 by an American graduate student named Hugh Everett (1930–1982),1 the MWI is an alternative to the Copenhagen Interpretation, associated with Niels Bohr (1885–1962) and Werner Heisenberg (1901–1976), who were among the first to account for the bizarre behavior of matter at subatomic scales. One of the central laws of quantum mechanics is the principle of “complementarity,” according to which a particle’s position and momentum (to name one “complementary” pair) cannot be determined at the same time. The moment a physicist measures a particle’s location with precision, it becomes impossible to measure the particle’s momentum, and the moment she measures the momentum, the position becomes imprecise. Brian Greene offers the “rough analogy” of trying to take a picture of a fly: “If your shutter speed is high, you’ll get a sharp image that records the fly’s location at the moment you snapped the picture. But because the photo is crisp, the fly appears motionless; the image gives no information about the fly’s speed. If you set your shutter speed low, the resulting blurry image will convey something of the fly’s motion, but because of that blurriness it also provides an imprecise measurement of the fly’s location.”2 Perhaps the most central lesson of quantum mechanics, then, is that the kind of answer you get depends on the kind of question you ask.
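For readers who want the trade-off in formal terms, the limit that Greene's fly analogy illustrates is standardly written as Heisenberg's uncertainty relation (a textbook statement, supplied here for reference rather than drawn from the passage above):

```latex
% Heisenberg's uncertainty relation: the product of the
% uncertainties in position (x) and momentum (p) can never
% fall below a fixed quantum of action (hbar = h / 2*pi).
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}
```

Sharpening one measurement (shrinking Δx) necessarily widens the other (growing Δp), just as the fast shutter freezes the fly's position at the cost of losing its motion.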
According to the Copenhagen Interpretation, the reason it is impossible to determine position and momentum at the same time is that a particle does not have a determinate position or momentum until it is measured.3 The state of a subatomic particle cannot be expressed as a simply located point or series of points on a Cartesian plane; rather, it is expressed in terms of a “wave function” that details the varying probabilities of its possible states. When the particle is not being observed, it exists in a “superposition,” occupying each of its possible states at once. But then when an observation is made, the wave function “collapses,” and the particle takes on a definitive place, speed, energy, or spin—whichever property is being measured. Quantum particles are therefore a bit like the main characters in Toy Story (or Jim Henson’s earlier, underappreciated The Christmas Toy),4 who race around the room when they are alone but then freeze when a human opens the door. This account, without the layperson’s illustrations, was more or less the story that prompted Everett to wonder, “What if the wave function never collapses?”5
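As a minimal formal sketch of the vocabulary just introduced (standard quantum-mechanical notation, not taken from the text itself): a particle's state is written as a weighted sum over its possible states, and on the Copenhagen reading an observation "collapses" that sum to a single term.

```latex
% Superposition: the state |psi> is a weighted sum over the
% particle's possible states |psi_i>, with complex amplitudes c_i.
|\psi\rangle \;=\; \sum_i c_i \, |\psi_i\rangle
% Born rule: a measurement yields outcome i with probability
% equal to the squared magnitude of its amplitude.
P(i) \;=\; |c_i|^2 , \qquad \sum_i |c_i|^2 = 1
```

Copenhagen adds a collapse postulate that discards every term but one; on Everett's reading, no term is ever discarded, and each state persists on its own branch.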
According to Everett’s explanation, if the wave function never collapses, then every possible outcome actually happens—each in a different universe. So every time a particle “decides” on a specific position, the universe has “split” into multiple branches—one for every position the particle might possibly take.6 Every time the wind blows one way or another in the vicinity of a sailboat, the universe has split into some worlds in which the boat gains speed, some in which the boat changes course, and some in which the crew is stuck in the doldrums for a day and a half. And every time you order a burger with fries, there is another universe in which you have just ordered a salad and another in which you have just seen a mouse run across the restaurant floor and fainted.
If these musings sound familiar, it is because such bizarre possibilities arose in the notion of a “quilted” multiverse, which operates under the sole assumption that the amount of matter and space-time in the universe is infinite. That was Max Tegmark’s Level I. The compendium of the MWI’s ever-branching universes constitutes what Tegmark calls the “Level III multiverse,” which (metaphysically at least) looks strikingly like the infinite set of overlapping spheres in Level I. As Tegmark explains these strata, “[T]he only difference between Level I and Level III is where your doppelgängers reside. In Level I they live elsewhere in good old three-dimensional space. In Level III they live on another quantum branch in infinite-dimensional Hilbert Space.”7 The quantum multiverse, then, is neither spatially arrayed nor temporally sequential. Even in principle, one could never reach these other worlds by traveling far enough “out there” or long enough “back then.”
Perhaps unsurprisingly, Everett’s theory was not an overnight success. It took the work of numerous others—most energetically, Bryce DeWitt (1923–2004)—to develop the MWI further and then to publicize it to the broader scientific community and laity. In 1970, DeWitt enumerated the extraordinary implications of the MWI; in effect, he explained, “every quantum transition taking place on every star, in every galaxy, in every remote corner of the universe is splitting our local world on earth into myriads of copies of itself.” He also conceded that the idea might be hard for most people to assimilate, acknowledging that “the idea of 10^100+ slightly imperfect copies of oneself all constantly splitting into further copies, which ultimately become unrecognizable, is not easy to reconcile with common sense.” “Here,” DeWitt summarized his audience’s fears, “is schizophrenia with a vengeance.”8 And yet, as David Deutsch would go on to argue fervently in the decades ahead, the early proponents of the MWI suggested that it was actually the simplest explanation of quantum phenomena.9
According to its proponents, the MWI is preferable to the so-called Copenhagen Interpretation for two major reasons. First, it maintains that objects are subject to the same laws, regardless of their size. The many-worlders accuse the Copenhagen adherents of dividing the world into large things that behave classically and small things that behave quantumly: pitch a thousand baseballs through two holes in a wall, and each ball will go through only one hole at a time. Do the same thing with photons and a light-sensitive screen, and all the photons will seem to go through both holes at once.10 For the many-worlders, this strange behavior is not confined to the subatomic world; rather, it reveals what is going on at the macroscopic level as well (this case can be made for Copenhagen, too, but the many-worlders do not tend to pay the argument much attention).11 According to the MWI, not just each particle but the whole universe exists as a wave function of all its possible states, and every decision—whether microscopic, human, or galactic in scope—sends worlds branching off in every direction.
The second advantage the MWI claims over Copenhagen is that its quantum phenomena do not rely on observational influence to become concrete. Taken to its fullest expression, the ontological implication of Copenhagen is that there is no “reality” independent of the acts of observation, measurement, and interpretation. As Karen Barad has argued with unparalleled clarity, Bohr’s interpretation means that phenomena are not just given; rather, they are formed only by means of relationships between observers, objects, and the experimental apparatus that conjoins them.12 To many-worlders such as Colin Bruce, however, the whole idea seems to be “solipsistic,” anthropocentric, and ultimately absurd. What counts as an observer? Bruce asks. “A mouse, a frog, a slug? … Does the observer need a Ph.D.? … Was the entire universe waiting to collapse into a definite state until the first ape-man came along?”13 In short, according to the MWI, there is a world that is independent of our (or of a slug’s) interactions with it; as Bruce proclaims, “only in a[n] [MWI] quantum world does it become possible to measure something without affecting it at all.”14 So although this quantum world is composed of an unthinkable number of independent subworlds, all of them can be described in one fell swoop by what many-worlders refer to as the “wave function of the universe.” Rather than “collapsing” in relation to differing circumstances, the wave function continues unimpeded, “smoothly and deterministically” forever, regardless of who is observing what when.15
In his recent blog post “Does This Ontological Commitment Make Me Look Fat?” Sean Carroll therefore maintains that “the ontological commitments of the … many-worlds interpretation are actually quite thin.” Because the wave function is unified and totally imperturbable, he argues, the MWI is “simpler than versions of QM [quantum mechanics] that add a completely separate evolution law to account for ‘collapse’ of the wave function. That doesn’t mean it’s right or wrong, but it doesn’t lose points because there are a lot of universes.”16
When it comes to taking sides with Copenhagen or the MWI, the physics community seems (ironically) to be split. Each camp tends to claim that “everyone who is anyone” is on its side and that no one takes the other interpretation seriously anymore.17 By anyone’s account, however, the MWI works just as well as Copenhagen to explain and predict quantum phenomena in this world. So as science writer Martin Gardner points out, any given theorist’s “preference” for the MWI might be more functional than it is ontological; she might consider the “other worlds” not to be “physically real” but “useful abstractions such as numbers and triangles.”18 After all, a mathematician can easily say that she “believes in” the idea of a perfect triangle without affirming the physical existence of a perfect triangle. Maybe the other worlds of the MWI are more like mathematical abstractions than they are “parallel universes”?
No More Drama: Stephen Hawking’s Model-Dependent Realism
Although some many-worlders such as Deutsch, Bruce, and Leonard Susskind would reject outright this “nonrealist” reading, it finds a complicated negotiation in the recent work of Stephen Hawking and Thomas Hertog. As far as Hawking is concerned, neither the inflationary scenario nor any of the cyclical models provides a “satisfactory” account of the origins of the universe.19 He offers a number of reasons for this dissatisfaction, but most of them boil down to his contention that anything that might have happened “before” the big bang is both inaccessible and irrelevant to the universe we are in.20 All these models, Hawking argues, work in the wrong direction, from the “bottom up.” They begin with a hypothetical precosmic state and then show how that hypothetical state might have produced a cosmos like ours. But they leave unexplained how this hypothetical precosmic state came about in the first place and why. In sum, writes Hawking, “all the pre-big-bang scenarios can do is shift the problem of the initial state from 13.7 [billion] years ago to the infinite past.”21 He therefore suggests moving in the opposite direction, from the present backward, and in this sense following what he calls a “top-down approach” to cosmology.
Hawking’s distinction between “bottom-up” and “top-down” cosmologies corresponds roughly to Michel Foucault’s distinction between “history” and “genealogy.”22 Whereas history begins from a fixed point in the past and then follows a linear course to the present, genealogy works from the present backward through its multiple sources so that the story branches farther out the farther back it goes (think here of a family tree). Hawking and Hertog’s cosmology similarly begins from our point in space-time and then branches into more and more possible states as it moves into the past, ultimately reaching a quantum “beginning” that comprises “every possible history” of the universe.23
Most of these possible histories correspond to a possible universe, each of which has “its own probability” of expanding or collapsing, of having a strong or weak electromagnetic force, of having three or ten dimensions, and so on. This scenario allows Hawking and Hertog to argue that each possible quantum universe corresponds to one of the vacua of the string theory landscape. But unlike in the inflationary scenario, each of these possible worlds lies “within” the same universe; that is, each of them emerged out of the same big bang. Moreover, according to the “no-boundary” proposal that Hawking set forth in the 1980s with James Hartle, the big bang is not the product of anything outside itself.24 At the quantum level, they argue, the distinction between time and space disappears, so there is simply no such thing as the “beginning of time.” Rather than being bounded by anything outside itself, the Hartle–Hawking universe is therefore “completely self-contained and not affected by anything outside itself.”25 It is this no-boundary proposal that Hawking and Hertog now offer as a cosmological candidate to “populate” string theory’s landscape. Insofar as their scenario has no need to posit a primordial scene beyond our event horizon, they argue, it is far preferable to eternal inflation or any of the cyclical models.
Crucially for Hawking, this no-boundary proposal stems from epistemological rather than ontological commitments. To say that the universe is not affected by anything outside the universe is not to say “there is nothing outside the universe.” Hawking and Hertog freely admit that there may well be an “original” metaversal sea before and beyond our universal bubble, but they insist that once the bubble forms, the original sea “is irrelevant for observers inside the new universe.”26 Because the eternal inflationary model spends so much time generating regions of space-time forever beyond us, Hawking and Hertog argue, it is not so much wrong as it is unhelpful; as far as they are concerned, “the mosaic structure of an eternally inflating universe is a redundant theoretical construction, which should be excised by Ockham’s razor.”27 At this stage, one might feel compelled to ask how it is that if Hawking and Hertog affirm Ockham’s razor, they can also affirm the many-worlds mantra that “the universe will not have a single history but every possible history, each with its own probability.”28 How is it that all possible worlds within our event horizon are less “redundant” than all possible worlds outside it?
Hawking’s response to this query is relentlessly pragmatic. He acknowledges that “the no boundary proposal predicts a quantum amplitude for every number of large spatial dimensions from 0 to 10.”29 This means that “there will be an amplitude for the Universe to be 11-dimensional Minkowski space, i.e. with ten large spatial dimensions. However,” Hawking continues, “the value of this amplitude is of no significance, because we do not live in eleven dimensions.” In other words, in the face of the swarm of possibilities predicted by both quantum mechanics and the string theory landscape, we can ignore all worlds apart from the ones that bear the attributes of our universe. And we do not need to worry about how “rare” or “common,” how improbable or probable, our universe turns out to be. “As long as the amplitude for three large spatial dimensions is not exactly zero,” Hawking argues, “it does not matter how small it is compared with that for other numbers of dimensions.” Again, the issue is not that universes with different sorts of dimensionality do not exist; rather, it is that any consideration of those extradimensional worlds is “irrelevant, because we have already measured that we are in four dimensions.”30
It is probably important here to point out that Hawking’s most recent work with Hertog has not been widely accepted among physicists. Philosophically, however, it is a fascinating position to take—and it might have implications that reach beyond this specific cosmological model. Hawking is making neither the metaphysical claim that all possible worlds actually exist (the “realist” position) nor the equally metaphysical claim that they are simply mathematical abstractions (the “antirealist” position).31 Rather, he is saying that because the quantum no-boundary proposal matches observations and predicts them, it is a good model. But there might be other, equally good models. For comparison, as Hawking and Leonard Mlodinow remind us, the sun is no more “at rest” in the heavens than the earth is. “The real advantage of the Copernican system” is not that it tells us how the universe really is, but “that the equations of motion and rest are much simpler in the frame of reference in which the sun is at rest.”32 But there is nothing wrong with using an earth-centered frame of reference as long as you are willing to slog through the unnecessarily difficult calculations required to make it work. Hawking and Mlodinow therefore call their position a “model-dependent realism,” arguing that our access to the world is always mediated by the model through which we access it. In other words, “there is no picture- or theory-independent concept of reality.” And so it does not matter how “bizarre” the MWI of quantum mechanics might sound. It “has passed every experimental test to which it has been subjected,” so there is no good reason to reject it.33 But there is also no good reason for ecstatic reflections on the numberless copies of slightly different yous—or on the creatures that might live in eleven-dimensional space. “Life may, after all, be possible in eleven dimensions,” Hawking acknowledges, but who cares? “We know how to live in four.”34
Putting It Together: The Multiversal Bath
Hawking is not the only theorist to connect the MWI with the string theory landscape. Susskind has also claimed that the two map onto each other, although his cosmological bets are on inflation rather than the no-boundary proposal and on ordinary realism rather than model-dependent realism.35 This possibility—that inflation, string theory, and the MWI might be describing the same multiverse—has found sustained exploration in the work of cosmologist Laura Mersini-Houghton. Although this work has gained a great deal of recognition since the publication in the early spring of 2013 of the most recent data from the Planck satellite, it has not yet generated widespread discussion. Nevertheless, I am bringing it under sustained consideration because, in addition to looking increasingly plausible in relation to the newest data, Mersini-Houghton’s model integrates a number of different multiversal scenarios, and it more radically begins to think them through philosophically.
For Mersini-Houghton, the most puzzling “fine-tuning” question is how the universe began in such a highly ordered state. As we have learned in the context of cyclical cosmologies, the Second Law of Thermodynamics dictates that entropy, or disorder, always increases in a closed system. To offer some common examples, a glass of water with ice cubes is in a relatively ordered state; it has a relatively low measure of entropy. As the ice cubes melt, the entropy increases until all the water molecules reach the same temperature, attaining a relatively disordered, high-entropic, or “chaotic” state.36 The same glass sitting on a table, in turn, has a low measure of entropy compared with the glass the cat has just knocked onto the floor, shattering it into a thousand pieces.
What is remarkable about these processes is their seeming inexorability: a glass will shatter when it is knocked to the ground, but those shards of glass, when jostled around, will be very unlikely to reconstitute themselves into a tumbler. Ice cubes melt but do not spontaneously freeze again; eggs break but do not gather themselves back together. To be more precise, the Second Law does not say that such things never happen; it just says such situations are exceedingly rare. As Sean Carroll explains this rarity, “[T]here are more ways to arrange a given number of atoms into a high-entropy configuration than a low-entropy one.”37 In other words, there are a staggering number of ways in which a glass can shatter into pieces, but only one configuration that will order all those pieces into a drinking glass, so the probability of this one configuration’s occurring is extraordinarily low. For all intents and purposes, systems always move from order to chaos in a direction cosmogonists might call “de-creative.”
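Carroll's counting argument can be condensed into one standard formula—Boltzmann's entropy, a textbook result supplied here rather than anything quoted in the text:

```latex
% Boltzmann entropy: S grows with the number W of microscopic
% arrangements (microstates) consistent with a given macroscopic
% state; k_B is Boltzmann's constant.
S \;=\; k_B \ln W
```

Because a shattered glass admits vastly more microscopic arrangements W than an intact tumbler, its entropy is vastly higher, and the odds of the shards stumbling back into the single low-W configuration are correspondingly minute.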
This unidirectional movement from low entropy to high entropy constitutes what is known as the “arrow of time”—the inexorable passage of all things from the past through the present to the future. Insofar as our universe seems indeed to have an arrow of time—that is, insofar as eggs do not unbreak and lives do not play themselves backward—the level of cosmic entropy must be progressively increasing. This means, looking back, that the universe must have begun in a very highly ordered, low entropic state; the primordial plasma must have been something like a great, unbroken cosmic egg. The history of the universe would, in turn, be one of increasing disorder as it moves from radiation to gravity to dark energy. As Carroll explains, “[T]he universe starts out in a state of very low entropy: particles packed together smoothly. It evolves through a state of medium entropy: the lumpy distribution of stars and galaxies we see around us today. It ultimately reaches a state of high entropy: nearly empty space, featuring only the occasional stray low-energy particle.” Carroll continues to explain that the question then becomes, “[W]hy was the entropy low to begin with? It seems very unnatural, given that low-entropy states are so rare.”38 In effect, starting with a highly ordered, highly energetic universe is as perplexing as shards of glass spontaneously assembling themselves into a tumbler; Mersini-Houghton cites Roger Penrose’s estimate “that such an event has only one chance in 10^(10^123) possibilities.”39 And yet without such a low-entropic and high-energetic starting point, there would be no arrow of time—which is to say no universe as we know it.
According to Mersini-Houghton, the extraordinarily improbable event of our universe’s being at all can be understood only “if we accept that other more probable events can be conceived”—that is, we can understand the birth of our universe only if we posit a multiverse.40 But the inflationary multiverse alone is insufficient because, as Hawking has also argued, it assumes the high-energy inflation it needs rather than deriving it from physical processes. Unlike Hawking, however, Mersini-Houghton is not rejecting inflation; rather, she is simply suggesting that it starts the story too late. Inflation, she argues, must itself be explained, and this can be done only by getting back behind inflation to the quantum conditions that produce it.41 Also unlike Hawking, Mersini-Houghton is therefore saying that understanding the processes “before” the bang is crucial to understanding the processes after it.
This, then, is Mersini-Houghton’s starting point: a primordial “multiverse bath,” a chaotic sea of virtual particles in which two forces vie for dominance, both trying to bring the system into equilibrium. “Matter degrees of freedom” operate attractively, trying to pull each “patch” of proto-space-time into a black-hole “crunch.” The bath itself also exerts an attractive force, trying to keep each patch “entangled” with the multiverse. Meanwhile, “gravitational [degrees of freedom]” operate repulsively, trying to expand each patch outward into its own region of space-time. The scene, in short, is a precosmic “battlefield,” an eternal tug-of-war between the forces of attraction and repulsion. If matter and the bath “win,” then the patch collapses into a “terminal universe.” But if lambda (Λ, the patch’s repulsive vacuum energy) is strong enough to “survive the backreaction of matter and the bath,” then the patch inflates into a “survivor universe.”42 Here, then, is the origin of inflation and the reason our universe started out with such low entropy (which is also to say high energy): the only universes that nucleate successfully are the ones “born” at energies high enough to survive the attractive pull of matter, which have a sufficiently strong repulsive push to inflate outward into “survivor” universes.43
For Mersini-Houghton, the initial bath, which she also calls the “underlying reality,” corresponds to the landscape of string theory and the wave function of the MWI insofar as it comprises “the ensemble of all possible initial conditions and energies.” Moreover, this scenario begins to provide a “super-selection rule” that explains how certain universes on the landscape come into being, but others do not. Thanks to the battle between matter and lambda, “only a fraction of them, initial states with high energies Λ, are selected as physically relevant ‘survivor’ universes.”44 As one of these survivors finally inflates, Mersini-Houghton explains, her metaphors understandably mixing and crossing, the universe becomes “disentangled” from the original bath that produced it and is “pinned down” into an independent “branch” of the quantum-string-inflationary multiverse. Because the resulting baby universe starts at such a low entropic state, it now has a unidirectional arrow of time and promptly “forgets” where it came from (figure 6.1). “An infant is a pucker of the earth’s skin,” writes the poet Annie Dillard, “so are we. We arise like budding yeasts and break off; we forget our beginnings.”45 Ratcheting this imagery out to the cosmos, Mersini-Houghton offers a similar, if more Latinate, explanation: “There is information loss about the underlying reality of the multiverse,” she writes. “As the bubble decoheres[,] such entanglement with the bath is deemed irrelevant, and these correlations ignored.”46
FIGURE 6.1 The birth of universes from the landscape multiverse, according to Laura Mersini-Houghton’s model. (Illustration by Kenan Rubenstein)
And yet the information and correlations might not disappear completely. In fact, Mersini-Houghton predicted that the primordial entanglement of our universe with other universes as well as with the bath from which they all nucleated should have “left its imprints on the cosmic microwave background and large scale structures,”47 and it seems that some of these imprints have been found. In 2006, for example, Mersini-Houghton’s team claimed “that a giant void of a size of about 12 degrees in the sky, should be found at about 8 billion light years away. Amazingly,” she continues, “this giant void was observed only a few months later in 2007” by the Very Large Array Telescope, and it has been confirmed by the 2013 Planck results.48 The team predicted one other, comparably sized void and is searching for it. In the meantime, Mersini-Houghton and her colleagues also predicted a large set of inhomogeneities on the Cosmic Microwave Background (CMB)—an “unusual pattern in the motion of around 800 galaxy clusters” for which inflation cannot account.49 According to Mersini-Houghton, these inhomogeneities would have been caused by “the backreaction of superhorizon matter modes”—that is, the attractive forces that tried to keep the universe from inflating beyond the bath in the first place.50 A team led by astronomer Alexander Kashlinsky strikingly discovered such an “unusual pattern” in 2009, naming it the “dark flow”—another inhomogeneity confirmed in 2013.51 As far as Mersini-Houghton and her colleague Richard Holman are concerned, these discoveries (along with two more technical predictions, both of which have also been discovered)52 provide compelling “evidence for the birth of the universe from the landscape multiverse”; in fact, they offer a kind of photographic record of this birth, testifying to “the nonlocal entanglement of our universe with all other patches and massive modes beyond the horizon.”53
To be sure, this claim is formidable in terms of its scientific and ontological implications alike. More precisely, scientific rigor and ontological complexity here become totally bound up with each other. The possibility that the multiverse might have observational effects on our universe certainly increases its standing as a scientific hypothesis: not only does the multiverse emerge from inflation, string theory, and quantum mechanics, not only is it “needed” theoretically to account for the unlikely parameters of our universe, but it can also (perhaps) be seen. But the condition under which it might be seen, given our inescapable situation within this world, is the “entanglement” of our universe “with all other patches” of the multiversal bath.54 Put more simply, the only multiverse that can be a proper object of scientific study is what Mersini-Houghton calls a “connected multiverse.”55
This “connected multiverse” occupies a complicated ontological terrain. It is not “all-one”; that is, all universes on the multiverse landscape are not immediately present or connected to one another. But neither is the multiverse a set of totally separate worlds, each with its own sets of laws, floating along in total indifference to one another. Rather, according to Mersini-Houghton, the “more plausible scenario” is that “correlations among different domains do exist,”56 even though these domains are not simply accessible to one another. She derives the “plausibility” of these correlations from “the sole assumption that quantum mechanics is valid at all energy scales,”57 specifically the “sacred principle” of quantum mechanics that says that information, even in a black hole, can never be totally lost.58 If it is the case that even preinflationary information is retained in some form, then, Mersini-Houghton predicts, “the entanglement [of our universe with the multiverse bath] leaves its traces everywhere in the present observable sky.” So although our universe has become its “own” branch of the multiverse, it also remains complexly connected to the entangled whole; in her words, “[I]f our universe started in a mixed state in the landscape multiverse, then it can never evolve into a pure state at late times.” Of course, the effects of this impurity diminish as the universe develops; “at present the strength of nonlocal entanglements is very small.”59 But the strength grows the farther out and back one looks; the imprints of other universes and of the bath itself will be found at the “edges” of our own. So regardless of the age or size of the observable universe, its early history will always bear marks of its entangled beginnings.
As far as Mersini-Houghton is concerned, this complex connectivity does not seem to have a philosophical analogue; as she suggests in passing, we do not have an “ontology of the multiverse.”60 But the “connected multiverse” under consideration is not totally without philosophical precedent. Here we might think back to the Cusan cosmos, which we find is neither one nor many, nor is it both one and many,61 but many from the perspective of any world and one from the perspective of God. “The perspective of God” would here translate into the perspective of Mersini-Houghton’s multiversal “bath,” the “underlying reality” in which “time has no direction, no beginning and no end.”62 From the point of view of what Mersini-Houghton calls “fundamental time” (a phenomenon that earlier philosophers would have called “eternity”), all worlds are part of a shared multiversal space-time.63 From the perspective of any particular world, however, the arrow of time is irreversible, setting each universe on its own independent course forever, marked only by the scars of its early entanglements. So the “connected” multiverse is either one or many, depending on how you look at it.
In addition to establishing this link to Nicholas of Cusa, we might also consider returning to Giordano Bruno’s “multimodal” ontology and to William James, who coined the term multiverse to name a specific kind of cosmic co-implication: neither all-one nor total fragmentation, but a “strung-along type” of shifting unity, never disbanded yet never “absolutely complete.”64 Of Mersini-Houghton’s connected multiverse, which displays a “connectivity through the nonlocal entanglement of our domain with everything else on the multiverse,”65 we might well be able to say, along with James, that “our ‘multiverse’ still makes a ‘universe’; for every part, tho it may not be in actual or immediate connexion, is nevertheless in some possible or mediate connexion, with every other part however remote.” More radically even than James, however, the theory of quantum entanglement suggests that remote regions can be “in inextricable interfusion” with one another without “hang[ing] together with [their] very next neighbors”—that is, without being materially connected, either directly or in a Jamesian cosmic chain.66 To say, as Mersini-Houghton does, that the multiverse is “nonlocal” is to say that its connections transcend the logic of connectivity, that “here” is ontologically inseparable from the “there” without which here would not be here. The multiverse, then, is not one, but neither is it simply many; rather, it is many by virtue of its complex unity and united in its irreducible manyness.
Black Holes and Baby Universes
Numerous as the Stars
In the previous section, readers may have noticed the strikingly parental (specifically maternal) metaphors that surface in relation to Laura Mersini-Houghton’s connected multiverse. The original state is a primordial sea, or “bath,” from which universes are “born.”67 The birth process is one of competing pushes and pulls that, when successful, leave indelible marks on the neonate. This new universe becomes progressively independent as it ages and grows, but it never loses the physical marks of its original dependency—one might think of the discovered “void” as a great cosmic navel, the “dark flow” as a kind of port-wine birthmark left forever on the CMB. Such images are not limited to Mersini-Houghton’s work, however; one finds a complementary set of metaphors in Lee Smolin’s very different cosmological scenario, which suggests that universes might be born on the “other side” of black holes.
Following the publication of Einstein’s theory of general relativity in 1916, a physicist named Karl Schwarzschild (1873–1916) calculated that if a large enough mass were crammed into a small enough space, it would attract the surrounding area so forcefully that it would rip through space-time itself, producing what John Wheeler later named a “black hole.”68 Black holes are one of the possible products of a dying star: as a star exhausts its energy, it may collapse into a singularity, drawing all the matter, energy, and space-time around it into a vortex surrounded by an “event horizon” from which nothing (at least nothing recognizable) can ever escape.69 It is through this singularity that Smolin connects black holes to cosmogony. As he explains in his book The Life of the Cosmos, “[A] collapsing star forms a black hole, within which it is compressed to a very dense state. The universe began in a similarly very dense state from which it expands. Is it possible that these are one and the same dense state?” If it is possible, then “what is beyond the horizon of a black hole” would be “the beginning of another universe.”70
Of course, we would not be able to see the formation of this other universe, says Smolin, because the star’s collapse would give way to an explosion only “after the black hole horizon had formed around it” (88). But from the inside of this black hole, the stellar explosion would look like a big bang. If all of this is the case, then by extension our universe is sitting on the inside of some other universe’s black hole, connected to this “parent” universe by a vanishing “umbilical cord” that has torn through the fabric of space and time.
Smolin places his black-hole cosmogony in the lineage of the cyclic cosmologies of the 1930s and 1960s—those “phoenix” universes that posited a universal contraction, crunch, “bounce,” and reexpansion.71 “What we are doing,” he explains, “is applying this bounce hypothesis not to the universe as a whole, but to every black hole in it” (88). The result is not a single universe recycled throughout eternity, but a “continually growing community of ‘universes,’ each of which is born from an explosion following the collapse of a star into a black hole” (88). Smolin more commonly refers to this community as an ever-growing “family,” with each universe producing “progeny” that themselves “reproduce” by forming as many stars (which is to say, potential black holes) as possible.
Smolin begins his reproductive cosmogony from what he admits is an arbitrary starting point. It might be that universes have existed throughout eternity, or it might be that a vast number of them suddenly appeared at the same time. As far as Smolin is concerned, we will never know what was “there” at the origin of things; all we know is that at least one universe has survived the cosmogonic process (101). So for simplicity’s sake, he suggests, “let us imagine that the universes in our collection are all progeny of a single, initial universe” (96). What would this single, initial universe have looked like? Again, Smolin confesses not to know and so suggests that we “make the best of our ignorance and assume that its parameters are chosen randomly” (96). The universe tries on various values of lambda, of matter, of the electroweak force, and the results are likely to be fairly disappointing over many cycles. If its constants are “chosen” randomly, he writes, then “it is very unlikely that the parameters of this first universe are finely tuned to values that result in a big universe full of stars” (96); rather, the new world will most likely either collapse instantaneously or inflate too quickly for anything to form.
“For a long time,” Smolin imagines, “the world is nothing but a series of tiny universes, each of which grows out of the one before it…. What is happening is that, one by one, different possible parameters are being picked randomly and the consequences of each tried out” (97). Now we have no idea, to channel David Hume’s Philo, how many “worlds might have been botched and bungled … ere this system was struck out.”72 But we do know that at least one universe eventually worked, so each universe in our lineage must have had at least enough matter to collapse and bounce one time. One of these random universes will eventually obtain the parameters that allow it to form stars, which is to say black holes, which is to say new universes. If the reproduction of universes looks anything like the reproduction of plants or animals, Smolin speculates, then these new universes will have traits resembling those of their “parents,” so most of them will go on to produce stars, black holes, and baby universes of their own (100–101).
For Smolin, the most attractive feature of this scenario is that it keeps a safe distance from the anthropic principle.73 In fact, he argues, this scenario replaces anthropic reasoning with a better-established scientific process—that is, natural selection. Rather than suggesting that the universe takes on the parameters it needs to create life (or, more weakly, that life emerges in those universes that can support it), the black-hole scenario simply says that the only types of universe that survive are those that can produce new universes like themselves. So, as John Gribbin summarizes Smolin’s anti-anthropic reasoning, our universe has the parameters it does “not because it is a good home for life, but because it is good at making black holes.”74
Crucially for Smolin, this scenario also steers clear of theism at all turns by configuring the cosmos as a self-contained system. Borrowing (wittingly or unwittingly) some early Derridean terminology, Smolin states elsewhere that the first principle of scientific cosmology is that “there is nothing outside the universe.”75 The black-hole scenario adheres to this principle by positing a strictly immanent principle of generation and selection. Unlike the inflationary model, which configures universes as the offspring of a sole “ancestor, which is the primordial vacuum,”76 Smolin’s scenario allows each baby universe to produce its own offspring, eliminating the need for any single, extracosmic origin at all. Viewed as a whole, then, Smolin’s cosmos looks like an inversion of Hawking’s, branching genealogically outward as it goes forward in time, not backward. And the genealogical metaphorics are not lost on Smolin; as he imagines “a multiverse formed by black hole bouncing,” it “looks like a family tree. Each universe has an ancestor, which is another universe. Our universe has at least 10¹⁸ children; if they are like ours, they each have roughly the same number of children.”77 This scenario allows Smolin to avoid positing any sort of creator; worlds are eternally generated by means of an immanent reproductive process. Ironically, however, Smolin avoids the old extracosmic God by taking literally that same God’s promise to make his chosen creature’s descendants as “numerous as the stars of heaven.”78
Cosmic Manufacturers
Smolin’s proposal has not been widely adopted, but it is certainly well known and has produced a series of fascinating physicophilosophical speculations. One proposal—whose authors argue that it can apply to black-hole, inflationary, and quantum cosmologies—demonstrates that if one of the eventual “baby universes” has the same conditions as the original universe, then it might, in fact, be the original universe. Time eventually closes these worlds into an autocreative loop; in a catchphrase, “the Universe can be its own mother.”79 Specifically, as the mother of all universes, the universe can at some point give birth to itself, which is to say, to the very universe that gives birth to all universes.
A better-known but no less speculative riff on Smolin’s work involves conscious intervention into universe formation. The line of thinking goes like this: if it is the case that anything sufficiently compressed can form a black hole, and if, as Smolin suggests, black holes produce universes inside their horizons, then what is to stop people like us from making black holes and thereby creating universes? This possibility has been explored in greatest depth by cosmologist Edward Harrison, who combines Smolin’s proposal with Guth’s inflationary cosmology to suggest that “all” we would need to create a universe would be “a small spherical nugget of space filled with a high-value inflaton field”80 and enough energy to compress it into a black hole. Then we could be the mother of a few universes ourselves.
Those who have been following the progress of the Large Hadron Collider in Geneva, Switzerland, may remember the news storm surrounding its launch in 2008; some scientists, the blogs and papers told us, were worried that the collider might create a black hole that would swallow our entire planet.81 Two decades earlier, however, Steven Blau, Eduardo Guendelman, and Alan Guth had shown that even if humans were able to compress a patch of space-time into a black hole, the resulting bounce would not take place within our universe.82 Instead, as Harrison appropriates and explains this process, the interior of a black hole “immediately inflates not in our universe, but in a re-entrant bubble-like spacetime that is connected to our universe via the umbilical cord of the black hole. The black hole rapidly evaporates by Hawking radiation and severs the connecting link with the new universe.”83
As many physicists charge and as Harrison admits, scientists are excruciatingly far from being able to generate the amount of energy that would be required to make a universe in this fashion.84 But Harrison suggests that our limitations in this regard are more quantitative than qualitative; even if, as Brian Greene estimates, “the compressive force we would need to apply is trillions and trillions of times beyond what we can now muster,”85 Harrison insists that this problem remains one of degree rather than kind. In principle, Harrison argues, we know how to do it. As a thought experiment, then, he suggests we imagine “that these practical difficulties can indeed be overcome, and that civilizations vastly more advanced than ourselves are able to steer the course of eternal inflation in their vicinity by initiating the production of mini-universes in their laboratories.” At first, they would most likely create a slew of failed universes, botching and bungling worlds until they finally created one that worked. Within this universe would eventually evolve “yet-more-advanced civilizations (perhaps inheriting information from their makers to accelerate them on their way),” whose inhabitants would, in turn, continue to improve the parameters of their baby universes.86 The result would be a universe that would be impossibly well tuned to support life, much like our own universe. Once the manufacturers got the hang of their art, they would be able to create as many universes as they liked—one a day, perhaps, or even one an hour. In very little time, then, manufactured universes would outnumber naturally occurring universes by a large measure. If it is indeed possible to create universes at all, it is therefore likely that ours has been manufactured. This, then, is Harrison’s answer to the question of why our universe looks so fine-tuned: it is. 
Our universe is conducive to the emergence of life and is, in turn, comprehensible to the intelligent beings who emerge therein because the universe was created by intelligent beings like us (just of far greater intelligence) who carefully set the parameters so that planets and stars and beings like us might eventually come along and decode the secrets of the universe.87
If this argument sounds familiar, it is because it is uncannily akin to the “argument from design” so inimical to secular physicists. Harrison seems unaware of the extent of this resonance; in fact, he offers his “do-it-yourself universes” as a “third choice,” an alternative to the equally unappealing options of “either a supreme being beyond rational inquiry or a multiworld wasteland of mostly dark and barren universes”—that is, the inflationary scenario.88 John Gribbin, who is a great enthusiast of Harrison’s proposal, amplifies this distinction, insisting that the “intelligent designers” in Harrison’s laboratories look nothing like the “Intelligent Designer” of theism because these new “gods” are “not incomprehensible.”89 Rather, as Harrison argues, they are “comprehensible beings who had thought processes basically similar to our own.”90
Although I appreciate the distinction, and although many “philosophical theists” do indeed proclaim the omnipotence and omniscience of the deity (characteristics that Harrison’s intelligent designers do not share), it seems important to point out that there is nothing “incomprehensible” about the God of intelligent design. Far from being “beyond rational inquiry,” this God is proven through rational inquiry, thanks to (1) the fine-tuning of the universe, what Hume’s Cleanthes calls “the curious adapting of means to ends,” and (2) the resemblance of this universe to what he calls “products of human contrivance, of human design, thought, wisdom, and intelligence.” Far from being “incomprehensible,” the old designer God is said to be “somewhat similar to … man; though possessed of much larger faculties.”91 Likewise, Harrison’s new designer-gods are intelligent beings “like us” (specifically like “us physicists”), although possessed of superior brainpower, technology, and sources of energy.92 It is not clear, then, that the designer universe really is a “third choice”; in fact, it sounds remarkably like the old first choice, just with some updated ancestor gods. And until we make a universe of our own, this “argument from manufacture” will make just as tenuous an analogical leap from the known to the unknown as does the argument from design.
Sim Cosmos
Brian Greene is among those physicists who believe it is highly unlikely that we are living in a manufactured universe. Even if it were physically possible to make such a black-hole universe (and he is fairly certain that it never will be), he cannot imagine why future scientists would want to. After all, as he explains in The Hidden Reality: Parallel Universes and the Deep Laws of the Cosmos, they would never even know if they had been successful. Because a new universe would exist on the other side of a black-hole horizon, its manufacturers would never be able to tell whether the new world had inflated at all, much less how long it would live, whether it looked like its parent, how many stars it had, and the like.93 A far more likely possibility, Greene argues, is that future scientists might make a simulated, or virtual, universe.
The reason Greene considers it more likely that scientists of the future will create simulated worlds than baby black holes is that “we’re already doing it” (288). From Sim City and Second Life to military-training software to Sindome: A Cyberpunk Role-Playing Game Set 85 Years in the Future, programmers have already created virtual worlds, laying out the infrastructure that ordinary humans (that is, not computer geniuses) can fill in with content of their own design.94 So, Greene contends, “the question is how realistic the worlds will become” (288). Is it possible that the “Wii Mii” you created yesterday—designing her to have interests in soccer and astronomy, giving her a job as a nurse at a local hospital, and placing her in a two-bedroom house with a dog—might one day think that she has done all this for herself? Might technology one day reach the point of being able to simulate consciousness?
Some philosophers of science call the (allegedly impending) moment at which the capacity of computers will exceed that of a human mind the “singularity,” borrowing the term from cosmology.95 Such a singularity would issue in a “posthuman” era in which the distinction between organism and machine would break down completely. At this stage, it would presumably be relatively easy to make “realistic” simulations; an overcaffeinated afternoon’s work on the part of an average posthuman would produce a virtual cosmos filled with planets, stars, and a few billion lonely sims, full of false memories of the past and wondering if there is anything else “out there.”
With this scenario as even a far-off possibility, the “transhumanist” philosopher Nick Bostrom offers the following argument:
At least one of the following propositions is true:
1.   The fraction of human-level civilizations that reach a posthuman stage is very close to zero.
2.   The fraction of posthuman civilizations that are interested in running … simulations is very close to zero.
3.   The fraction of all people with our kind of experiences who are living in a simulation is very close to one.96
Walking a bit more slowly through this argument, Bostrom is saying that it is possible that (1) civilizations approaching a posthuman phase will destroy themselves with their advanced weapons and technology before they ever reach the singularity. This would mean that conscious simulations will never be achieved, nor have they ever been achieved in the past. Or it is possible that (2) once civilizations attain the intellectual sophistication required to run a conscious simulation, their posthumans will not want to—either because they will judge it unethical to create beings who mistakenly believe they are free or because, to put it bluntly, they will have outgrown video games. If this is the case, then although there might be a few simulated universes, their number would be “very close to zero,” so we can be sure that most worlds are real (that is, not simulated). If, however, intelligent beings can reach a posthumanist phase without obliterating themselves, and if they remain interested in creating simulated universes, then (3) they would presumably create universes all the time. Adolescent posthumans would be creating worlds on their wePhones between classes. This would mean that the ratio of “real” worlds to virtual worlds would be minuscule, and the probability that we ourselves are living in such a simulation would be “very close to one.” In short, if we believe that it will ever be possible to simulate conscious beings, then we must also believe that we are probably among those simulated beings—our universe, as Philo imagines in a particularly manic section of Hume’s Dialogues, “only the first rude essay of some infant deity.”97
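Bostrom’s trilemma is, at bottom, a counting argument, and its force is easiest to see with numbers in hand. The sketch below is ours, not Bostrom’s, and every figure in it is invented purely for illustration: if real civilizations run even a modest number of ancestor simulations apiece, simulated observers vastly outnumber real ones.

```python
# A toy illustration of the counting argument behind Bostrom's third
# proposition (not his formalism). All numbers are hypothetical.

def fraction_simulated(real_civilizations, sims_per_civilization,
                       observers_per_world):
    """Fraction of all observers who live inside a simulated world."""
    real_observers = real_civilizations * observers_per_world
    simulated_observers = (real_civilizations * sims_per_civilization
                           * observers_per_world)
    return simulated_observers / (real_observers + simulated_observers)

# If each of 100 real civilizations runs a mere 1,000 simulations,
# the odds that a randomly chosen observer is simulated exceed 99.9%.
print(fraction_simulated(100, 1000, 10**9))  # ≈ 0.999
```

Note that the number of observers per world cancels out of the ratio; what drives the fraction toward one is simply how many simulations each real civilization runs, which is why Bostrom’s conclusion follows from the bare possibility of cheap, abundant simulation.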
Bostrom is commonly described as the philosopher who proclaims our universe to be a virtual one, but his position is more complicated than this description indicates. As he explains in a follow-up article to his widely circulated article “Are We Living in a Computer Simulation?” he does not argue that we are living in such a simulation. “In fact,” he writes, “I believe that we are probably not simulated.”98 But if this is the case, if condition (3) is not true, then either condition (1) or (2) would have to be true; that is, if we are not simulated, it must be either because humans tend to destroy themselves on the way to posthumanism or because they lose interest in simulations. The crux of the argument is simply that if we affirm the eventual possibility of simulated consciousness, then our own consciousness is most likely simulated.99
The question, then, is whether it will ever be possible to simulate consciousness at all. For those who believe that it will, the major premise is that “life and complexity means [sic] information processing power”; that is, everything from algae to fir trees to the brains of elephants can be explained in terms of computing operations.100 The minor premise is that the computers of the future will one day be sufficiently powerful to run all the operations in the universe at once. The conclusion, of course, is that at such a time simulated worlds will be possible, and from there Bostrom’s third stipulation holds: we ourselves are probably living in a simulation.
Greene, who delightfully proclaims this possibility in various forms of popular media,101 explains the calculations this way in The Hidden Reality. A human brain can perform 10¹⁷ operations per second. This number is baffling; Greene estimates that the processing power required to simulate a human brain would be “a hundred million laptops, or a hundred supercomputers” (283). Over a hundred-year lifetime, the same brain will perform a total of 10²⁴ operations, and if we multiply that number by all the humans who have walked the earth—one is struck all over again by the power of compaction in exponential notation—the total number of human operations “since Lucy … is about 10³⁵” (286). According to Greene, a computer the size of the earth could perform between 10³³ and 10⁴² operations per second; hence, “the collective computational capacity of the human species could be achieved with a run of less than two minutes on an earth-sized computer” (287). Such a scale may seem to put the proposal of a simulated humanity beyond reach, but if quantum computing becomes as sophisticated as some project that it will, then the earth-size machine it would now require could be shrunk to the size of a laptop (287). So, Greene concludes, there is nothing in principle to prevent the simulation, not only of humans, but of a whole universe containing them—again, given that the (often unacknowledged) major premise is true and that organic life can indeed be reduced to information processing.
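Greene’s exponents can be checked with a few lines of arithmetic. The sketch below simply restates his figures (the variable names are ours) and confirms the headline claim: 10³⁵ operations dispatched at 10³³ operations per second takes 100 seconds, comfortably “less than two minutes.”

```python
# A back-of-the-envelope check of Greene's exponents. The figures are
# Greene's (The Hidden Reality, pp. 283-287); only the arithmetic is ours.

brain_ops_per_sec = 10**17      # one human brain, per Greene
lifetime_ops = 10**24           # one brain over a hundred-year life
humanity_ops = 10**35           # every brain that has ever run, "since Lucy"

earth_computer_slow = 10**33    # earth-sized computer, low estimate (ops/sec)
earth_computer_fast = 10**42    # high estimate (ops/sec)

# Running all human thought ever on the slower earth-sized machine:
seconds = humanity_ops / earth_computer_slow
print(seconds)  # 100.0 seconds -- Greene's "less than two minutes"
```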
In Meditations on First Philosophy (1641), René Descartes famously worries whether he can trust his senses or thoughts at all.102 The people walking outside below his window might be automata, he imagines, and the mountains in the distance some sophisticated projection. For all he knows, it might even be that 2 + 2 = 7 and that the angles of a triangle actually add up to 193 degrees—but that the universe is being run by an “evil deceiver” whose psychotic will it is to ensure his creatures are miserably wrong about everything. In the meditations that follow, Descartes uses this fantasy as a means of doubting everything—from his body to his memory to his surroundings—seeking from this point of radical doubt to find one thing of which he can be certain. His first discovery is well known: “thought exists,” he proclaims; even if everything he thinks is wrong, thinking is nevertheless going on (2.27). So although the evil deceiver might have tricked him into believing that he has legs or that objects fall downward or that 2 + 2 = 4 when the answer is really 7, “he will never bring it about that I am nothing so long as I shall think that I am something” (2.25). This, therefore, becomes Descartes’s “Archimedean point,” the one thing of which he can be certain. If thought exists, then he exists; and if he exists, then, as the rest of the Meditations demonstrate, anything Descartes “clearly and distinctly” perceives to be true “is necessarily true” (5.70). Again, this exercise is well known; from T-shirts to coffee mugs to puns about Descartes blipping out of existence when he declines to order dessert (“I think not”), most people have a sense that Descartes’s epistemology, his theory of what we can know, hinges on the demonstration of his own existence as a thinking thing.
What is less commonly discussed is that Descartes’s philosophy also hinges on a demonstration of the existence of God. Descartes devotes lengthy sections of Meditations 3 and 5 to proving not only that there is a God, but that this God is good. The reason for this undertaking is that if God does not exist, and if he (per Descartes) is not good, then there may well be an evil deceiver (“God” might even be that deceiver!), and Descartes may well be a brain in a vat in his diabolical laboratory. And who knows—maybe the demon can deceive Descartes into thinking that he is thinking, when he is actually doing no such thing; maybe the evil deceiver is just that deranged. On its own, then, the infamous “I think” (cogito) is insufficient. It is only with the knowledge “that there is a God, and … that everything else depends on him, and that he is not a deceiver” that Descartes can be certain that he thinks, and that he can think clearly and distinctly, and that he has a body, and that other beings exist outside his mind, and that apples fall downward, and that the angles of a triangle add up to 180 degrees (5.70, emphasis added). “And thus,” Descartes concludes, driving semester after semester of philosophy students crazy, “I see plainly that the certainty and truth of every science depends [sic] exclusively upon the knowledge of the true God” (5.71).
The point of this sudden excursus through Descartes is simply to point out that with the simulation argument the evil deceiver has returned—and multiplied. Greene testifies to this outcome in The Hidden Reality (without mentioning Descartes) when he asks, “[O]nce we conclude that there’s a high likelihood that we’re living in a computer simulation, how do we trust anything, including the very reasoning that led to the conclusion? … Will the sun rise tomorrow?” (288–89). As Descartes feared, without the assurance that the universe was created by a loving God, it does become a real possibility (at least for some) that there might be “not a supremely good God, the source of truth, but rather an evil genius, supremely powerful and clever, who has directed his entire effort at deceiving me.” In this event, says Descartes, “the heavens, the air, the earth, colors, shapes, sounds, and all external things” would be “nothing but the bedeviling hoaxes of my dreams, with which he lays snares for my credulity” (1.22). What, then, are we to do?
For Paul Davies, the simulation argument is the point at which the whole multiverse hypothesis becomes a “reductio ad absurdum,” undermining itself completely. “If the perceived universe is a fake,” he explains, “then so are its laws, and we have no justification in extrapolating the fake physics to the whole of reality…. And since we would have no idea at all what the laws of physics might be in the real universe—and no reason to expect them to resemble ‘our’ laws—then we cannot assume that the real laws will permit a multiverse.”103 So, he concludes, “while it may be true that our universe is a fake, it seems to me that drawing that conclusion would spell the end of scientific inquiry.”104 We will return to this and other critiques of the multiverse in the final chapter, but for the moment it will suffice to mention that other thinkers approach the simulation possibility a bit more lightheartedly. Along with Davies, they concede that the moment we presume we are being simulated, we must abandon the Enlightenment dream to know the world as it “really is”; all we can hope to know is confined to the world as it appears to us at this moment right now. So, as economist Robin Hanson suggests, the question might have to shift from an epistemological one to an ethical one. Rather than wondering how to gain knowledge in a simulation, he argues, we ought to be asking “how to live in a simulation.”105 Most pressingly, we ought to be asking: How are we to live in such a way that our simulators do not decide to pull the plug on our universe or replace us with a cuter, more interesting set of sims?
Hanson frames his argument much the way Blaise Pascal frames his famous “wager” on the existence of God.106 We may or may not be living in a simulation, Hanson begins. It is likely that we will never know for sure. Therefore, “all else equal,” you should live as if you are being simulated and do your best to ensure your continued (virtual) existence. “All else equal,” Hanson suggests, “you should … expect to and try to participate in pivotal events, be entertaining and praiseworthy, and keep the famous people around you happy and interested in you.” Although it is hard to know what “all else equal” means here, the main assumption behind Hanson’s behavioral prescriptions is that our simulators are likely to keep running the simulations that entertain them. Our efforts should therefore be directed toward keeping them happy with us; as Hanson points out, “your motivation to save for retirement, or to help the poor in Ethiopia, might be muted by realizing that in your simulation, you will never retire and there is no Ethiopia.” So, he concludes, “all else equal you should care less about the future of humanity and live more for today.”107
Again, it is difficult to know how, exactly, “all else” might be “equal” under these circumstances. How does one weigh the possibility that people might be dying of starvation against the possibility that they might be “the bedeviling hoaxes” of some narcissistic simulators? Moreover, ought we to assume such narcissism on the part of our simulators in the first place? What if they are running a moral experiment, pulling the plug on anyone who fails to respond to the suffering of other simulated creatures? Hanson is aware that he is making an unprovable assumption about the programmers’ character, but he thinks that it is nevertheless a safe bet. The traits the simulators would encourage in us would likely be the same ones they themselves possess, and “it would seem inconsistent of them to greatly emphasize humility, for example.” After all, they themselves are “willing to play God.” So, Hanson exclaims, “be funny, outrageous, violent, sexy, strange, pathetic, heroic … in a word, ‘dramatic.’”108 Otherwise, the simulators might not let you be at all.
It seems important at this juncture to state the obvious: we have somehow again collided with theology. The moment we engage in speculations and wagers about the existence of Simulators,109 the moment we say (as we did in relation to the black-hole producers) that “they” are just like “us,” only smarter and more powerful, the moment we try to enumerate their attributes, divine their will for humanity, and attempt to conduct ourselves in accordance with it, there is no conceivable way to distinguish our technoprophetic ruminations from constructive theology in its most classic form. To make matters worse, not only do the Simulators look like the old gods in new lab coats, but they also prevent our knowing anything “true” about the world at all. Most physicists are therefore eager to point out that the simulation argument stems from philosophy, not physics, and even that “simulated realities are not welcomed into the scientific world-view.”110 That having been said, simulation arguments have not been shut out of the “scientific world-view” entirely; at least a few (particularly prolific and media-savvy) physicist-cosmologists find the idea worthy of serious thought as they work their way through different theories of the multiverse.111 What is ironic, however, is not only that the simulated multiverse dismantles any claim to know the world as it is (including whether there is a multiverse or not), but also that it undermines much of the philosophical impetus behind the recent multiversal turn in scientific cosmology. As John Barrow reminds us,
[T]he multiverse scenario is favoured by many cosmologists as a way to avoid the conclusion that the Universe was specially designed for life by a Grand Designer. Others see it as a way of avoiding having anything to say about the problem of fine tuning at all. But now we see, once conscious observers are allowed to intervene in the Universe … there is a new problem. We end up with a scenario in which the gods reappear in unlimited numbers in the guise of the simulators who have power of life and death over the simulated realities that they bring into being.112
In a similar vein, Davies charges that “the simulated beings … stand in the same relation to the simulating system as human beings stand in relation to the God (or gods) of traditional religion.”113 The only considerable difference between these new creator-gods and the old creator-gods/God (whether demiurgic or omnipotent) is that the black-hole manufacturers and universe simulators make their worlds neither out of some primordial chaos of materials nor out of nothing, but out of other worlds—that is, the ones that they themselves are in. In other words, both the lab-designer and the simulation arguments rely on earlier cosmic processes to get “natural” worlds going before (post)humans gain the power and intelligence necessary to make their own.
Edward Harrison navigates this difficulty by relying on a combination of Lee Smolin’s and Alan Guth’s cosmologies. In the beginning, he ventures, collapsing regions of space-time inflate on the “other” side of black holes, evolve according to natural selection, and then eventually produce “at least one universe … with intelligence at about our level.”114 When the universe’s humanoids become capable of creating universes, the number of manufactured universes begins to exceed the number of galaxy-rich “natural” universes—until the probability of any given creature’s living in a simulation becomes “almost one.” Martin Rees takes a similar tack, using different resources. Integrating the simulation argument not with Smolin, but with inflationary cosmology and string theory, Rees argues that of the 10^500 universes of the string landscape, surely some will be able to produce the quantum computers that can simulate worlds. As he puts it, “[O]nce you accept the idea of the multiverse, and that some universes will have immense potentiality for complexity, it’s a logical consequence that in some of those universes there will be the potential to simulate parts of themselves.” But the fairly dizzying result of all this is that “you may get [a] sort of infinite regress, so that we don’t know where reality stops and where the minds and ideas take over, and we don’t know what our place is in this grand ensemble of universes and simulated universes.”115
Multiverse of Multiverses (Forever and Ever)
Martin Rees’s concern about the confusion between “reality” and “ideas” places this snowballing multiverse scenario into a philosophical lineage at least as old as Plato. The terminology, however, is a bit misleading: for Plato, “reality” was the “Ideas.” The realm of the Forms was the realm of Being, Truth, Eternity, whereas the material world was in one or another way derivative: either a deceptive shadow of the Forms (as in the Allegory of the Cave) or a “mixed” incarnation and a “moving image” of the Forms themselves (as in the cosmology of the Timaeus).116 But either way, the ideal is what is really real for Plato because it (presumably) never changes, (presumably) transcends spatial and temporal location, and (presumably) holds for all beings everywhere. The most common illustrations of this principle tend to be mathematical: an ideal triangle is more real than any material triangle in the universe; it is the Triangle itself, whereas all earthly triangles do their best only to approximate it. It is in this sense that theoretical physicist and multiverse theorist Max Tegmark proclaims himself a “Platonist”; as far as he is concerned, the reason mathematics “describes the universe so well” is not that math is a useful abstraction of physics (which would be the “Aristotelian” position), but that the physical world is an incarnation of mathematics itself: “[T]he universe,” he declares, “is inherently mathematical.”117
Declaring that “most theoretical physicists” share this view, Tegmark ascribes their incessant propensity to ask “why” to their thoroughgoing Platonism.118 As he understands it, “the Platonic paradigm raises the question of why the universe is the way it is. To an Aristotelian, this is a meaningless question: the universe just is. But a Platonist cannot help but wonder why it could not have been different. If the universe is inherently mathematical, then why was only one of the many mathematical structures singled out to describe a universe?”119 At this point, it becomes hard to know what Tegmark means by a “Platonist” and therefore hard to answer the question. For Plato, at least, the reason the universe has the specific mathematical configuration it does is that this configuration allows it most clearly to resemble the “perfect living creature”—that is, the Form that contains all other Forms within it.120 This resemblance crucially hinges on the eternity and singularity of the cosmos; regardless of how seriously one takes the demiurge, Plato’s explanation for the fine-tuning of the cosmos is that the universe has precisely the mathematical configuration it needs to ensure, first, that it lives forever and, second, that it has no other worlds outside itself.121
Perhaps unsurprisingly, this is not the way that Tegmark answers his own “Platonic” question about the properties of our universe. Rather, offering an overly concrete interpretation of the “reality” of the mathematical, he suggests “that all mathematical structures exist physically as well. Every mathematical structure corresponds to a parallel universe.” And just as one might begin to respond, with some perplexity, that the “reality” of the Platonic Forms consists precisely in their not existing physically, Tegmark adds the further “Platonic principle” that “the elements of this multiverse do not reside in the same space but exist outside space and time.”122 This principle is what Tegmark calls the “Mathematical Universe Hypothesis”: every possible universe actually exists—physically—in some kind of Idea-scape “outside space and time.” It is as if Timaeus’s “Perfect Living Creature” were not just the form of creation, but the sum of creation.
I do not want to belabor the point of Tegmark’s exceedingly strange interpretation of Platonism—merely to mark how important it seems to him to speak under that mantle (in one piece, he calls his stance a “radical Platonism” insofar as it asserts that the ideas “exist ‘out there’ in a physical sense”).123 Tegmark’s clearer philosophical ancestors are the American philosophers David Lewis, whose theory of “modal realism” states that all metaphysically (not just mathematically) possible histories must be actual,124 and Robert Nozick, who argues that the characteristics of our universe make sense only if every other set of characteristics is actual somewhere else. Without referring to the anthropic principle, Nozick operates according to what he calls the “fecundity assumption,” a descendant of Arthur Lovejoy’s “principle of plenitude,” which of course took its cue from Lucretius.125 From this genealogical perspective, then, Tegmark is far more of an Atomist than he is a Platonist.
Rather than signaling much of a metaphysical lineage, Tegmark’s claim to Plato might instead have something to do with the foundational place that Plato occupies in the history of Western mathematics, philosophy, and cosmology, for, as far as Tegmark is concerned, his own Mathematical Universe is the consummation of all these fields of knowledge. His claim might also have something to do with the Neoplatonic notion of a progressive cosmic “hierarchy,” a ranked Chain of Being that extends from rocks and worms to animals, humans, angels, archangels, and finally to the Absolute itself.126 After all, Tegmark’s multiverses are arranged in what he himself calls “the multiverse hierarchy”: Level I is nestled within Level II; Level III brings Level I into infinite dimensional space; and the landscape describes them all.127 But the great chain of kosmoi does not end here. Having walked his readers up this steep mountain of multiverses, Tegmark announces that Level IV, which is to say his own mathematical multiverse, is the highest level imaginable—even higher than the landscape, for although, in the words of Helge Kragh, “10^500 universes are a lot, the number is infinitesimal compared to the number of possible universes.”128 A multiverse that comprises all mathematically possible universes “brings closure to the hierarchy of multiverses,” Tegmark claims, “because any self-consistent fundamental physical theory can be phrased as some kind of mathematical structure.”129
This is the reason that Brian Greene calls Tegmark’s Mathematical Universe Hypothesis the “Ultimate Multiverse,” and Davies calls it a “multiverse with a vengeance.”130 To wit, there is room in this multiverse for everyone: room for quilted multiverses and inflationary multiverses, room for Atomists and Stoics, room for Thales’s water and Anaximenes’s air, room for “one boring pair of cymbals clashing out the same old song.”131 In the compendium of all possible universes, some worlds will be stacked like slices of bread, and some will fly down the throat of a Calabi–Yau manifold: “A universe governed by Newton’s equations and populated solely by solid billiard balls … is a real universe; an empty universe with 666 spatial dimensions … is a universe too.”132 “How about a universe that obeys the laws of classical physics, with no quantum effects?” asks Tegmark. “How about time that comes in discrete steps, as for computers, instead of being continuous? How about a universe that is simply an empty dodecahedron? In the Level IV multiverse, all these alternative realities actually exist.”133 So some worlds will be linear, and some will be cyclical; some will be singular, and some will be plural; some will be infinite, and some will be finite; some will branch forward, and some will branch back. Some worlds will be manufactured, and some will be simulated; some designers will be kind, and some will be cruel, some capable and some all but incompetent.
And, presumably, some of the set of all possible worlds will have a creator-god who breathes over primordial waters, who separates the seas from dry land.
How on earth did we get back here?