DO WE LIVE IN A SIMULATION?

Human nature, the laws of physics, and the march of technological progress

In 1998, almost no one you’d meet on the street would have given this question a moment’s thought. By the end of 1999, the possibility had been discussed by millions of people around the globe. Why? Because they had seen The Matrix. The central premise of the film is that the human population of Earth is lying in vats of nutrient, their energy being harvested by a race of machines.

To keep us from reacting to this horror, we are granted an existence in a simulated reality, accessed via a direct connection to our brains. All our conscious experiences, then, are nothing more than the product of a computer program.

It’s not an unprecedented idea. Philosophers since Descartes have argued about whether our perception of reality could be the product of deception, and science-fiction writers have used a similar premise many times. In 1966, for example, Philip K. Dick published a story where people bought “implanted memories” that enabled them to experience things they had never done. The TV series Doctor Who introduced a massive computer system called the “Matrix” in 1976; this could also be directly connected to the brain to allow out-of-body experiences.

But the 1999 movie The Matrix obviously hit the screens at just the right time. Within a few years of its release, physicists were discussing the idea at scientific conferences, and every time they did, it was the movie that was referenced. Strange as that may seem, there was good reason. The idea that we live in a simulated reality was one of the few plausible answers to a very old question that had just resurfaced in physics.

Looking out at the universe, astronomers have noticed something strange. They almost hesitate to mention it, but it is like an elephant in the room, and has to be acknowledged. This universe is remarkably good for us. Change it a little bit—tweak one of the laws of nature, say—and we simply wouldn’t have arisen. It is almost as if the universe was purposely designed for our habitation. If that is the case, could the designer be a race of superintelligent beings who have some reason—maybe work, maybe pleasure—to will our existence?

It’s a big “if,” of course—perhaps the biggest “if” in physics. The discussion of that “if” even has a name: the “anthropic principle.” It’s a misnomer really. For starters, it’s more of a suggestion than a principle. And, although anthropic means “human-centered,” that’s not really what it’s about. The person who coined the term, astrophysicist Brandon Carter, meant it to encompass not just human life, but the existence of intelligent life in general.

Carter came up with the anthropic principle at a time when physicists were coming to terms with a new paradigm: the Big Bang. Until the idea of a beginning to the universe was widely accepted, physicists had assumed there was no such thing as a “special” time in the universe’s history. The universe had always existed, and would always exist, pretty much as it is.

With the 1965 discovery of the cosmic microwave background radiation, though, everything changed. Once the radiation was recognized as an echo of the moment of creation, the universe was seen to have an unfolding history, punctuated by significant events. The trouble was, one of the central premises of astronomy has always been the Copernican principle, which asserts that humans hold no special place in space or in time. With the Big Bang, the Copernican principle was under threat.

A special universe?

But, Carter said, whatever our prejudices, we have to acknowledge there is something special about our relationship with the universe. “Although our situation is not necessarily central, it is inevitably privileged to some extent,” he told an assembly of scientists in 1974. That privilege comes, first, through the laws that govern the universe’s evolution.

There are a number of reasons why one might think that these laws were designed to give us a comfortable existence. The first is the rather convenient strength of gravity. After the Big Bang, space was expanding, forcing all the particles of matter further and further away from each other. The force of gravity was working against that expansion, however: the mutual gravitational attraction of the particles pulled them toward each other.

There are three ways in which this could have worked out. First, the expansion of space could have overwhelmed gravity’s pull. In this scenario, known as the “open” universe, every particle of matter would be pushed further and further apart, and the increasing separation would make the gravitational pull weaker and weaker. In this situation, galaxies—maybe even the stars themselves—would not have formed.

What if gravity’s pull overwhelmed the push of expanding space? Then stars and galaxies might have briefly formed, but the strength of gravity means they would have quickly collapsed in on themselves and each other, and the universe would have imploded in a huge gravitational crunch. This is the “closed” universe.

The third, “critical” scenario involves a delicate balance between push and pull. Here the density of matter in the universe is such that, just after the Big Bang, the gravitational pull almost perfectly offsets the expansion of space. It pulls matter together just enough for stars to form, and for the stars to gather into galaxies. Thanks to their mutual gravitational pull, the expansion of the space between them is slowed, and the universe is granted a long and fruitful life.

A cosmic coincidence

So, what is the difference between these scenarios? When astronomers crunch the numbers, they first look at the critical universe. For this, they need to examine the density of matter in the universe, a parameter they call “Omega.” It turns out that, for the critical scenario to occur, Omega had to have a particular value at one second after the Big Bang; astronomers define this critical value as one. And if Omega had differed from one by even an astonishingly small amount—one part in a million billion—the universe would have either crunched closed or flung matter far apart long before life could establish itself in the benign environment surrounding a young star such as our sun.
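
To see why the balance has to be that precise, here is a minimal back-of-the-envelope sketch in Python. It assumes the standard textbook scaling behind the “flatness problem”: any deviation of Omega from one grows roughly in proportion to time while radiation dominates the universe, and in proportion to time to the two-thirds power once matter takes over. The epoch values are rough, rounded figures used purely for illustration.

```python
# Toy illustration of the "flatness problem": how a tiny deviation of Omega
# from 1 at t = 1 second grows over cosmic history.
# Assumed scalings: |Omega - 1| grows roughly as t in the radiation era,
# and as t**(2/3) in the matter era (standard textbook approximations).

deviation_at_1s = 1e-15     # one part in a million billion at t = 1 s
t_start    = 1.0            # seconds
t_equality = 1.6e12         # ~50,000 years: rough radiation/matter equality
t_now      = 4.3e17         # ~13.8 billion years, in seconds (rough)

dev_at_equality = deviation_at_1s * (t_equality / t_start)          # radiation era
dev_now = dev_at_equality * (t_now / t_equality) ** (2.0 / 3.0)     # matter era

print(f"|Omega - 1| at equality: {dev_at_equality:.1e}")   # ~1.6e-03
print(f"|Omega - 1| today:       {dev_now:.1e}")           # of order a few
```

Run forward, even that part-in-a-million-billion wobble grows to order one by the present day, which is why a universe that is still anywhere near critical now must have started out balanced to such extraordinary precision.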

It’s not the only cosmic coincidence. If the strength of gravity is conveniently but finely balanced against the initial expansion of space, allowing stars like our sun to form, consider the efficiency with which the sun releases energy by fusing hydrogen atoms to form helium. The efficiency is around 0.007. That is, when the combined mass of the hydrogen atoms is compared with the mass of the newly formed helium, 0.7 percent has disappeared. That missing mass has been converted, via E = mc², into the energy—mainly heat—that powers life on Earth.
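
The 0.7 percent figure can be checked with a few lines of arithmetic. The sketch below simply compares standard atomic masses; it illustrates the bookkeeping behind the 0.007 efficiency rather than modeling the actual chain of fusion reactions inside the sun.

```python
# Mass bookkeeping for hydrogen fusion: four hydrogen atoms -> one helium-4 atom.
# Atomic masses are in unified atomic mass units (u).
m_hydrogen = 1.007825    # hydrogen-1 atom
m_helium4  = 4.002602    # helium-4 atom
u_in_MeV   = 931.494     # energy equivalent of 1 u, in MeV (E = mc^2)

mass_in    = 4 * m_hydrogen
mass_lost  = mass_in - m_helium4
efficiency = mass_lost / mass_in           # fraction of mass converted to energy
energy_MeV = mass_lost * u_in_MeV          # energy released per helium-4 formed

print(f"efficiency ≈ {efficiency:.4f}")                    # ≈ 0.0071, the famous 0.007
print(f"energy released ≈ {energy_MeV:.1f} MeV per He-4")  # ≈ 26.7 MeV
```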

So how much leeway is there here? Raising the efficiency of transformation means allowing a slightly stronger “glue” between the particles in the nucleus of an atom. If the efficiency were higher than 0.008, all the hydrogen created in the Big Bang would have been turned into helium almost immediately, and there would be none left to burn in stars. It would give a dead universe, in other words. Going the other way, lowering the efficiency to 0.006 would mean nuclear glue so weak that helium would never form, and the sun would never ignite. Again, no life would be possible.

Then there is the fact that the electric force between subatomic particles is around 10⁴⁰ times stronger than the gravitational force between them. This ratio gives atoms their fundamental characteristics. There is an electrical attraction between the positively charged nucleus and the negatively charged orbiting electrons, and there is also a far feebler mutual attraction due to gravity. Alter the ratio between the two forces by a small amount, and you change the characteristics of atoms so much that it alters the characteristics of stars—go one way and you would create a universe where planets don’t form around stars like our sun. Go in the other direction and you threaten the existence of the supernovae that forged the carbon atoms that underpin the chemistry of life. There are other examples. Reduce the neutron’s mass by a fraction of 1 percent, for example, and no atoms would form.
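
That enormous ratio is easy to check for yourself. The sketch below compares the electrical and gravitational pulls between a proton and an electron using standard physical constants; because both forces fall off with the square of distance, the separation cancels out. The exact power of ten depends on which pair of particles you compare, which is why quoted figures range from roughly 10³⁶ to 10⁴⁰.

```python
# Ratio of the electric to the gravitational force between a proton and an electron.
# Both forces scale as 1/r^2, so the separation r cancels out of the ratio.
k   = 8.9875517e9    # Coulomb constant, N*m^2/C^2
e   = 1.602177e-19   # elementary charge, C
G   = 6.674e-11      # gravitational constant, N*m^2/kg^2
m_p = 1.67262e-27    # proton mass, kg
m_e = 9.10938e-31    # electron mass, kg

electric      = k * e**2         # Coulomb's law, with the 1/r^2 factor dropped
gravitational = G * m_p * m_e    # Newton's law, with the 1/r^2 factor dropped

print(f"electric / gravitational ≈ {electric / gravitational:.2e}")  # ≈ 2.3e39
```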

Monkeying with the universe

It all sounds like a fix, doesn’t it? The great English astronomer Fred Hoyle thought so. He once complained that the universe was so biofriendly that it looked like “a put-up job.” Someone or something, he suggested, was “monkeying” with the laws of physics to facilitate the production of life.

So what does a scientist do about this? Besides saying God did it—which leads scientists nowhere in the quest for an answer—there are three options.

The first option is to turn the problem on its head. We wouldn’t be around to worry about these things if the universe were any different. Of course it is so precisely balanced for life. We could not be in a universe that was any different. Such an approach forces us to consider the existence of other universes where the laws of physics give different values to those crucial numbers. Besides being dead universes, however, they are also scientific dead ends. We cannot access them, so we have to content ourselves with not finding a satisfactory answer to the question of our universe’s fine-tuning for life.

A second approach is similarly unsatisfactory: we put the fine-tuning down to the existence of a supernatural designer, a being that transcends the natural laws. Here, too, we have no hope of discerning whether the approach is the right one.

The third option is the one we have been working toward: that the universe is so well suited to our existence because it was designed for our existence. The designers in this case are not deities. They are beings like us. Only much, much more advanced in their control of technology. So advanced, in fact, that they can create two amazing things. First, beings that exhibit what we consider to be consciousness. And second, a world for those beings to experience with their consciousness. This is the logical sequence known as the simulation argument. The first person to pull it all together was a philosopher called Nick Bostrom. In 2001, he began circulating a paper entitled “Are You Living in a Computer Simulation?” His answer was, yes, quite possibly.

Creating the world anew

Bostrom’s argument is fairly straightforward. Stop and think about the computing power now at your disposal. Compare that to the power available a decade ago. What about two decades ago? Now translate that into the future. If our civilization survives the next millennium, the computing power available to its population will be of a magnitude that is unimaginable to us today.

Now come back to the present. What is one of the most popular types of computer games? Simulation. Take the extraordinary success of the simulation Second Life, for example. It gives people the opportunity for an alternative existence—an opportunity that millions have grabbed with both hands. Other simulation games allow you to play the deity, controlling others, or just watching how their fates unfold. Something about the human mind loves to get involved in another world. And why should things be any different one thousand years into the future?


Bostrom’s argument is that at least one of the following three propositions has to be true. The first is that humans are overwhelmingly likely to become extinct before reaching a level of sophistication where they are able to run computer simulations—virtual reality—that would mirror what we experience as reality. The second is that any such civilizations that survive are extremely unlikely to run such simulations. The third is that we are almost certainly living in such a computer simulation.

The first proposition seems unlikely. There is no a priori reason why we will necessarily wipe ourselves out, or be wiped out. The second seems even more unlikely: our own delight in simulations gives us no reason to suppose that, handed even more simulating power, we would decline to use it. Which leaves the third proposition. Given that we are talking about a far future in which a vast number of civilizations spread throughout the “original” universe will be running simulations, what are the chances that we are in that original universe and not in a simulation? Infinitesimal. In other words, we are almost certainly living in a simulation.
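
The arithmetic behind that “infinitesimal” is simple counting. The numbers in the sketch below are invented purely for illustration: if each advanced civilization runs many simulations, each populated with roughly as many conscious observers as the unsimulated world, the odds of finding yourself in the one “original” reality collapse toward zero.

```python
# Toy counting argument: what fraction of all observers are simulated?
# Every number here is invented, purely for illustration.
base_civilizations  = 1_000      # civilizations in the "original" universe
sims_per_civ        = 10_000     # simulations each of them runs
observers_per_world = 1.0        # relative population per world (same in both)

real_observers      = base_civilizations * observers_per_world
simulated_observers = base_civilizations * sims_per_civ * observers_per_world

p_simulated = simulated_observers / (simulated_observers + real_observers)
print(f"P(you are simulated) ≈ {p_simulated:.4f}")   # ≈ 0.9999 for these numbers
```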

It’s not something to get depressed about—the world is as real as it has ever been. What’s more, unlike the ideas of universes run by supernatural gods, the simulation argument just might be open to testing. The first point to recognize is that it does indeed answer the question about fine-tuning. The simulation’s creators must have a reason to create it. It seems sensible to suggest, therefore, that the overwhelming majority of simulations will have to work well enough to be interesting to their creators and users. Our experience with creating simulation environments suggests that this means populating them with beings that can enjoy their “existence,” which, in turn, tends to involve an ability to interact with the simulated world and its inhabitants.

A plausible simulation will therefore encourage the development of something we would regard as complex life. As we have seen with our look at the laws of nature, that gives a fairly narrow range of possibilities for the set-up. That at least provides a plausible explanation for the fine-tuning. Now we have to look for a scientific test for such an explanation. Again, this can be found within our own experience of creating simulations.

Conservative computing

One of the central rules of programming is that you don’t waste precious computing resources. That means that any simulation will not be infinitely smooth. It will be built well enough to give its conscious avatars a sense of continuity in the world around them—but no better than is necessary. That means a sudden, close look might expose the gaps in the programming.
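
That “compute only what is looked at” principle is routine in today’s games and scientific simulations, usually under names such as lazy evaluation or level-of-detail rendering. The sketch below is a deliberately crude illustration of the idea, not a claim about how any real engine (let alone the universe) works: fine detail is generated only at the moment an observer looks closely, and is then cached so that repeated looks stay consistent.

```python
# Illustrative lazy "world": coarse structure is cheap and always available;
# expensive fine detail is generated on demand and cached for consistency.
import random

class LazyWorld:
    def __init__(self, seed=42):
        self.rng = random.Random(seed)
        self.fine_detail = {}           # cache: (x, y) -> rendered detail

    def coarse_view(self, x, y):
        # A cheap placeholder, good enough for a distant glance.
        return "blurry landscape"

    def close_look(self, x, y):
        # Costly detail is created only when someone actually looks.
        if (x, y) not in self.fine_detail:
            self.fine_detail[(x, y)] = self.rng.random()
        return self.fine_detail[(x, y)]

world = LazyWorld()
print(world.coarse_view(3, 7))   # costs almost nothing
print(world.close_look(3, 7))    # detail is generated only now
print(world.close_look(3, 7))    # same value again: the cache keeps the story straight
```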

We may, in fact, have already done so. We already know that the theories we have devised to describe our reality have apparent inconsistencies. Quantum theory, which describes the things we encounter at subatomic scales, for instance, does not make sense to the human mind. It allows particles to have multiple existences, occupying two spatial positions at the same time or simultaneously moving in opposite directions.

Similarly, relativity, which we use to describe reality when we are considering large, cosmological scales, fails to describe the most extreme of cosmological conditions, such as the interior of a black hole, or the geometry of the moment of the Big Bang. Could it be that these frustrating limitations to our theories reflect the limits of the programming behind our reality?

There is further evidence to consider. One of the most significant aims of modern science is to “unify” the laws of physics. At the moment, the main thrust of that is to marry together relativity and quantum theory. However, it is a marriage that no one has yet managed to broker. Might that be because it is fundamentally impossible?

When creating today’s simulations, programmers use a particular method for coding the finer details—the movement of hairs in a polar bear’s fur, say. The methods for creating a facsimile of a pastoral landscape are different. Similarly, the creators of our simulation may have used different methods for programming our reality on different scales, so we should not expect to be able to marry them together. If that is the case, the frustrations of science might be a clue to the nature of our existence.

A further clue might be found in our genetic code. Our DNA tends to make mistakes when replicating. Left uncorrected, these mistakes would be enough to give any species a short shelf life—perhaps too short to evolve. The simulated story of life would have crashed quickly had it not been for error-correcting routines embedded in the function of our genes. We do the same with our computer programs: we incorporate error-correcting routines that put things right before things go irretrievably awry. It is not a big stretch, therefore, to imagine that the simulation’s programmers would have to employ the same methods.
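
The simplest error-correcting routine in computing is sheer redundancy: store every bit several times and let a majority vote repair isolated flips. The sketch below shows that idea in miniature; it illustrates the general principle the passage appeals to, not the specific repair machinery that operates on DNA.

```python
# Minimal error correction by triple redundancy: every bit is stored three
# times, and a majority vote recovers the original even if one copy is flipped.

def encode(bits):
    return [b for b in bits for _ in range(3)]        # repeat each bit 3 times

def correct(noisy):
    decoded = []
    for i in range(0, len(noisy), 3):
        triple = noisy[i:i + 3]
        decoded.append(1 if sum(triple) >= 2 else 0)  # majority vote per triple
    return decoded

message = [1, 0, 1, 1]
stored = encode(message)
stored[4] ^= 1                       # a single "mutation" flips one stored copy
print(correct(stored) == message)    # True: the damage is silently repaired
```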

One suggestion that has been made by serious physicists is that a correction to the simulation might create cracks, or even breaks, in the laws of physics. Some things might not behave as expected. Have we made any such observations? As a matter of fact, yes. Astronomers have suggested, for instance, that the light reaching Earth from the furthest observable galaxies shows signs that the laws of physics have suffered a tweak at some point in the distant past. The light was emitted 12 billion years ago, and its interactions with matter during its journey across the universe have a slightly different character from what one might reasonably expect.

The observation seems to suggest that one of the constants of physics, the fine-structure constant, which governs the fine details of how light and matter interact, was subtly different in the past. Is this a programming error, or part of an error-correction routine? Though the scientific inference about the varying constant seems solid enough, the suggestion that it provides support for the idea that we live in a simulation remains controversial.

None of these “tests” are knock-down convincing. The idea that we are living in a computer simulation is an intriguing one, and in many ways it offers a highly plausible answer to one of the most vexing problems of modern physics. Whether it can be proved or falsified remains an open question. Maybe that’s why some philosophers have argued that the only way we will know for sure is if the humans propagating the idea are mysteriously “deleted” from the simulation because they pose a threat to its continued success. Others have made a similarly playful, but far more appealing suggestion. Now that we have made this discovery, it seems entirely possible that we could soon find a huge message rending the sky asunder: “Congratulations: please proceed to Level 2.”