The possibilities addressed in the chapters above, far-fetched as they may seem, suggest that future technological developments may be even less predictable than recent history would lead one to expect. In this final chapter, I discuss a few possible long-term directions for the evolution of technology. These are educated guesses or, if you prefer, wild speculations.
Bootstrapping is a term that may have originated a couple of centuries ago to speak of an imagined ability to lift oneself up by one’s own bootstraps. It implies the ability of a process to sustain itself, and to keep reinforcing itself, once it has been started.
Digital minds have a number of advantages over biological minds. They can be copied an arbitrary number of times, they can be run at different speeds (depending on how much computational power is available), and they can be instrumented and changed in ways that are not applicable to biological minds. It is reasonable to assume that digital minds, once in place, could be used to advance our understanding of intelligence. These simple advantages of digital minds alone would make them important in the development of advanced AI technologies.
The basic idea that an intelligent system, running in a computer, can bootstrap itself to become more and more intelligent, without the need for human intervention, is an old one and has appeared in many forms in scientific works and in science fiction. This idea is sometimes called seed AI, a term meant to suggest that all that is needed to start an ever-accelerating process is a first artificially intelligent system smart enough to understand and improve itself. Although no such system exists, or is even planned, it is reasonable to assume that digital minds, if they ever exist, will play an important role in the further development of the very technologies that led to their existence.
In 1965, Irving John Good (a mathematician who worked with Alan Turing at Bletchley Park, contributing to the British effort to break the Germans’ Enigma codes) coined the term intelligence explosion:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. (Good 1965)
Good believed that if machines could even slightly surpass human intelligence, they would surely be able to improve their own designs in ways not foreseen by their human designers, recursively augmenting their intellects past the level of human intelligence.
The possibility of bootstrapping, by which intelligent systems would lead to even more intelligent systems in an accelerating and self-reinforcing cycle, is one of the factors that have led many people to believe we will eventually observe a discontinuity in the way society is organized. Such a discontinuity has been called the technological singularity or simply the singularity. The term refers to a situation in which technological evolution leads to even more rapid technological evolution in a rapidly accelerating cycle that ends in an abrupt and radical change of the whole society. Intelligent machines are viewed as playing a major role in this cycle, since they could be used to design successively more intelligent machines, leading to levels of intelligence and technological capacity that would rapidly exceed human levels. Since this would lead to a situation in which humans would no longer be the dominant intelligence on Earth, the technological singularity would be an event beyond which the evolution of technology and society would become entirely unpredictable.
In mathematics, a singularity is a point at which a function or some other mathematical object reaches some exceptional or non-defined value or fails to have some specific smoothness property. For instance, the function 1/x has a singularity when x = 0, because at that point the function has no defined value. As x approaches 0 while taking positive values, the value of the function moves rapidly toward infinity.
This idea of a mathematical singularity inspired the use of the term technological singularity to denote a point at which there will be a lack of continuity in the technological evolution of mankind. The first person to use the word singularity in this context may have been the mathematician and physicist John von Neumann. In his 1958 tribute to von Neumann, Stanislaw Ulam describes a conversation with him that centered on “the ever-accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue” (Ulam 1958).
The expression technological singularity was popularized by Vernor Vinge, a scientist and science-fiction author who has argued that a number of different developments could bring about or contribute to the singularity (Vinge 1993), and by Ray Kurzweil, who wrote a number of books addressing what he sees as the coming singularity (1990, 2000, 2005).
It is important to note that technologies other than digital minds, natural or synthetic, could also lead to a technological singularity. Radical improvements in biology and medicine could lead to an ability to replace or regenerate human bodies and human brains, creating the possibility of eternal or almost eternal life. Such a situation would certainly disrupt many social conventions, but would not, in my mind, really correspond to a singularity, since society would probably be able to adapt, more or less smoothly, to a situation in which death was no longer inevitable. Significant advances in nanotechnology, making it possible to create any physical object from raw atoms, would also change many of the basic tenets of society, perhaps leading to a society in which any object could be obtained by anyone. However, legislation and other restrictions would probably ensure enough continuity so that human life, as we know it, would still be understandable to us.
To my mind, none of these technologies really compares, in its disruptive potential, to a technology capable of creating digital minds. Digital persons, living in a digital world, bound only by the limitations imposed by available computer power, would create an entirely different society, one so different from the society we currently have that it would justify the application of the term singularity. Increasing computational power as much as possible would lead to new computing technologies and architectures, perhaps directing a large fraction of the Earth’s resources to the creation of ever-more-powerful computers. Science fiction and futurologists have envisaged a situation in which the whole Earth would be turned into a computer, and have even coined a term for a substance that would be used to build computers as efficiently as the available resources allow. This substance, computronium, has been defined by some as a substance that extracts as much computational power as possible from the matter and energy available. Such a material would be used to create the ultimate computer, one that would satisfy the ever-increasing needs of a civilization running in a virtual world. In fact, the evolution of computers, leading to ever smaller and more powerful computing devices, seems to take us in the direction of computronium, even though at present only a small fraction of the Earth’s mass and energy is dedicated to making and operating computers. However, as transistors become smaller and smaller, and more and more resources are dedicated to computation, aren’t we progressing steadily toward creating computronium?
There is no general agreement as to whether the singularity will come to pass, much less on when. Optimists imagine it arriving sometime in the first half of the twenty-first century, in time to rescue some people alive today from the otherwise inevitable death brought by aging. Other, less optimistic estimates put it several hundred years in the future. In 2008, IEEE Spectrum, the flagship publication of the Institute of Electrical and Electronics Engineers, dedicated a special issue to the singularity. Many scientists, engineers, visionaries, and science-fiction writers discussed the question, but no clear conclusions were drawn.
Many arguments have been leveled against the idea of the singularity. We have already met the most obvious and perhaps the strongest of them. They are based on the idea that minds cannot be the result of a computational process and that consciousness and true intelligence can only result from the operation of a human brain. According to these arguments, any other intelligence, resulting from the operation of a computer, will always be a lesser intelligence, incapable of creating the self-accelerating, self-reinforcing process required to bring about the singularity.
Other arguments are based on the fact that most predictions of future technologies—underwater cities, nuclear fusion, interplanetary travel, flying automobiles, and so on—have not materialized. Therefore, the prediction of a technological singularity is no more likely to come true than many of these other predictions, simply because technology will take other routes and directions.
Many opponents of the idea of a singularity base their arguments on sociological and economic considerations. As more and more jobs disappear and are replaced by intelligent agents and robots, they argue, there will be less and less incentive to create more intelligent machines. Severe unemployment and reduced consumer demand will destroy the incentive to create the technologies required to bring about the singularity. Jared Diamond argues in his 2005 book Collapse: How Societies Choose to Fail or Succeed that cultures eventually collapse when they exceed the sustainable carrying capacity of their environment. If that is true, then our civilization may not evolve further in the direction of the singularity.
A final line of argument is based on the idea that the rate of technological innovation has already ceased to rise and is actually now declining (Modis 2006). Evidence cited for this decline in our capacity to innovate includes the fact that computer clock speeds are no longer increasing at their former rate, and the mounting indications that physical limitations will ultimately put a stop to Moore’s Law. If this argument is correct, and such a decline in innovation capacity is indeed happening, then the second half of the twentieth century and the first decades of the twenty-first will have witnessed the highest rate of technological change ever to occur.
I don’t have a final, personal opinion as to whether or not the singularity will happen. Although I believe that the evolution of technology follows a tendency that is essentially exponential, I recognize that such an analysis is true only when considered in broad strokes. Detailed analysis of actual technological innovations would certainly identify large deviations from such an exponential tendency. It is also true that exponential functions have no singularities. They grow faster and faster as time goes by, but they never reach a mathematical singularity.
However, it is also true, in general, that exponential growth proceeds so rapidly that it eventually runs into some physical limit. The exponential growth of living cells, driven by their ability to reproduce, can last only as long as physical resources are sufficient; growth slows once the population becomes too large for the resources available. Maybe the “exponential” evolution of technology we have been observing in recent decades will reach a limit, and the rate of innovation will then decrease, for technological or social reasons, leading to a more stable and predictable evolution of society.
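As a minimal numerical sketch of this contrast (not drawn from any particular source, and with purely illustrative parameters), one can compare unconstrained exponential growth with logistic growth, in which the growth rate falls to zero as a carrying capacity is approached:

```python
# Toy comparison of unconstrained exponential growth with logistic growth.
# The growth rate r and the carrying capacity K are purely illustrative.

def simulate(r=0.5, K=1000.0, x0=1.0, steps=30):
    exp_x, log_x = x0, x0
    for t in range(steps + 1):
        print(f"t={t:2d}  exponential={exp_x:14.1f}  logistic={log_x:8.1f}")
        exp_x += r * exp_x                    # dx/dt = r*x: grows without bound
        log_x += r * log_x * (1 - log_x / K)  # dx/dt = r*x*(1 - x/K): levels off near K

simulate()
```

The two curves are nearly indistinguishable at first; only as the resource limit is approached does the difference become apparent.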
However, even if the technological singularity doesn’t happen, I believe that changes will continue to accumulate at such a high rate that society will, by the end of the present century, have changed so drastically that no one alive today would recognize it. I have already mentioned Arthur C. Clarke’s third law, which states that any sufficiently advanced technology is indistinguishable from magic. Airplanes, cars, computers, and cellular phones would have appeared as magic to anyone living only a few hundred years ago. What wondrous technologies will the future bring? Digital minds may be only one of them!
True believers in the singularity may wish to take measures to ensure that their personalities, if not their bodies, are preserved long enough to take advantage of the possibilities the singularity may bring. The most conspicuous approach is that of people who believe in cryonics, the cryopreservation of human bodies. Cryopreservation is a well-developed process whereby cells and tissues are preserved by cooling them to very low temperatures. If the temperature is low enough (around 77 kelvin, the temperature of liquid nitrogen), the enzymatic and chemical activity that would otherwise damage biological tissues is effectively stopped, which makes it possible to preserve specimens for extended periods of time. The idea of cryonics is to keep human brains or whole human bodies frozen while waiting for the technologies of the future.
The challenge is to reach these low temperatures while avoiding a number of phenomena that cause extensive damage to cells. The most damaging of these are cell dehydration and extracellular and intracellular ice formation. These deleterious effects can be reduced through the use of chemicals known as cryoprotectants, but current technology still inflicts significant damage on organs beyond a certain size.
Once the preserved material has been frozen, it is relatively safe from suffering further damage and can be stored for hundreds of years, although other effects, such as radiation-induced cell damage, may have to be taken into consideration.
With existing technology, cryopreservation of people or of whole brains is deemed not reversible, since the damage inflicted on the tissues is extensive. Attempts to recover large mammals frozen for thousands of years by simply warming them were abandoned many decades ago. However, believers in cryonics take it for granted that future technology will be able to reverse the damage inflicted by the freezing process, making it possible that cryopreserved people (or brains) may someday be brought back to life.
A number of institutions and companies, including the Cryonics Institute and the Alcor Life Extension Foundation, give their members the option of having their heads or bodies frozen in liquid nitrogen for prices that start at less than $100,000—a cost that, by any standard, has to be considered very reasonable for a fair chance at resurrection. According to the statistics they publish, these institutions have, as of today, a few thousand members and have already preserved a few hundred bodies in liquid nitrogen.
There is a strong ongoing debate about the feasibility of cryonics (Merkle 1992). An open letter supporting the feasibility of the technology was signed by a number of well-known scientists, but most informed people are highly skeptical and view cryonics as a dishonest scheme to extract money from uninformed (if optimistic) customers.
Other believers in the singularity, who are more optimistic, hope it will arrive in time to save people still alive today. Of these people, the most outspoken is probably Ray Kurzweil. He believes that advances in medicine and biology will make it possible for people alive today to experience the singularity during their lifetimes (Kurzweil and Grossman 2004).
In view of the complexity of the techniques required either to upload minds or to repair frozen brains, and the enormous technological developments still required, my personal feeling is that the hopes of the two groups of people mentioned above are probably misplaced. Even if cryonics doesn’t damage brain tissue too deeply, future technologies will probably not be able to recover from frozen tissue the live-brain information that, in my view, will be indispensable if mind uploading ever becomes possible. And if one assumes that the singularity will not arrive before the end of the twenty-first century, the hope that it will arrive in time to rescue from certain death people still alive today rests implicitly on the assumption that advances in medicine will increase life expectancy by about one year every year, an assumption that is almost certainly too optimistic.
As we have seen, the jury is still out on whether or not the singularity will happen sometime in the not too distant future. However, even if the singularity doesn’t happen, super-human intelligences may still come into existence, by design or accident. Irving John Good’s idea of an intelligence explosion may lead to systems that are much more intelligent than humans, even if the process doesn’t create a technological singularity.
AI researchers, starting with Alan Turing, have often looked forward to a time when human-level artificial intelligence becomes possible. There is, however, no particular reason to believe that there is anything special about the specific level of intelligence displayed by humans. While it is true that humans are much more intelligent than all other animals, even those with larger brains, it is not reasonable to expect that human intelligence sits at the maximum attainable point on the intelligence scale.
Super-human intelligences could be obtained by greatly speeding up human-like reasoning (imagine a whole-brain emulator running at 100,000 times the speed of real time), by pooling together large numbers of coordinated human-level intelligences (natural or artificial), by supplying a human-level intelligence with enormous amounts of data and memory, or by developing some yet-unknown new form of intelligence. Such super-human intelligences would be, to human intelligence, as human intelligence is to chimpanzee-level intelligence. The survival of chimpanzees, like that of other animals, now depends less on them than on us, the dominant form of life on Earth. This is not because we are stronger or more numerous, but only because we are more intelligent and have vastly superior technologies.
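To get a feeling for the speedup mentioned above, a simple back-of-the-envelope calculation (the 100,000-fold figure is purely illustrative) shows how much subjective time such an emulation would experience per day of real time:

```python
# Back-of-the-envelope arithmetic for a whole-brain emulation running faster
# than real time. The speedup factor is an illustrative assumption.
speedup = 100_000
subjective_years_per_real_day = speedup / 365.25

print(f"One real day  ~ {subjective_years_per_real_day:.0f} subjective years")
print(f"One real hour ~ {speedup / 24:.0f} subjective hours "
      f"(~{speedup / 24 / 24 / 365.25:.1f} subjective years)")
```

At that rate, a century of subjective research time would elapse in less than a day of real time.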
If a super-human intelligence ever develops, will we not be as dependent on it as chimpanzees now are on us? The very survival of the human race may one day depend on how kindly such a super-intelligence looks upon humanity. Since such a super-human intelligence will have been created by us, directly or indirectly, we may have a chance of setting things up so that such an intelligence serves only the best interests of humanity. However, this is easier said than done.
In 1942, Isaac Asimov proposed the three laws of robotics that, if implemented in all robots, would safeguard human lives: that a robot may not injure a human being or, through inaction, allow a human being to come to harm; that a robot must obey orders given to it by human beings unless such orders would conflict with the first law; and that a robot must protect its own existence as long as such protection doesn’t conflict with the first law or the second. However, these laws now seem somewhat naive, as they are based on the assumption (common in the early years of AI research) that a set of well-defined symbolic rules would eventually lead to strong artificial intelligence. With the evolution of technology, we now understand that an artificially intelligent system will not be programmed, in minute detail, by a set of rules that fully defines its behavior. Artificially intelligent systems will derive their own rules of behavior from complex and opaque learning algorithms, statistical analyses, and complex objective functions.
A super-human intelligence might easily get out of control, even if aiming for goals defined by humans. For purposes of illustration, suppose that a super-intelligent system is asked to address and solve the problem of global warming. Things may not turn out as its designers expected if the super-intelligent system determines that the most effective solution is to take human civilization back to a pre-technological state or even to eradicate humans from the surface of the Earth. You may believe this to be a far-fetched possibility, but the truth is that a truly super-intelligent system might have goals, values, and approaches very different from those held by humans. A truly super-intelligent system may be so much more intelligent than humans, and so effective at developing technologies and finding solutions, that humanity will become enslaved to its aims and means. Furthermore, its motivations might be completely alien to us—particularly if it is a synthetic intelligence (see chapter 10) with a behavior very different from the behavior of human intelligence.
The problem is made more complex by the fact that the explosion of intelligence may happen in a relatively short period of time. Artificial intelligence has been under development for many decades, and mind-uploading technologies will probably take more than fifty years to develop. This might lead us to believe that an intelligence explosion, were it to happen, would take place over many decades. However, that may turn out not to be the case. A seed AI, as defined above, may be able to improve itself at a rate incomparably faster than the rate at which humans develop AI technologies. By using large amounts of computational resources and speeding up its computations in a number of ways, a human-level artificially intelligent system might increase its intelligence greatly over a period of days, hours, or even seconds. It all depends on factors that we cannot know in advance.
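As a toy illustration of this uncertainty, consider a crude model (invented here purely for illustration) in which each self-improvement cycle multiplies the system's capability by some factor and shortens the duration of the next cycle; the time needed to reach a given capability level then depends dramatically on parameters we have no way of estimating:

```python
# Crude toy model of recursive self-improvement. Every parameter is an
# invented illustration, not an estimate of any real system.
def days_to_reach(target, gain, accel, first_cycle_days):
    """Each cycle multiplies capability by `gain` and divides the
    duration of the next cycle by `accel`."""
    capability, cycle, total = 1.0, first_cycle_days, 0.0
    while capability < target:
        total += cycle
        capability *= gain
        cycle /= accel
    return total

# A millionfold capability gain under two equally arbitrary sets of assumptions:
print(days_to_reach(1e6, gain=1.5, accel=1.2, first_cycle_days=365))  # ~2200 days (years)
print(days_to_reach(1e6, gain=3.0, accel=4.0, first_cycle_days=10))   # ~13 days
```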
Some authors have argued that, before we develop the technologies that may lead to super-human intelligence, we should make sure that we will be able to control them and to direct them toward goals that benefit mankind. In his book Superintelligence, Nick Bostrom addresses these matters—including the potential risks and rewards—in depth, and proposes a number of approaches that would ultimately enable us to have a better chance at controlling and directing such a super-human intelligence. To me, however, it remains unclear exactly what can effectively be done to stop a truly super-human intelligence from setting its own goals and actions and, in the process, going against the interests of humanity.
Will digital minds become the servants or the masters of mankind? Only the future will tell.
The universe is composed of more than 100 billion galaxies. There are probably more than 200 billion stars in the Milky Way, our galaxy. If the Milky Way is a typical galaxy, the total number of stars in the universe is at least 10²². Even by the lowest estimates, there are more than a billion trillion stars in the universe—a number which is, in the words of Carl Sagan, vastly larger than the number of grains of sand on Earth. We know now that many of these stars have planets around them, and many of those planets may have the conditions required to support life as we know it. We humans have wondered, for many years, whether we are alone in the universe. It is hard to conceive that such a vast place is now inhabited by only a single intelligent species, Homo sapiens.
Using the Drake Equation (proposed in 1961 by the radio astronomer Frank Drake to stimulate scientific discussion about the search for extraterrestrial intelligence), it is relatively easy to compute an estimate of the number of intelligent civilizations in our galaxy with long-range communication ability:

N = R* × fp × ne × fl × fi × fc × L
where N is the number of civilizations in the Milky Way whose electromagnetic emissions are detectable, R* is the rate of formation of stars suitable for the development of intelligent life, fp is the fraction of those stars with planetary systems, ne is the number of planets per solar system with an environment suitable for life, fl is the fraction of suitable planets on which life actually appears, fi is the fraction of life-bearing planets on which intelligent life emerges, fc is the fraction of civilizations that develop a technology that releases detectable signs of their existence into space, and L is the length of time such civilizations release detectable signals into space. Some of the factors, such as the rate of star formation and the fraction of stars with planets, are relatively easy to estimate from sky surveys and the study of exoplanets. The other factors are more difficult or even impossible to estimate accurately.
Three of the factors in the Drake Equation are particularly hard to estimate with any accuracy. One is the fraction of planets amenable to life that actually develop life. There is no reliable way to estimate this number, although many researchers believe it highly probable that life will develop on a planet if the right conditions exist. This belief is supported in part by the fact that life appeared on Earth shortly (in geological terms) after the right conditions were present, which is seen as evidence that many planets that can support life will eventually see it appear. However, the anthropic principle gets somewhat in the way of this argument. After all, our analysis of the history of the appearance of life on Earth is strongly biased, since only planets that have developed life have any chance of supporting the intelligent beings making the analysis.
The second hard-to-estimate factor is the fraction of life-bearing planets that actually develop intelligent life. On the basis of the idea that life will eventually develop intelligence as an effective tool for survival, some estimates of this factor propose a value close to 1. The argument in the other direction is that the value must be very low, because probably more than a billion species have existed on Earth and only one of them developed intelligence. This argument is not strictly correct, because there were multiple species in the genus Homo, all of them presumably rather intelligent, although all but one are now extinct (Harari 2014). Still, considerable uncertainty remains about the right value for this factor, which could lie anywhere between 0 and 1. As Caleb Scharf explains in his captivating book The Copernicus Complex, inhabiting the only known planet that has evolved intelligent life makes it particularly difficult to obtain accurate and unbiased estimates for these two factors.
The third difficult-to-estimate factor is the length of time a civilization lasts and emits communication signals into space. We don’t have even one example that could enable us to estimate the duration of a technological civilization, nor do we have physical principles to guide us. There is no way to know whether our technological space-faring civilization will last 100 years or 100 million years. Historically, specific civilizations have lasted between a few decades and a few hundred years, but it is hard to argue that history is a good guide in this respect, given the technological changes that took place in the twentieth century. In fact, I don’t believe we are equipped with the right tools to reason about the future of technological civilizations lasting for millions of years. There is simply no reasonable way to extrapolate, to that length of time, our experience of a technological civilization that has been in place for only a few hundred years.
If one assumes a reasonable probability that life develops whenever the right conditions are met on a planet and a conservative but not overly pessimistic value for fi (the fraction of life-bearing planets that develop intelligent life), such as 1 percent (Scharf 2014), the crucial factor determining the number of living technological civilizations in the galaxy is, in fact, L, the length of time a technological civilization endures. If one assumes a value of only a few hundred years, it is highly likely that there are only a few technological civilizations, and perhaps only one, in the galaxy. If, on the other hand, a technological civilization lasts for millions of years, there may be hundreds of thousands of such civilizations in the galaxy.
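A rough numerical sketch of this sensitivity (using illustrative values in the spirit of the assumptions above, not measured quantities) makes the point concrete:

```python
# Drake-equation estimate of the number of detectable civilizations in the Milky Way.
# All parameter values are illustrative assumptions, not measured quantities.
def drake(R_star=1.5, f_p=1.0, n_e=0.2, f_l=0.5, f_i=0.01, f_c=0.5, L=300):
    return R_star * f_p * n_e * f_l * f_i * f_c * L

print(drake(L=300))         # civilizations last a few hundred years -> ~0.2 (perhaps none but us)
print(drake(L=10_000_000))  # civilizations last ten million years   -> ~7,500
```

With everything else held fixed, the estimate swings from effectively zero to thousands of civilizations purely on the assumed lifetime L.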
This leads us to a difficult question regarding extraterrestrial civilizations, known as Fermi’s Paradox: Where are they? The question was posed in 1950 by Enrico Fermi, who asked why, if many advanced extraterrestrial civilizations exist in our galaxy, we have never seen evidence of them.
Answers to this question have to be based on the Rare Earth Hypothesis, on the argument that intelligent life is a very uncommon occurrence on life-bearing planets, or on the argument that communicating civilizations last only for short spans of time. Each of these three arguments supports a very low value for at least one of the difficult-to-estimate factors in the Drake Equation.
The Rare Earth Hypothesis (Ward and Brownlee 2000) states that the origin of life and the evolution of biological complexity require a highly improbable combination of events and circumstances and are very likely to have occurred only once, or only a small number of times, in the galaxy. This hypothesis leads to very small values of the factors ne and fl (the number of planets per star that support life and the fraction of those that actually develop life, respectively). If the Rare Earth Hypothesis is true, then there are only a few planets in the galaxy, maybe only one, that have developed life. There are many reasons why planets similar to Earth may be very rare. These reasons include the long-term instability of planetary orbits and solar systems, fairly frequent planetary cataclysms, and the low likelihood that all the things necessary to support life are present on a particular planet.
I have already presented some of the discussion about fi, the probability that a life-bearing planet will develop intelligent life. Although the answer to Fermi’s Paradox may lie in the fact that there are many life-bearing planets but few with intelligent life, most see this possibility as unlikely, since it would imply that the universe is teeming with life yet intelligent life evolved only on Earth.
The third explanation is based on the idea that civilizations tend not to last long enough to communicate with one another, since they remain viable for only a few hundred years. This explanation may have some bearing on the central topic of this book. There are a number of reasons why a technological civilization that communicates with the outside world and develops space travel may last only a few hundred years.
One possible explanation is that such a civilization destroys itself, either because it exhausts the resources of the planet, because it develops internal tensions it cannot handle, or because it develops a hostile super-intelligence. Such an explanation holds only if one believes that the collapse of a civilization leads to a situation in which the species that created it becomes extinct or permanently pre-technological. This doesn’t seem very likely, in view of our knowledge of human history, unless such an event is caused by an unprecedented situation, such as the creation of a hostile super-intelligence not interested in space communication.
The alternative explanation is that such a civilization doesn’t become extinct, but evolves to a more advanced state in which it stops communicating with the outside world and doesn’t develop space travel. Paradoxically, digital minds may provide an answer to Fermi’s Paradox. It may happen that all sufficiently advanced civilizations end up developing mechanisms for the creation of virtual realities so rich that they build their own internal, synthetic universes and no longer consider space travel necessary or even desirable. Virtual realities, created in digital computers or other advanced computational substrates by sufficiently advanced civilizations, may become so rich and powerful that even the most fascinating of all explorations, interstellar travel, comes to be thought irrelevant and uninteresting.
If such an explanation holds true, it is possible that many technological civilizations do exist in the galaxy, living in virtual realities, hives of collective minds running in unthinkably powerful computers, creating their own universes and physical laws. In fact, it is even possible that we live inside one such virtual reality, which we call, simply, the universe.