IT’S THE LAW
IN THE APRIL 19, 1965, ISSUE OF ELECTRONICS MAGAZINE, ENGINEER Gordon Moore, later cofounder of Intel Corporation, wrote the following prophetic words about advances to be expected in semiconductor technology:
The complexity for minimum component costs has increased at a rate of roughly a factor of two per year…. Certainly over the short term this rate can be expected to continue, if not to increase…. [B]y 1975, the number of components per integrated circuit for minimum cost will be 65,000. I believe that such a large circuit can be built on a single wafer.
A few years later, semiconductor pioneer and Caltech professor Carver Mead dubbed this statement “Moore’s law,” a term that techno-futurists and the media have now enshrined as the definitive statement underpinning technological advancement in the age of machines. Subsequent mutations, modifications, and tinkering have led to the folk wisdom that what Moore said is that transistor packing/computer memory capacity/computer performance per unit cost/…will “double every 18 months.” Moore actually said no such thing. What the statement quoted above does assert is the general claim that something absolutely central to digital technology improvement will increase at an exponential rate—with no increase in cost. Moreover, this rate will continue for at least several decades, if not longer.
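As a quick bit of arithmetic to show why the difference between those doubling periods matters (the formula below is standard exponential-growth bookkeeping, not anything Moore wrote), a quantity that doubles every period T grows as

```latex
\[
N(t) = N_0 \cdot 2^{\,t/T}.
\]
```

Over a single decade, doubling every year (T = 1) gives ten doublings, roughly a thousandfold increase, while doubling every 18 months (T = 1.5) gives fewer than seven doublings, only about a hundredfold. The folk version and the original claim thus differ by a factor of ten in where you land after just ten years.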
Despite the rather grandiose labeling of Moore’s observation as a “law,” there is actually nothing at all remarkable about what he claimed. In fact, it is a statement that applies with equal force to the overall life cycle of just about any new technology. When a technology is in its infancy, struggling to shove the current competition off center stage, its market share is very small. As the newcomer gains adherents and begins to make serious inroads into the market, the rate of growth increases exponentially. It then peaks and starts the downhill slide leading to this new technology itself being replaced by the “next big thing.”
Many studies have shown that the life cycle represented by rate of growth (say in number of units of the product sold per month) obeys very closely the well-known bell-shaped curve of probabilities discussed in Part I. And if we measure the cumulative growth of the technology—that is, the fraction of its ultimate total market share achieved over the course of time—that cumulative market share follows the familiar S-shaped curve governing many living and lifelike processes. The high-growth part of the S-curve displays exactly the exponentially increasing growth pattern claimed by Moore for semiconductors.
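For readers who like to see the shape behind the words, the cumulative S-curve and its bell-shaped growth rate are conventionally written as the logistic function; the symbols below (ultimate market share K, growth rate r, midpoint t_0) are the standard textbook notation rather than anything specific to Moore:

```latex
\[
S(t) = \frac{K}{1 + e^{-r(t - t_0)}},
\qquad
\frac{dS}{dt} = r\,S(t)\left(1 - \frac{S(t)}{K}\right).
\]
```

Early in the life cycle, when S is small compared with K, the second equation reduces to dS/dt ≈ rS, which is pure exponential growth of exactly the kind Moore observed; the rate dS/dt then peaks at t_0 and falls away as the market saturates, tracing out the bell-shaped curve.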
Even though Moore’s law is neither a law nor an extraordinary insight into the growth of a new technology, it has been extremely important in the historical development of digital innovations, serving as a kind of goal for an entire industry. The reason is that the research and marketing arms of the industry’s major players actually believed the forecasts coming from “the law.” That belief drove them to furiously develop new products aimed at attaining the predicted performance capabilities, since they were convinced that their competitors would soon produce such products if they did not. So in a certain sense one can think of Moore’s law as a self-fulfilling prophecy. An obvious question is, What are the limits of this principle?
A good place to start in addressing this question is with Gordon Moore himself, who stated in a 2005 interview that the law cannot be extended indefinitely. At that time, he said, “It can’t continue forever. The nature of exponentials is that you push them out and eventually disaster happens.” In the same interview, Moore also noted that “Moore’s law is a violation of Murphy’s law. Everything gets better and better.” But “forever” and “a long time” are very different things, and so researchers, such as MIT quantum computing expert Seth Lloyd, see the limit as being on the order of six hundred years!
In this spirit, speculative futurists like inventor Ray Kurzweil and mathematician, computer scientist, and science-fiction author Vernor Vinge have conjectured that continuation of Moore’s law for only another few decades will bring on a so-called technological singularity. In his book The Singularity Is Near (2005), Kurzweil suggests there are six epochs to the process of evolution, starting with the emergence of information in atomic structures and moving to the state we’re in today in Epoch 4, where technical products are able to embody information processes in hardware and software designs. Kurzweil believes that we are currently at the forefront of Epoch 5, which involves the merger of machine and human intelligence. Put another way, this is the point where computer hardware and software manage to incorporate the methods of biology—primarily self-repair and replication. These methods are then integrated into the human technology base. The “singularity”—Epoch 6—occurs when the knowledge embedded in our brains is merged together with the information-processing capability of our machines.
In a 1993 article, Vinge called the Singularity “a point where our old models must be discarded and a new reality rules.” Since we humans have the ability to internalize the world and ask “What if?” in our heads, we can solve problems thousands of times faster than evolution can do it with its shotgun approach of trying everything and sorting out what works from what doesn’t. By being able to create our simulations at a vastly greater speed than ever before, we will enter a regime so different that it will be tantamount to throwing away all the old rules overnight.
It’s of more than passing interest to see that Vinge credits the great visionary John von Neumann for seeing this possibility in the 1950s. Von Neumann’s close friend, mathematician Stan Ulam, recalls in his autobiography a conversation the two of them had centering on the ever-accelerating progress of technology and changes in human life. Von Neumann argued that the rate of technological progress gives rise to an approaching “singularity” in human history beyond which human affairs as we know them cannot continue. Even though he didn’t seem to be using the term “singularity” in quite the same way as Vinge, who refers to a kind of superhuman intelligence, the essential content of the statement is the same as what today’s futurists have in mind: an ultraintelligent machine beyond any hope of human control.
Radical futurists claim that this merger between the human mind and machines will enable humankind to surmount many problems—disease, finite material resources, poverty, hunger. But they warn that this capability will also open up unparalleled possibilities for humans to act on their destructive impulses. For those readers old enough to remember the golden age of science-fiction films, this is all eerily reminiscent of the marvelous 1956 classic Forbidden Planet, in which intrepid intergalactic explorers discover the remains of the Krell, an ancient civilization that possessed the power of creation by pure thought alone. The Krell apparently vanished overnight when the destructive power of their alien Ids was given free rein. If you happened to have missed the film, a reading of Shakespeare’s The Tempest, the play on which the film is loosely based, makes the same point.
It’s important to note at this juncture that Kurzweil’s argument does not depend on Moore’s law remaining in effect indefinitely, at least not in its original form pertaining just to semiconductors. Rather, he believes that some new kind of technology will replace the use of integrated circuits, and the exponential growth embodied in Moore’s law will then start anew with this new technology. To distinguish this generalized version of Moore’s law, Kurzweil has coined the term “the law of accelerating returns.”
The type of X-event I focus on in this chapter involves the emergence of an “unfriendly” technological species whose interests conflict with those of us lowly humans. In such a planetary battle, the humans might win out. But it’s not the way to bet. So let’s look into the arguments for and against this type of conflict, and see if we can get some insight as to why the radical futurists think we should be wondering and worrying about these matters at all.
THE GNR PROBLEM
THERE ARE THREE RAPIDLY DEVELOPING TECHNOLOGIES THAT CONCERN most “singularity theorists” like Kurzweil, Vinge, and others. They are genetic engineering, nanotechnology, and robotics, which taken together form what is often termed “the GNR problem.” Here’s a bird’s-eye view of each.
All three of these threats give rise to the same apocalyptic vision: a technology run amok, developing beyond human control. Whether it’s genetically engineered organisms pushing nature’s creations off center stage, a plague of nano-objects vacuuming up matter to leave waste products—a kind of “gray goo”—coating the entire planet, or a race of robots breeding like hyperactive rabbits to force humans out of the evolutionary competition, the common factor underwriting each of these dark visions is the heretofore unseen ability of engineered technology to replicate. Killer plants breeding copies of themselves, nano-objects soaking up whatever resources they need to make more and more nano-objects, and robots building more robots all lead to the same unhappy end for humans: a planet that can no longer sustain human life or, what’s worse, a planet on which we humans no longer control our destiny but have been usurped by objects generated by our own technology.
Up to now, a potentially dangerous technology like a nuclear bomb could be used just once: build it, use it, and then we humans have to build another one. Technologists argue that genetically engineered organisms, nano-objects, and robots will be free of this constraint. They will be capable of self-reproduction at a speed and scale never before seen on this planet. When that crossover point is reached, the curtain starts to fall for humankind as the dominant species on the planet. Or so goes the scenario painted by techno-pessimists like Bill Joy, cofounder of Sun Microsystems, who argued in 2000 that we should impose severe restrictions on research in these areas in order to short-circuit this kind of technological “singularity.” I’ll take up those arguments, pro and con, a bit later.
Now let’s look at one of the more interesting threats of the foregoing type, a plague of robots, as a viable candidate for relegating us humans to the scrap heap of history.
INTELLIGENT MACHINES
ALMOST FROM THE VERY INCEPTION OF THE MODERN COMPUTER ERA in the late 1940s, the idea of the computer as a “giant brain” has been a dominant metaphor. In fact, early popular accounts of computers and what people claimed they would be able to do refer to them as “electronic brains.” This metaphor gained currency following a now-legendary meeting at Dartmouth College in 1956 on the theme of what we now call “artificial intelligence,” the study of how to make a computer think just like you and me. A few years earlier, British computer pioneer Alan Turing had published an article titled “Computing Machinery and Intelligence,” in which he outlined the case for believing that it would be possible to develop a computer that could think, human style. In this article, Turing even suggested a test, now called the Turing test, for determining whether a computer is indeed thinking like you and me. The Turing test says the computer is thinking like a human if an interrogator, through a sequence of blind interrogations in which the object being questioned cannot be seen, cannot reliably tell whether the respondent is a human or a machine. What is relevant here is that for a race of robots to take over the world, they must have some way of processing information about the physical world received from their sensory apparatus. In short, they need a brain.
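To make the pass criterion concrete, here is a purely illustrative Python sketch of the tallying involved; the trial count, the stand-in conversation, and the simple-minded judge are my assumptions, not part of Turing’s proposal:

```python
import random

def transcript_of_chat_with(respondent_kind):
    # Stand-in for an actual text-only conversation with the hidden respondent.
    return f"...some dialogue with a {respondent_kind}..."

def naive_judge(transcript):
    # A judge with no real ability to tell the two apart is reduced to guessing.
    return random.choice(["human", "machine"])

def run_blind_trials(judge, n_trials=100):
    """Each trial hides either a human or a machine behind the terminal;
    the judge must decide which it is from the conversation alone."""
    correct = 0
    for _ in range(n_trials):
        truth = random.choice(["human", "machine"])
        guess = judge(transcript_of_chat_with(truth))
        correct += (guess == truth)
    return correct / n_trials

# Accuracy near 0.5 means the judge cannot reliably tell machine from human.
print(run_blind_trials(naive_judge))
```

If the judge’s accuracy hovers around 50 percent over many such blind trials, the interrogator is doing no better than guessing, and by Turing’s criterion the machine passes.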
The issue is whether technology has come to the point at which a brain sufficient for the job can be put together from the kind of computing equipment currently on offer, or soon to be on offer. (Note: It is not required that the robot be able to solve all the same problems that humans encounter. Nor is it necessary that the robot think in the same way as a human. All that’s needed is a brain good enough to give the robot a survival advantage in competition with humans.) But for the sake of comparison, let’s confine our attention to the question of how much computing power we need in order to match or exceed the computational capacity of the human brain.
First we consider the brain. From numerous studies focused on estimating the processing required to simulate specific brain functions like visual perception, auditory functions, and the like, we can extrapolate the computing requirements for the particular part of the brain involved to the entire brain by just scaling up. So, for instance, estimates suggest that visual computation in the retina requires roughly 1,000 MIPS (million instructions per second), or about 10⁹ calculations per second (cps). As the human brain is about 75,000 times heavier than the neurons doing this processing (about one-fifth of the entire retina, weighing approximately 0.02 grams), we arrive at an estimate of roughly 10¹⁴ cps for the entire brain. Another estimate of the same sort can be obtained from examination of the auditory system. It leads to a figure of 10¹⁵ cps for the entire brain. All other such exercises have arrived at more or less this same number as a reasonable estimate of the power of a single human brain.
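Written out, the scaling argument is a one-line calculation using the figures just quoted:

```latex
\[
\underbrace{10^{9}\ \text{cps}}_{\text{retinal processing}}
\;\times\;
\underbrace{7.5\times 10^{4}}_{\text{brain-to-retina mass ratio}}
\;\approx\; 7.5\times 10^{13}
\;\approx\; 10^{14}\ \text{cps for the whole brain.}
\]
```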
How does this compare to a computer? Today’s personal computer provides about 10⁹ cps. By the law of accelerating returns, we can expect this figure to be stepped up to that of the brain in about fifteen years—or less! So much for processing. What about memory?
Estimates indicate that a human who is expert in some domain such as medicine, mathematics, law, or chess playing can remember around ten million “chunks” of information. These chunks consist of pieces of specific knowledge, along with various patterns specific to the domain of expertise. Further, experts say that each such chunk requires about one million bits to store. So the total storage capacity of the brain comes to around 10¹³ bits. Estimating memory requirements in the brain by counting connections between neurons leads to a larger figure of 10¹⁸ bits of memory for a brain.
According to the technological growth curves for computer memory, we should be able to buy 10¹³ bits of memory for less than one thousand dollars in about ten years. So it’s reasonable to expect that all the memory needed to match that in the brain will be readily available not later than around the year 2020.
Putting the two hardware estimates together, we see that we’re within twenty years of being able to match the brain’s processing and memory capacity with a computing machine costing around one thousand dollars.
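Here is a minimal Python sketch of the kind of back-of-the-envelope projection that lies behind dates like these; the doubling times and the assumed baseline of memory that one thousand dollars buys today are illustrative choices of mine, while the brain and PC figures come from the estimates quoted above:

```python
import math

def years_to_reach(target, current, doubling_time_years):
    """Years until `current` grows to `target`, assuming steady exponential doubling."""
    return math.log2(target / current) * doubling_time_years

# Processing: the text puts the brain at roughly 10**14 to 10**15 cps,
# and today's personal computer at about 10**9 cps.
print(years_to_reach(1e14, 1e9, 1.0))   # about 17 years if capacity doubles every year
print(years_to_reach(1e14, 1e9, 1.5))   # about 25 years at an 18-month doubling time

# Memory: the text's target is 10**13 bits for $1,000; the 10**11-bit baseline for
# what $1,000 buys today is my assumption, chosen purely for illustration.
print(years_to_reach(1e13, 1e11, 1.5))  # about 10 years, in line with the text's estimate
```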
Now, what about software? Matching the hardware capacity of the human brain is likely to happen within the next decade or so. But the “killer app” arrives when we can combine the computer’s speed, accuracy, and unerring memory with human-level intelligence (i.e., software). In order to do this, we have to effectively “reverse engineer” the brain, capturing its software in the hardware of tomorrow.
When it comes to simulating the brain, the first thing we have to note is all the many ways the human brain differs from a computer. Here are just a few of the more important distinctions:
Analog Versus Digital: A modern computer is essentially a digital device that relies on great speed to turn switches on and off at a dazzling rate. The brain, on the other hand, uses a combination of digital and analog processes for its computations. While in the early days of computing people set great store by the seemingly digital aspect of the brain’s neurons, we have since found that human brains are actually mostly analog devices using chemical gradients (neurotransmitters) to open and close their neuronal switches. So to argue for a similarity between a computer switching circuit and one in the brain is a big stretch, to say the least.
Speed: The brain is slow; a computer is fast. In fact, the typical cycle time of even a slow computer is millions of times shorter than the cycle time of a neuron, which is about twenty milliseconds. Thus, the brain has only a few hundred cycles in which to do its job of recognizing a pattern; the rough arithmetic following this list makes the point explicit.
Parallel Versus Serial: The brain has trillions of connections linking its neurons. This high degree of connectivity allows lots of computations to be carried out in parallel. This is totally unlike almost all digital computers, which do one operation at a time in a serial fashion.
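Putting rough numbers on the speed gap (the one-nanosecond computer cycle and the one-to-five-second recognition time are illustrative assumptions of mine; the twenty-millisecond neuron cycle is the figure given above):

```latex
\[
\frac{\tau_{\text{neuron}}}{\tau_{\text{computer}}} \approx \frac{2\times 10^{-2}\ \text{s}}{10^{-9}\ \text{s}} = 2\times 10^{7},
\qquad
\frac{\text{recognition time}}{\tau_{\text{neuron}}} \approx \frac{1\text{--}5\ \text{s}}{2\times 10^{-2}\ \text{s}} \approx 50\text{--}250\ \text{serial steps}.
\]
```

So whatever the brain does when it recognizes a face or a word, it does it in at most a few hundred sequential neural steps, which is possible only because so much of the work is carried out in parallel.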
These are but a few of the features distinguishing a human brain from a computer. But with the computing power that’s just around the corner, we will still be in a position to simulate the brain without actually trying to fabricate it. What’s needed for the next stage of evolution is for computers to be functionally equivalent to the brain, not to duplicate its precise physical structure.
All this having been said, simulating a human brain functionally inside a computer is not quite the same thing as simulating a human being. Or is it? Of course, a disembodied suprahuman brain might easily displace carbon-based humans as the dominant “thought processors” on the planet. But even a disembodied brain needs somehow to sustain its existence in some type of material medium. Nowadays that medium is the motherboard, keyboard, monitor, hard drive, RAM memory chips, and all the other hardware of your computer. Tomorrow, who knows? But what we do know is that there will have to be some type of physical embodiment of the intelligence. This means a sensory apparatus for accessing the world outside the intelligence, as well as some sort of boundary separating that intelligence from what is “outside.” So much for computers. What about robots?
A BRAIN-IN-A-VAT VERSUS ROBBY, THE ROBOT
AS I WRITE THESE WORDS IN MY OFFICE AT HOME, IN THE NEXT ROOM a robot called a “Roomba” is moving about diligently vacuuming the carpet and floors. My hat’s off to the design team at iRobot, Inc., who developed this little gizmo, as it does an excellent job at a task that I hate—exactly what most of us wish for in a robot. Basically, what we usually have in mind is some sort of automaton that will do our bidding with no questions asked, relieving us of various chores and duties that need doing (like vacuuming) but that in truth are pretty tiresome and boring. But what we certainly do not have in mind is a collective of intelligent robots who think that perhaps the tables should be turned and that humans should be vacuuming their floors instead of the other way around. What are the possibilities of that flip-flop?
Just before I set the Roomba loose in my living room, a friend and I watched the classic 1956 sci-fi film Forbidden Planet that I mentioned earlier. Although the technology envisioned by the film’s producers fifty years ago is now a bit antiquated, the story and its moral are as fresh as this morning’s croissants from the bakery on the corner.
Although the F/X specialists of the 1950s were not quite up to today’s standards, the portrayal of Robby, the Robot, a piece of machinery that serves humans as driver, cook, transport device, and overall factotum, is wondrous. Even my teenage mind was fascinated by the possibilities when I first saw the film, and I marveled at Robby’s capability to learn new tasks and comprehend human instructions. Moreover, at the story’s denouement Robby remains loyal to his human masters: his wiring short-circuits when he is given instructions to harm a human.
The question raised for us by Robby is whether a robot with those almost superhuman properties can be guaranteed to follow something like Isaac Asimov’s laws of robotics. In about 1940, Asimov proposed the following laws that a robot must adhere to in order to remain a servant to humans, not an evolutionary competitor.
First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
These laws of robotics stipulate that robots are to be slaves to humans (the Second Law). However, this role can be overridden by the higher-order First Law, which prevents robots from injuring a human, either through their own actions or by following instructions given by a human. It prevents them from continuing any activity when doing so would lead to human injury, and it prevents the machines from being used as tools for, or assistants in, various types of physical assault on humans.
The Third Law generates a survival instinct. So in the absence of conflict with a higher-order law, a robot will act to protect its own existence.
Roger Clarke and others have noted that under the Second Law a robot appears to be required to comply with a human order to (1) not resist being destroyed or dismantled, (2) destroy itself, or (3) (within the limits of paradox) dismantle itself. In various stories, Asimov notes that an order to self-destruct does not have to be obeyed if obedience would result in harm to a human. In addition, a robot would generally not be precluded from seeking clarification of the order.
Another gap in Asimov’s three laws, and one that is very important for our purposes here, is that the laws refer to individual human beings. Nothing is said about robots taking actions that would harm a group, or in the extreme case, humanity as a whole. This leads to the following:
Zeroth Law: A robot may not injure humanity, or through inaction allow humanity to come to harm.
These “laws” of robotic good citizenship impose the kind of severe constraints needed to keep a band of intelligent robots in check. Given the propensity of intelligent objects to evolve so as to enhance their own survival, it seems unlikely that robots of the sort we’re envisioning here will be satisfied to serve humans once they develop the capacity to serve themselves. In the 2004 film I, Robot, which is loosely based on Asimov’s 1950 collection of short stories, robots reinterpret the laws and logically conclude that the best way to protect humans is to rule them. The paradox here is that to be really useful, robots have to be able to make their own decisions. But as soon as they have the capacity to do this, they acquire the ability to violate the laws.
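To see concretely why the precedence among the laws matters, here is a minimal, purely illustrative Python sketch of a controller that ranks candidate actions by the laws in priority order; the class, the function names, and the crude yes/no scoring of “harm” are my assumptions, not anything found in Asimov:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_humanity: bool   # violates the Zeroth Law
    harms_a_human: bool    # violates the First Law
    obeys_order: bool      # satisfies the Second Law
    preserves_self: bool   # satisfies the Third Law

def choose_action(candidates):
    """Pick the candidate action that best respects the laws, in priority order."""
    # Hard constraints: the Zeroth and First Laws rule actions out entirely.
    lawful = [a for a in candidates if not a.harms_humanity and not a.harms_a_human]
    if not lawful:
        return None  # no permissible action; do nothing
    # Among lawful actions, prefer obedience (Second Law), then self-preservation (Third Law).
    return max(lawful, key=lambda a: (a.obeys_order, a.preserves_self))

options = [
    Action("follow order to attack a person", False, True, True, True),
    Action("refuse the order and stand by", False, False, False, True),
]
print(choose_action(options).name)   # -> "refuse the order and stand by"
```

Even in this toy version the paradox in the text shows through: the controller itself has to judge what counts as harm and obedience, and whatever machinery makes those judgments is exactly the machinery that could make them badly.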
Now back to the vexing question: Will robots take over the world? The short answer is…a definite maybe!
One of my favorite rejoinders to the claims of some futurists about a robotic takeover in the next few decades is that the bodies of robots will be made of mechanical technology, not electronics. And mechanical engineering technology is simply not developing at the same furious rate as computers. There is no Moore’s law in the mechanical realm. To illustrate, if automobiles had developed at the same pace as computers, we would now have cars smaller than a matchbox, traveling at supersonic speeds, and transporting a trainload of passengers while consuming a teaspoonful of gasoline. In short, size matters when it comes to mechanical technology, and the rule is the bigger it is, the more powerful it is. Computers are just the opposite.
So even if we have robots hundreds of times more intelligent than we are in a few decades, humans will still maintain a vast mechanical superiority. Humans will be able to knock over such a robot without breaking a sweat, climb stairs and trees more easily than any robot on wheels could hope to do, and generally outperform robots on almost any task requiring the delicate manipulative capabilities that we have in our hands and fingers.
If I were a betting man, I’d put my money on the above argument for human superiority in the mechanical dexterity department. And this despite the fact that we already have robots doing surgical operations by remote control, along with robotic soldiers carrying out missions in regions infested with land mines, poisonous gases, and other hazards to humans. The fact that robots can execute such tasks is indeed impressive. But these are very special-purpose devices, just like the Roomba vacuum, designed to perform a very special job—and only that job.
Humans, on the other hand, have a far greater capacity to deviate from the planned program when circumstances don’t quite fit into the predefined framework that the robot’s “brain” expects to encounter. Of course, you might argue that when the robot brain begins to surpass the human brain in its information-processing capabilities and in its ability to adapt to unanticipated circumstances, the game may indeed be up for us humans. With this ambiguous prospect in mind for a robotic takeover, let’s return to the question of the Singularity and examine when it might happen.
THE SINGULARITY
IN THE 1993 PAPER BY VINGE THAT SPARKED OFF THE VOLUMES OF debate about the Singularity, several paths are sketched that could lead to the technological creation of a transhuman intelligence. To paraphrase Vinge, these include:

The development of computers that are “awake” and superhumanly intelligent.

Large computer networks, together with their users, “waking up” as a superhumanly intelligent entity.

Computer-human interfaces becoming so intimate that their users may reasonably be considered superhumanly intelligent.

Biological science finding ways to improve upon the natural human intellect.
The first three elements on this list involve improvement in computer technology, while the last is primarily genetic. And all may well rely on nanotechnological developments for their realization. So each facet of the GNR problem discussed earlier makes its appearance in the unfolding of the Singularity. And once such an intelligence is “alive,” it’s likely that it will lead to an exponential runaway in development of even greater intelligences.
From a human point of view, the consequences of the emergence of these superhuman intelligences are incalculable. All the old rules will be thrown away, perhaps in just a few hours! Developments that previously were thought to take generations or millennia may unfold in a few years—or less.
For the next decade or so we probably won’t notice any dramatic movement toward the Singularity. But as hardware develops to a level well beyond natural human abilities, more symptoms of the Singularity will become evident. We will see machines take over high-level jobs, such as executive management, that were previously thought of as the province of humans. Another symptom will be that ideas spread far more quickly than ever before. Of course, we already rely on computers for a bewildering array of tasks, as I described, solely in the context of communication, in the earlier chapter on the Internet. But even in such a mundane matter as writing this book, I occasionally shudder when I think of what it was like just three decades ago when I wrote my first book—literally by hand! That thought is but a distant early-warning sign of things to come as we approach the Singularity.
And what of the moment when the Singularity actually arrives? According to Vinge, it may seem as if our artifacts simply “wake up.” The moment we cross the threshold of the Singularity we will be in the Posthuman era.
The most crucial point here is whether the Singularity is actually possible. If we can convince ourselves that it can indeed happen, then nothing short of the total destruction of human society can stand in its way. Even if all the governments of the world were to try to prevent it, researchers would still find ways to continue making progress to the goal. In short, if something can happen, it will happen—regardless of what governments, or societies as a whole, might think about it. That’s the natural way of human curiosity and inventiveness. And no amount of political bombast or hand-wringing morality is going to change that state of affairs.
So assuming the Singularity can take place, when is the “crossover” going to occur? There seems to be a reasonably uniform consensus on the answer: within the next twenty to thirty years. The technology futurist Ray Kurzweil has been even more specific. In his book The Singularity Is Near, a kind of bible of Singularitarians, he states:
I set the date for the Singularity—representing a profound and disruptive transformation in human capability—as 2045. The nonbiological intelligence created in that year will be one billion times more powerful than all human intelligence today.
That’s about as definite as you can get in the forecasting business!
For what it’s worth, even though I firmly believe that there will be a Singularity, I’m personally rather skeptical about the timing aspect of the whole business. The arguments from Moore’s law, accelerating returns, human curiosity, and the like leading to this “grand” event in a couple of decades strike me as rather reminiscent of the kinds of pronouncements made in the early 1950s by AI advocates about what computers would (or wouldn’t) do in the years to come. Some of those claims included becoming the world chess champion within ten years, translating languages at the skill level of first-rate human translators in the same time frame, becoming electromechanical butlers serving dry martinis after a hard day at the office, and so on. Well, some of these goals actually were achieved, such as a computer (Deep Blue) beating the world chess champion (in 1997, not in the 1960s, and by using methods totally unlike what a human player would employ), while others are as far away as ever from being achieved (high-quality, human-level language translation). In fact, the whole line of argument by the Singularitarians is a familiar one in the futurology business: extrapolate current trends and ignore the possibility of any surprises getting in the way. But, of course, this objection only puts off the day of accounting, and I strongly suspect we will see the kind of superhuman intelligence the Singularity calls for before the end of this century.
ADDING IT ALL UP
THE COMPLEXITY INCREASE IN THE WORLD OF MACHINES IS RAPIDLY outpacing that of the human side of the ledger. In contrast to some of the complexity gaps I’ve spoken of earlier, such as an EMP attack or an Internet crash, the Singularity is an X-event whose unfolding time is decades, not minutes or seconds. But its impact will be dramatic and irreversible, pushing humans off center stage in the grand evolutionary drama of life on this planet.
So there it is. Arguments, pro and con, for the end of the Human era. In the final analysis, it seems a good bet that the GNR problem will indeed lead to the kind of transcendent intelligence that the Singularity will usher in.