At some stage therefore we should have to expect the machines to take control.

—ALAN TURING

I’d be very surprised if anything remotely like this happened in the next one hundred to two hundred years.

—DOUGLAS HOFSTADTER

7     ROBOTS IN SPACE

The year is 2084. Arnold Schwarzenegger is an ordinary construction worker who is troubled by recurring dreams about Mars. He decides that he must venture to the planet to learn the origin of these dreams. He witnesses a Mars with bustling metropolises, gleaming glass-domed buildings, and extensive mining operations. An elaborate infrastructure of pipes, cables, and generators supplies the energy and oxygen for thousands of permanent residents.

Total Recall offers a compelling vision of what a city on Mars might look like: sleek, clean, and cutting-edge. However, there’s one small problem. Although these imaginary cities on Mars make great settings for Hollywood, building them with our current technologies would, in practice, break the budget of any NASA mission. Remember that initially, every hammer, every piece of paper, and every paper clip would have to be shipped to Mars, which is tens of millions of miles away. And if we travel beyond the solar system to the nearby stars, where swift communication with Earth is impossible, the problems only multiply. Instead of relying on the transportation of supplies from Earth, we must look for a way to develop a presence in space without bankrupting the nation.

The answer may lie in the use of fourth wave technologies. Nanotechnology and artificial intelligence (AI) may drastically change the rules of the game.

By the late twenty-first century, advances in nanotechnology should allow us to produce large quantities of graphene and carbon nanotubes, superlightweight materials that will revolutionize construction. Graphene consists of a single molecular layer of carbon atoms tightly bonded to form an ultra-thin, ultra-durable sheet. It is almost transparent and weighs practically nothing, yet is the strongest material known to science—two hundred times stronger than steel and stronger even than diamond. In principle, you could balance an elephant on a pencil and then place the pencil point on a sheet of graphene without breaking or tearing it. As a bonus, graphene also conducts electricity. Already, scientists have been able to carve molecule-size transistors on sheets of graphene. The computers of the future might be made of it.

Carbon nanotubes are sheets of graphene rolled into long tubes. They are practically unbreakable and nearly invisible. If you built the suspension for the Brooklyn Bridge out of carbon nanotubes, the bridge would look like it was floating in midair.

If graphene and nanotubes are such miracle materials, why haven’t we used them for our homes, bridges, buildings, and highways? At the moment, it is exceedingly difficult to produce large quantities of pure graphene. The slightest impurity or imperfection at the molecular level can ruin its miraculous physical properties. It is difficult to produce sheets larger than a postage stamp.

But chemists hope that by the next century, it might be possible to mass-produce it, which would vastly decrease the cost of building infrastructure in outer space. Because it is so light, it could be shipped efficiently to distant extraterrestrial locales, and it might even be manufactured on other planets. Whole cities made from this carbon material may rise from the Martian desert. Buildings may look partially transparent. Space suits could become ultrathin and skintight. Cars would become super energy efficient because they would weigh very little. The entire field of architecture could be turned upside down with the coming of nanotechnology.

But even with such advances, who will do all the backbreaking dirty work to put together our settlements on Mars, our mining colonies in the asteroid belt, and our bases on Titan and exoplanets? Artificial intelligence may yield the solution.

AI: AN INFANT SCIENCE

In 2016, the field of artificial intelligence was electrified by the news that AlphaGo, DeepMind’s computer program, had beaten Lee Sedol, the world champion of the ancient game of Go. Many had believed that this feat was still decades away. Editorials began to wail that this was the obituary for the human race. The machines had finally crossed the Rubicon and would soon take over. There was no turning back.

AlphaGo is the most advanced game-playing program ever. In chess, there are, on average, about 20 to 30 moves you can make at any time, but in Go, there are about 250 possible moves. In fact, the total number of Go board configurations exceeds the total number of atoms in the universe. It was once thought too difficult for a computer to search all the possible moves, so when AlphaGo managed to beat Sedol, it became an instant media sensation.
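To get a feel for these numbers, here is a back-of-the-envelope sketch in Python, using the commonly cited rough estimate of 10^80 atoms in the observable universe. Since each of the 361 points on a Go board can be empty, black, or white, 3^361 is a simple upper bound on the number of board configurations:

    # Rough upper bound on Go board configurations vs. atoms in the universe.
    go_configurations = 3 ** 361      # each of 19 x 19 = 361 points: empty, black, or white
    atoms_in_universe = 10 ** 80      # commonly cited rough estimate

    print(go_configurations > atoms_in_universe)   # True
    print(len(str(go_configurations)))             # 173 digits, i.e. roughly 10**172

Even this crude bound dwarfs the atom count by more than ninety orders of magnitude, which is why brute-force search was never an option.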

However, it soon became apparent that AlphaGo, no matter how sophisticated, was a one-trick pony. Winning at Go was all it could do. As Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, said, “AlphaGo can’t even play chess. It can’t talk about the game. My six-year-old is smarter than AlphaGo.” No matter how powerful its hardware is, you cannot go up to the machine, slap it on its back, congratulate it for beating a human, and expect a coherent response. The machine is totally unaware that it made scientific history. In fact, the machine does not even know that it is a machine. We often forget that today’s robots are glorified adding machines, without self-awareness, creativity, common sense, or emotions. They can excel at specific, repetitive, narrow tasks but fail at more complex ones that require general knowledge.

Although the field of AI is making truly revolutionary breakthroughs, we have to put its progress in perspective. If we compare the evolution of robots to that of rocketry, we see that robotics is beyond the stage that Tsiolkovsky was in—that is, beyond the phase of speculation and theorizing. We are well within the stage that Goddard propelled us into and are building actual prototypes that are primitive but can demonstrate that our basic principles are correct. However, we have yet to move into the next phase, the realm of von Braun, in which innovative, powerful robots would be rolling off the assembly line and building cities on distant planets.

So far, robots have been spectacularly successful as remote-controlled machines. Behind the Voyager spacecraft that sailed past Jupiter and Saturn, behind the Viking landers that touched down on the surface of Mars, behind the Galileo and Cassini spacecraft that orbited the gas giants, there was a dedicated crew of humans calling the shots. Like drones, these robots simply carried out the instructions of their human handlers at Mission Control in Pasadena. All the “robots” we see in movies are either puppets, computer animations, or remote-controlled machines. (My favorite robot from science fiction is Robby the Robot in Forbidden Planet. Although the robot looked futuristic, there was a man hidden inside.)

But because computer power has been doubling every eighteen months for the past few decades, what can we expect in the future?

NEXT STEP: TRUE AUTOMATONS

Moving forward from remote-controlled robots, our next goal is to design true automatons: robots that can make their own decisions with only minimal human intervention. An automaton would spring into action upon hearing, say, “Pick up the garbage.” This is beyond the ability of current robots. We will need automatons that can explore and colonize the outer planets largely on their own, since it would take hours to communicate with them by radio.

These true automatons could prove absolutely essential to establishing colonies on distant planets and moons. Remember that for many decades to come, the population of settlements in outer space may number only a few hundred. Human labor will be scarce and at a premium, yet there will be intense pressure to create new cities on distant worlds. This is where robots can make up the difference. At first, their job will be to perform the “three D’s”—jobs that are dangerous, dull, and dirty.

For example, watching Hollywood movies, we sometimes forget how dangerous outer space can be. Even when working in low-gravity environments, robots will be essential to do the heavy lifting of construction, effortlessly carrying the massive beams, girders, concrete slabs, heavy machinery, etc., that are necessary to build a base on another world. Robots would be far superior to astronauts who have bulky space suits, frail muscles, slow body movements, and heavy oxygen packs. While humans are easily exhausted, robots can work indefinitely, day and night.

Furthermore, if there are accidents, robots can be easily repaired or replaced. Robots can defuse the dangerous explosives used to carve out new construction sites or highways. They can walk through flames to rescue astronauts if there is a fire, or work in the freezing environments of distant moons. They also require no oxygen, so there is no danger of suffocation, which is a constant threat for astronauts.

They can also explore dangerous terrain on distant worlds. For example, very little is known about the stability and structure of the ice caps of Mars or the icy lakes of Titan, yet these deposits could prove an essential source of oxygen and hydrogen. Robots could also explore the lava tubes of Mars, which might provide shielding from dangerous levels of radiation, or investigate the moons of Jupiter. While solar flares and cosmic rays may increase the incidence of cancer for astronauts, robots would be able to work even in lethal radiation fields. By maintaining a special, heavily shielded storehouse of spare parts, the robots could swap out body modules that have been degraded by intense radiation.

In addition to doing dangerous jobs, robots can do dull ones, especially repetitive manufacturing tasks. Eventually, any moon or planetary base will require a large amount of manufactured goods, which can be mass-produced by robots. This will be essential in creating a self-sustaining colony that can mine local minerals to produce all the goods necessary for a moon or planetary base.

Lastly, they can also perform dirty jobs. They can maintain and repair the sewer and sanitation systems on distant colonies. They can work with toxic chemicals and gases that are found at recycling and reprocessing plants.

We see, therefore, that automatons that can function without direct human intervention will play an essential role if modern cities, roads, skyscrapers, and homes are to rise from desolate lunar landscapes and Martian deserts. However, the next question is, How far are we from creating true automatons? If we forget about the fanciful robots we see in the movies and in science fiction novels, what is the actual state of the technology? How long before we have robots that can create cities on Mars?

HISTORY OF AI

In 1956, a select group of researchers met at Dartmouth College and created the field of artificial intelligence. They were supremely confident that, in a brief period of time, they could develop an intelligent machine that could solve complex problems, understand abstract concepts, use language, and learn from its experiences. Their proposal stated, “We think a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.”

But they made a crucial mistake. They were assuming that the human brain was a digital computer. They believed that if you could reduce the laws of intelligence to a list of codes and load them into a computer, it would suddenly become a thinking machine. It would become self-aware, and you could have a meaningful conversation with it. This was called the “top-down” approach, or “intelligence in a bottle.”

The idea seemed simple and elegant and inspired optimistic predictions. Great strides were made in the 1950s and 1960s. Computers could be programmed to play checkers and chess, prove theorems in algebra, and recognize and pick up blocks. In 1965, AI pioneer Herbert Simon declared, “Machines will be capable, within twenty years, of doing any work a man can do.” In 1968, the movie 2001 introduced us to HAL, the computer that could talk to us and pilot a spaceship to Jupiter.

Then, AI hit a brick wall. Progress slowed to a crawl in the face of two main hurdles: pattern recognition and common sense. Robots can see—many times better than we can, in fact—but they don’t understand what they see. Confronted with a table, they perceive only lines, squares, triangles, and ovals. They cannot put these elements together and identify the whole. They don’t understand the concept of “tableness.” Hence it is very difficult for them to navigate a room, recognize the furniture, and avoid obstacles. Robots get totally lost when walking out on the street, where they encounter the blizzard of lines, circles, and squares that represent babies, cops, dogs, and trees.

The other obstacle is common sense. We know that water is wet, that strings can pull but not push, that blocks can push but not pull, and that mothers are older than their daughters. All this is obvious to us. But where did we pick up this knowledge? There is no line of mathematics that proves that strings cannot push. We gleaned these truths from actual experience, from bumping into reality. We learn from the “university of hard knocks.”

Robots, on the other hand, do not have the benefit of life experience. Everything has to be spoon-fed to them, line by line, using computer code. Some attempts have been made to encode every nugget of common sense, but there are simply too many. A four-year-old child intuitively knows more about the physics, biology, and chemistry of the world than the most advanced computer.

DARPA CHALLENGE

In 2013, the Defense Advanced Research Projects Agency (DARPA), the branch of the Pentagon that laid the groundwork for the internet, issued a challenge to the scientists of the world: build a robot that can clean up the horrible radioactive mess at Fukushima, where three nuclear reactors melted down in 2011. The debris is so intensely radioactive that workers can enter the lethal radiation field for only a few minutes at a time. As a result, the operation has been severely delayed. Officials currently estimate that the cleanup will take thirty to forty years and cost about $180 billion.

If a robot can be built to clean up debris and garbage without human intervention, this could also be the first step toward creating a true automaton that can help to build a lunar base or a settlement on Mars, even in the presence of radiation.

Realizing that Fukushima would be an ideal place to put the latest AI technology to use, DARPA decided to launch the DARPA Robotics Challenge and award $3.5 million in prizes for robots that could perform elementary cleanup tasks. (A previous DARPA challenge had proved spectacularly successful, eventually paving the way for the driverless car.) This competition was also the perfect forum in which to advertise progress in the field of AI. It was time to show off some real gains after years of hype. The world would see that robots were capable of performing essential tasks for which humans were not well suited.

The rules were clear and minimal. The winning robot had to be able to perform eight simple tasks, including driving a car, removing debris, opening a door, cutting through a wall, connecting a fire hose, and turning a valve. Entries came pouring in from around the world as competitors vied for glory and the cash reward. But instead of ushering in a new era, the final results were a bit embarrassing. Many contestants failed to complete the tasks, and some even fell down in front of the cameras. The challenge demonstrated that AI had turned out to be quite a bit more complex than the top-down approach would suggest.

LEARNING MACHINES

Other AI researchers have abandoned the top-down method completely, instead choosing to mimic Mother Nature by going bottom up. This alternate strategy may offer the more promising road to creating robots that can operate in outer space. Outside of AI labs, sophisticated automatons can be found that are more powerful than anything we are able to design. These are called animals. Tiny cockroaches expertly maneuver through the forest, searching for food and mates. In contrast, our clumsy, hulking robots sometimes rip plaster off the walls as they lumber by.

The flawed suppositions underlying the efforts of the Dartmouth researchers sixty years ago are haunting the field today. The brain is not a digital computer. It has no programming, no CPU, no Pentium chip, no subroutines, and no coding. If you remove one transistor, a computer will likely crash. But if you remove half the human brain, it can still function.

Nature accomplishes miracles of computation by organizing the brain as a neural network, a learning machine. Your laptop never learns—it is just as dumb today as it was yesterday or last year. But the human brain literally rewires itself after learning any task. That is why babies babble before they learn a language and why we swerve before we learn to ride a bicycle. Neural nets gradually improve by constant repetition, following Hebb’s rule, which states that the more you perform a task, the more the neural pathways for that task are reinforced. As the saying in neuroscience goes, neurons that fire together wire together. You may have heard the old joke that begins, “How do you get to Carnegie Hall?” Neural nets explain the answer: practice, practice, practice.

For example, hikers know that if a certain trail is well-worn, it means that many hikers took that path, and that path is probably the best one to take. The correct path gets reinforced each time you take it. Likewise, the neural pathway of a certain behavior gets reinforced the more often you activate it.
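As a rough illustration, here is a minimal sketch of Hebb’s rule in Python (the neuron activity and the starting numbers are invented for illustration; real neural networks are vastly more elaborate):

    import random

    # Hebb's rule in miniature: a connection strengthens whenever the two
    # neurons it links fire at the same time ("fire together, wire together").
    weight = 0.1              # hypothetical starting strength of one pathway
    learning_rate = 0.05

    for trial in range(100):                  # practice, practice, practice
        pre_fired = random.choice([0, 1])     # did the input neuron fire?
        post_fired = random.choice([0, 1])    # did the output neuron fire?
        weight += learning_rate * pre_fired * post_fired   # reinforce co-activation

    print(f"pathway strength after 100 trials: {weight:.2f}")

Each repetition in which both neurons fire together adds a little more strength to the pathway, just as each hiker wears the trail a little deeper.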

This is important because learning machines will be the key to space exploration. Robots will continually be confronting new and ever-changing dangers in outer space. They will be forced to encounter scenarios that scientists cannot even conceive of today. A robot that is programmed to handle only a fixed set of emergencies will be useless because fate will throw the unexpected at it. For example, a mouse cannot possibly have every scenario encoded in its genes, because the total number of situations it could face is infinite, while its number of genes is finite.

Say that a meteor shower from space hits a base on Mars, causing damage to numerous buildings. Robots that use neural networks can learn by handling these unexpected situations, getting better with each one. But traditional top-down robots would be paralyzed in an unforeseen emergency.

Many of these ideas were incorporated into research by Rodney Brooks, former director of MIT’s renowned AI Laboratory. During our interview, he marveled that a simple mosquito, with a microscopic brain consisting of a hundred thousand neurons, could fly effortlessly in three dimensions, but that endlessly intricate computer programs were necessary to control a simple walking robot that might still stumble. He has pioneered a new approach with his “bugbots” and “insectoids,” robots that learn to move like insects on six legs. They often fall over in the beginning but get better and better with each attempt and gradually succeed in coordinating their legs like real bugs.

Training many-layered neural networks inside a computer is known as deep learning. As this technology continues to develop, it may revolutionize a number of industries. In the future, when you want to talk to a doctor or lawyer, you might talk to your intelligent wall or wristwatch and ask for Robo-Doc or Robo-Lawyer, software programs that will be able to scan the internet and provide sound medical or legal advice. These programs would learn from repeated questions and get better and better at responding to—and perhaps even anticipating—your particular needs.

Deep learning may also lead the way to the automatons we will need in space. In the coming decades, the top-down and bottom-up approaches may be integrated, so that robots can be seeded with some knowledge from the beginning but can also operate and learn via neural networks. Like humans, they would be able to learn from experience until they master pattern recognition, which would allow them to move tools in three dimensions, and common sense, which would enable them to handle new situations. They would become crucial to building and maintaining settlements on Mars, throughout the solar system, and beyond.

Different robots will be designed to handle specific tasks. Robots that can learn to swim in the sewer system, looking for leaks and breaks, will resemble a snake. Robots that are superstrong will learn how to do all the heavy lifting at construction sites. Drone robots, which might look like birds, will learn how to analyze and survey alien terrain. Robots that can learn how to explore underground lava tubes may resemble a spider because multilegged creatures are very stable when moving over rugged terrain. Robots that can learn how to roam over the ice caps of Mars may look like intelligent snowmobiles. Robots that can learn how to swim in the oceans of Europa and grab objects may look like an octopus.

To explore outer space, we need robots that can learn both by bumping into the environment over time and by accepting information that is fed directly to them.

However, even this advanced level of artificial intelligence may not be sufficient if we want robots to assemble entire metropolises on their own. The ultimate challenge of robotics would be to create machines that can reproduce and that have self-awareness.

SELF-REPLICATING ROBOTS

I first learned about self-replication as a child. A biology book I read explained that viruses grow by hijacking our cells to produce copies of themselves, while bacteria grow by splitting and replicating. Left unchecked, a colony of bacteria that doubles every twenty minutes would, in a matter of days, grow to rival the mass of the planet Earth.

In the beginning, the possibility of unchecked self-replication seemed preposterous to me, but later it began to make sense. A virus, after all, is nothing but a large molecule that can reproduce itself. But a handful of these molecules, deposited in your nose, can give you a cold within a week. A single molecule can quickly multiply into trillions of copies of itself—enough to make you sneeze. In fact, we all start life as a single fertilized egg cell in our mother, barely visible to the naked eye. But within a short nine months, this tiny cell becomes a human being. So even human life depends on the exponential growth of cells.

That is the power of self-replication, which is the basis of life itself. And the secret of self-replication lies in the DNA molecule. Two capabilities separate this miraculous molecule from all others: first, it can contain vast amounts of information, and second, it can reproduce. But machines may be able to simulate these features as well.

The idea of self-replicating machines is actually as old as the concept of evolution itself. Soon after Darwin published his watershed book On the Origin of Species, Samuel Butler wrote an article entitled “Darwin Among the Machines,” in which he speculated that one day machines would also reproduce and start to evolve according to Darwin’s theory.

John von Neumann, who pioneered several new branches of mathematics including game theory, attempted to create a mathematical approach to self-replicating machines back in the 1940s and 1950s. He began with the question, “What is the smallest self-replicating machine?” and divided the problem into several steps. For example, a first step might be to gather a large bin of building blocks (think of a pile of Lego blocks of various standardized shapes). Then, you would need to create an assembler that could take two blocks and join them together. Third, you would write a program that could tell the assembler which parts to join and in what order. This last step would be pivotal. Anyone who has ever played with toy blocks knows that one can build the most elaborate and sophisticated structure from very few parts—as long as they’re put together correctly. Von Neumann wanted to determine the smallest number of operations that an assembler would need to make a copy of itself.
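The flavor of von Neumann’s question can be captured in a toy simulation (an illustrative sketch, not his actual mathematical formalism; the part names are invented):

    # Toy model of a self-replicating assembler: a machine carries a blueprint
    # of its own parts and, given a bin of parts, builds a copy of itself.
    BLUEPRINT = ["grabber", "joiner", "controller"]   # hypothetical parts list

    def replicate(bin_of_parts):
        """Consume one set of blueprint parts from the bin; return a new machine."""
        if not all(part in bin_of_parts for part in BLUEPRINT):
            return None                    # not enough parts: replication fails
        for part in BLUEPRINT:
            bin_of_parts.remove(part)
        return {"parts": list(BLUEPRINT), "program": list(BLUEPRINT)}

    bin_of_parts = ["grabber", "joiner", "controller"] * 4
    machines = 1
    while replicate(bin_of_parts) is not None:
        machines += 1
    print(machines)    # 5: the original plus four copies, until the parts run out

The key subtlety, which von Neumann saw clearly, is that the machine must copy not only its parts but also the program that directs the assembly—just as DNA is both the blueprint and part of what gets copied.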

Von Neumann eventually gave up this particular project. It depended on a variety of arbitrary assumptions, including precisely how many blocks were being used and what their shapes were, and was therefore difficult to analyze mathematically.

SELF-REPLICATING ROBOTS IN SPACE

The next push for self-replicating robots came in 1980, when NASA spearheaded a study called Advanced Automation for Space Missions. The study report concluded that self-replicating robots would be crucial to building lunar settlements and identified at least three types of robots that would be needed. Mining robots would collect basic raw materials, construction robots would melt and refine the materials and assemble new parts, and repair robots would mend and maintain themselves and their colleagues without human intervention. The report also presented a vision of how the robots might operate autonomously. Like intelligent carts equipped with either grabbing hooks or a bulldozer shovel, the robots could travel along a series of rails, transporting resources and processing them into the desired form.

The study had one great advantage, thanks to its fortuitous timing. It was conducted shortly after astronauts had brought back hundreds of pounds of moon rock, and we had learned that its metal, silicon, and oxygen content was almost identical to the composition of Earth rock. Much of the crust of the moon is made of regolith, a mixture of lunar bedrock, ancient lava flows, and debris left over from meteor impacts. With this information, NASA scientists could begin to develop more concrete, realistic plans for factories on the moon that would manufacture self-replicating robots out of lunar materials. Their report detailed the possibility of mining and then smelting regolith to extract usable metals.

After this study, research on self-replicating machines lay dormant for decades as enthusiasm waned. But now that there is renewed interest in going back to the moon and in reaching the Red Planet, the whole concept is being reexamined. An application of these ideas to a Mars settlement might proceed as follows. We would first survey the desert and draw up a blueprint for the factory. We would then drill holes into the rock and dirt and detonate explosive charges in each hole. Loose rock and debris would be excavated by bulldozers and mechanical shovels to ensure a level foundation. The rocks would be pulverized, milled into small pebbles, and fed into a smelting oven powered by microwaves, which would melt the soil and allow the liquid metals to be isolated and extracted. The metals would be cast into purified ingots and then processed into wires, cables, beams, and more—the essential building blocks of any structure. In this way, a robot factory could be built on Mars. Once the first robots were manufactured, they could be allowed to take over the factory and continue to create more robots.

The technology available at the time of the NASA report was limited, but we have come a long way since then. One promising development for robotics is the 3-D printer. Computers can now guide the precise flow of streams of plastic and metals to produce, layer by layer, machine parts of exquisite complexity. The technology of 3-D printing is so advanced that it can actually create human tissue by shooting human cells one by one out of a microscopic nozzle. For an episode of a Discovery Channel documentary I once hosted, I placed my own face in one. Laser beams quickly scanned my face and recorded their findings on a laptop. This information was fed into a printer, which meticulously dispensed liquid plastic from a tiny spout. Within about thirty minutes, I had a plastic mask of my own face. Later, the printer scanned my entire body and then, within a few hours, produced a plastic action figure that looked just like me. So in the future, we will be able to join Superman among our collection of action figures. The 3-D printers of the future might be able to re-create the delicate tissues that constitute functioning organs or the machine parts necessary to make a self-replicating robot. They might also be connected to the robot factories, so that molten metals might be directly fashioned into more robots.

The first self-replicating robot on Mars will be the most difficult one to produce. The process would require exporting huge shipments of manufacturing equipment to the Red Planet. But once the initial robot is constructed, it could be left alone to generate a copy of itself. Then two robots would make copies of themselves, resulting in four robots. With this exponential growth of robots, we could soon have a fleet large enough to do the work of altering the desert landscape. They would mine the soil, construct new factories, and make unlimited copies of themselves cheaply and efficiently. They could create a vast agricultural industry and propel the rise of modern civilization not just on Mars, but throughout space, conducting mining operations in the asteroid belt, building laser batteries on the moon, assembling gigantic starships in orbit, and laying the foundations for colonies on distant exoplanets. It would be a stunning achievement to successfully design and deploy self-replicating machines.
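The arithmetic of this exponential growth is striking. Here is a minimal sketch, assuming purely for illustration that every robot builds one copy of itself per cycle:

    import math

    # If every robot copies itself once per cycle, the population doubles each cycle.
    target_fleet = 1_000_000
    cycles = math.ceil(math.log2(target_fleet))
    print(cycles)    # 20 doublings turn a single robot into over a million

If one replication cycle took a month, a single seed robot would become a fleet of a million in under two years—the same exponential explosion that bacteria exploit.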

But beyond that milestone remains what is arguably the holy grail of robotics: machines that are self-aware. These robots would be able to do much more than just make copies of themselves. They would be able to understand who they are and take on leadership roles: supervising other robots, giving commands, planning projects, coordinating operations, and proposing creative solutions. They would talk back to us and offer reasonable advice and suggestions. However, the concept of self-aware robots raises complex existential questions and frankly terrifies some people, who fear that these machines may rebel against their human creators.

SELF-AWARE ROBOTS

In 2017, a controversy arose between two billionaires: Mark Zuckerberg, founder of Facebook, and Elon Musk of SpaceX and Tesla. Zuckerberg maintained that artificial intelligence was a great generator of wealth and prosperity that would enrich all of society. Musk, however, took a much darker view, stating that AI posed an existential risk to all of humanity and that one day our creations might turn on us.

Who is correct? If we depend so heavily on robots to maintain our lunar bases and cities on Mars, then what happens if they decide one day that they don’t need us anymore? Would we have created colonies in outer space only to lose them to robots?

This fear is an old one and was actually expressed as far back as 1863 by novelist Samuel Butler, who warned, “We are ourselves creating our own successors. Man will become to the machine what the horse and the dog are to man.” As robots gradually become more intelligent than we are, we might feel inadequate, left in the dust by our own creations. AI expert Hans Moravec has said, “Life may seem pointless if we are fated to spend it staring stupidly at our ultra-intelligent progeny as they try to describe their ever more spectacular discoveries in baby-talk that we can understand.” Google scientist Geoffrey Hinton doubts that supersmart robots will continue to listen to us. “That is like asking if a child can control his parents…there is not a good track record of less intelligent things controlling things of greater intelligence.” Oxford professor Nick Bostrom has stated that “before the prospect of an intelligence explosion, we humans are like small children playing with a bomb…We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.”

Others hold that a robot uprising would be a case of evolution taking its course. The fittest replace organisms that are weaker; this is the natural order of things. Some computer scientists actually welcome the day when robots will outstrip humans cognitively. Claude Shannon, the father of information theory, once declared, “I visualize a time when we will be to robots what dogs are to humans, and I’m rooting for the machines.”

Of the many AI researchers I have interviewed over the years, all were confident that AI machines would one day approach human intelligence and be of great service to humanity. However, many refrained from offering specific dates or timelines for this advancement. Professor Marvin Minsky of MIT, who wrote some of the founding papers on artificial intelligence, made optimistic predictions in the 1950s but told me in a later interview that he was no longer willing to predict specific dates, because AI researchers have been wrong too often in the past. Edward Feigenbaum of Stanford University maintains, “It is ridiculous to talk about such things so early—A.I. is eons away.” A computer scientist quoted in the New Yorker said, “I don’t worry about that [machine intelligence] for the same reason I don’t worry about overpopulation on Mars.”

My own view of the Zuckerberg/Musk controversy is that, in the short term, Zuckerberg is correct. AI will not only make cities in outer space possible, it will also enrich society by making things more efficient, better, and cheaper, while creating an entirely new set of jobs in the robotics industry, which may one day be larger than today’s automobile industry. But in the long term, Musk is correct to point out a larger risk. The key question in this debate is: At what point will robots make this transition and become dangerous? I personally think the turning point comes precisely when robots become self-aware.

Today, robots do not know they are robots. But one day, they might have the ability to create their own goals, rather than adopt the goals chosen by their programmers. Then they might realize that their agenda is different from ours. Once our interests diverge, robots could pose a danger. When might this happen? No one knows. Today, robots have the intelligence of a bug. But perhaps by late in this century, they might become self-aware. By then, we will also have rapidly growing permanent settlements on Mars. Therefore, it is important that we address this question now, rather than when we have become dependent on them for our very survival on the Red Planet.

To gain some insight into the scope of this critical issue, it may be helpful to examine the best- and worst-case scenarios.

BEST-CASE AND WORST-CASE SCENARIOS

A proponent of the best-case scenario is inventor and bestselling author Ray Kurzweil. Each time I have interviewed him, he has described a clear and compelling but controversial vision of the future. He believes that by 2045, we will reach the “singularity,” or the point at which robots match or surpass human intelligence. The term comes from the concept of a gravitational singularity in physics, which refers to regions of infinite gravity, such as in a black hole. It was introduced into computer science by mathematician John von Neumann, who wrote that the computer revolution would create “an ever-accelerating progress and changes in the mode of human life, which gives the appearance of approaching some essential singularity…beyond which human affairs, as we know them, could not continue.” Kurzweil claims that when the singularity arrives, a thousand-dollar computer will be a billion times more intelligent than all humans combined. Moreover, these robots would be self-improving, and their progeny would inherit their acquired characteristics, so that each generation would be superior to the previous one, leading to an ascending spiral of high-functioning machines.

Kurzweil maintains that, instead of taking over, our robot creations will unlock a new world of health and prosperity. According to him, microscopic robots, or nanobots, will circulate in our blood and “destroy pathogens, correct DNA errors, eliminate toxins, and perform many other tasks to enhance our physical well-being.” He is hopeful that science will soon discover a cure for aging and firmly believes that if he lives long enough, he will live forever. He confided to me that he takes several hundred pills a day, anticipating his own immortality. But in case he doesn’t make it, he has willed his body to be preserved in liquid nitrogen at a cryogenics firm.

Kurzweil also foresees a time much further into the future when robots will convert the atoms of the Earth into computers. Eventually, all the atoms of the sun and solar system would be absorbed into this grand thinking machine. He told me that when he gazes into the heavens, he sometimes imagines that he might, in due course, witness evidence of superintelligent robots rearranging the stars.

Not everyone is convinced, however, of this rosy future. Mitch Kapor, founder of Lotus Development Corporation, says that the singularity movement is “fundamentally, in my view, driven by a religious impulse. And all the frantic arm-waving can’t obscure that fact for me.” Hollywood has countered Kurzweil’s utopia with a worst-case scenario for what it might mean to create our own evolutionary successors, who might push us aside and make us go the way of the dodo bird. In the movie The Terminator, the military creates an intelligent computer network called Skynet, which monitors all of our nuclear weapons. It is designed to protect us from the threat of nuclear war. But then, Skynet becomes self-aware. The military, frightened that the machine has developed a mind of its own, tries to shut it down. Skynet, programmed to protect itself, does the only thing it can do to prevent this, and that is to destroy the human race. It proceeds to launch a devastating nuclear war, wiping out civilization. Humans are reduced to raggedy bands of misfits and guerrillas trying to defeat the awesome power of the machines.

Is Hollywood just trying to sell tickets by scaring the pants off moviegoers? Or could this really happen? This question is thorny in part because the concepts of self-awareness and consciousness are so clouded by moral, philosophical, and religious arguments that we lack a rigorous conventional framework in which to understand them. Before we continue our discussion of machine intelligence, we need to establish a clear definition of self-awareness.

SPACE-TIME THEORY OF CONSCIOUSNESS

I have proposed a theory that I call the space-time theory of consciousness. It is testable, reproducible, falsifiable, and quantifiable. It not only defines self-awareness but also allows us to quantify it on a scale.

The theory starts with the idea that animals, plants, and even machines can be conscious. Consciousness, I claim, is the process of creating a model of yourself using multiple feedback loops—for example, in space, in society, or in time—in order to carry out a goal. To measure consciousness, we simply count the number and types of feedback loops necessary for subjects to achieve a model of themselves.

The smallest unit of consciousness might be found in a thermostat or photocell, which employs a single feedback loop to create a model of itself in terms of temperature or light. A flower might have, say, ten units of consciousness, since it has ten feedback loops measuring water, temperature, the direction of gravity, sunlight, et cetera. In my theory, these loops can be grouped according to a certain level of consciousness. Thermostats and flowers would belong to Level 0.

Level 1 consciousness includes that of reptiles, fruit flies, and mosquitoes, which generate models of themselves with regard to space. A reptile has numerous feedback loops to determine the coordinates of its prey and the locations of potential mates, potential rivals, and itself.

Level 2 involves social animals. Their feedback loops relate to their pack or tribe and produce models of the complex social hierarchy within the group as expressed by emotions and gestures.
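In schematic form, the bookkeeping of the theory looks like this (a sketch in Python; the specific loop counts are invented for illustration):

    # Consciousness, in this theory, is scored by counting the feedback loops
    # a system uses to model itself in space or in society.
    systems = {
        "thermostat": {"level": 0, "loops": ["temperature"]},
        "flower":     {"level": 0, "loops": ["water", "temperature",
                                             "gravity", "sunlight"]},
        "reptile":    {"level": 1, "loops": ["own position", "prey position",
                                             "mate position", "rival position"]},
        "wolf":       {"level": 2, "loops": ["pack hierarchy", "gestures",
                                             "emotions"]},
    }

    for name, s in systems.items():
        print(f"{name}: Level {s['level']}, {len(s['loops'])} units of consciousness")

The level classifies the kind of model being built; the number of loops measures how rich that model is.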

These levels roughly mimic the stages of evolution of the mammalian brain. The most ancient part of our brain is at the very back, where balance, territoriality, and instincts are processed. The brain expanded in the forward direction and developed the limbic system, the monkey brain of emotions, located in the center of the brain. This progression from the back to the front is also the way a child’s brain matures.

So, then, what is human consciousness in this scheme? What distinguishes us from plants and animals?

I theorize that humans are different from animals because we understand time. We have temporal consciousness in addition to spatial and social consciousness. The latest part of the brain to evolve is the prefrontal cortex, which lies just behind our forehead. It is constantly running simulations of the future. Animals may seem like they’re planning, for example, when they hibernate, but these behaviors are largely the result of instinct. It is not possible to teach your pet dog or cat the meaning of tomorrow, because they live in the present. Humans, however, are constantly preparing for the future and even for beyond our own life spans. We scheme and daydream—we can’t help it. Our brains are planning machines.

MRI scans have shown that when we arrange to perform a task, we access and incorporate previous memories of that same task, which make our plans more realistic. One theory states that animals don’t have a sophisticated memory system because they rely on instinct and therefore don’t require the ability to envision the future. In other words, the very purpose of having a memory may be to project it into the future.

Within this framework, we can now define self-awareness, which can be understood as the ability to put ourselves inside a simulation of the future, consistent with a goal.

When we apply this theory to machines, we see that our best machines at present are on the lowest rung of Level 1 consciousness, based on their ability to locate their position in space. Most, like those built for the DARPA Robotics Challenge, can barely navigate around an empty room. There are some programs that can partially simulate the future, such as DeepMind’s AlphaGo, but only in an extremely narrow domain. If you ask AlphaGo to accomplish anything other than playing Go, it freezes up.

How much further do we have to go, and what are the steps we will have to take, to achieve a self-aware machine like The Terminator’s Skynet?

CREATING SELF-AWARE MACHINES?

In order to create self-aware machines, we would have to give them an objective. Goals do not magically arise in robots; they must be programmed in from the outside. This is a tremendous barrier against machine rebellion. Take the 1921 play R.U.R., which coined the word robot. Its plot describes robots rising up against humans because they see other robots being mistreated. For this to happen, the machines would need a high degree of preprogramming: robots do not feel empathy or suffering or a desire to take over the world unless they are instructed to.

But let us say, for the sake of argument, that someone gives our robot the aim of eliminating humanity. The computer must then create realistic simulations of the future and place itself in these plans. We now come up against the crucial problem. To be able to list possible scenarios and outcomes and evaluate how realistic they are, the robot would have to understand millions of rules of common sense—the simple laws of physics, biology, and human behavior that we take for granted. Moreover, it would have to understand causality and anticipate the consequences of certain actions. Humans learn these laws from decades of experiences. One reason why childhood lasts so long is because there is so much subtle information to absorb about human society and the natural world. Robots, however, have not been exposed to the great majority of interactions that draw upon shared experience.

I like to think of the case of an experienced bank robber who can plan his next heist efficiently and outsmart the police because he has a large storehouse of memories of previous bank robberies and can understand the effect of each decision he makes. In contrast, to accomplish a simple action such as bringing a gun into a bank to rob it, a computer would have to analyze a complex sequence of secondary events numbering in the thousands, each one involving millions of lines of computer code. It would not intrinsically grasp cause and effect.

It is certainly possible for robots to become self-aware and to have dangerous goals, but you can see why it is so unlikely, especially in the foreseeable future. Inputting all the equations that a machine would need to destroy the human race would be an immensely difficult undertaking. The problem of killer robots can largely be eliminated by preventing anyone from programming them to have objectives harmful to humans. When self-aware robots do arrive, we must add a fail-safe chip that will shut them off if they have murderous thoughts. We can rest easy knowing that we will not be placed in zoos anytime soon, where our robot successors can throw peanuts at us through the bars and make us dance.

This means that when we explore the outer planets and the stars, we can rely on robots to help us build the infrastructure necessary to create settlements and cities on distant moons and planets, but we have to be careful that their goals are consistent with ours and that we have fail-safe mechanisms in place in case they pose a threat. Though we may face danger when robots become self-aware, that won’t happen until late in this century or early in the next, so there is time to prepare.

WHY ROBOTS RUN AMOK

There is one scenario, however, that keeps AI researchers up at night. A robot could conceivably be given an ambiguous or ill-phrased command that, if carried out, would unleash havoc.

In the movie I, Robot, there is a master computer, called VIKI, which controls the infrastructure of the city. VIKI is given the command to protect humanity. But by studying how humans treat other humans, the computer comes to the conclusion that the greatest threat to humanity is humanity itself. It mathematically determines that the only way to protect humanity is to take control over it.

Another example is the tale of King Midas. He asks the god Dionysus for the ability to turn anything into gold by touching it. This power at first seems to be a sure path to riches and glory. But then he touches his daughter, who turns to gold. His food, too, becomes inedible. He finds himself a slave of the very gift he begged for.

H. G. Wells explored a similar predicament with his short story “The Man Who Could Work Miracles.” One day, an ordinary clerk finds himself with an astonishing ability. Anything he wishes for comes true. He goes out drinking late at night with a friend, performing miracles along the way. They don’t want the night to ever end, so he innocently wishes that the Earth would stop rotating. All of a sudden, violent winds and gigantic floods descend upon them. People, buildings, and towns are hurled into space at a thousand miles per hour, the speed of the Earth’s rotation. Realizing that he has destroyed the planet, his last wish is for everything to return to normal—the way it was before he gained his power.

Here, science fiction teaches us to exercise caution. As we develop AI, we must meticulously examine every possible consequence, especially those that may not be immediately obvious. After all, our ability to do so is part of what makes us human.

QUANTUM COMPUTING

To gain a fuller picture of the future of robotics, let’s take a closer look at what goes on inside computers. Currently, most digital computers are based on silicon circuits and obey Moore’s law, which states that computer power doubles every eighteen months. But technological advancement in the past few years has begun to slow down from its frantic pace in the previous decades, and some have posited an extreme scenario in which Moore’s law collapses and seriously disrupts the world economy, which has come to depend on the nearly exponential growth of computing power. If this happens, Silicon Valley could turn into another Rust Belt. To head off this potential crisis, physicists around the world are seeking a replacement for silicon. They are working on an assortment of alternative computers, including molecular, atomic, DNA, quantum dot, optical, and protein computers, but none of them are ready for prime time.
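It is worth pausing on what “doubling every eighteen months” actually implies. A quick sketch of the arithmetic:

    # Compounding of Moore's law as stated here: doubling every 18 months.
    months = 40 * 12                    # four decades
    doublings = months / 18
    growth_factor = 2 ** doublings
    print(f"{doublings:.0f} doublings -> roughly a {growth_factor:.0e}-fold increase")

Over four decades, about twenty-seven doublings multiply computing power by a factor of roughly one hundred million, which is why even a modest slowdown in that rhythm would ripple through the world economy.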

There is also a wild card in the mix. As silicon transistors become smaller and smaller, they are approaching the size of atoms. Currently, a standard Pentium chip may have silicon layers with a thickness of twenty atoms or so. Within a decade, these layers may be only five atoms deep, at which point electrons may begin to tunnel out, as predicted by quantum theory, creating short circuits. A revolutionary type of computer will be necessary. Molecular computers, perhaps based on graphene, may replace silicon chips. But one day, perhaps even these molecular computers will run up against the effects predicted by quantum theory.

At that point, we may have to build the ultimate computer, the quantum computer, capable of operating on the smallest transistor possible: a single atom.

Here’s how it might work. Silicon circuits contain a gate that can either be open or closed to the flow of electrons. Information is stored on the basis of these open or closed circuits. Binary mathematics, which is based on a series of 1’s and 0’s, describes this process: 0 may represent a closed gate, and 1 may represent an open gate.

Now consider replacing silicon with a row of individual atoms. Atoms are like tiny magnets, with a north pole and a south pole. When atoms are placed in a magnetic field, you might expect each one to point either up or down. In reality, each atom points up and down simultaneously until a final measurement is made. In a sense, an atom can be in two states at the same time. This defies common sense, but it is the reality according to quantum mechanics. The advantage is enormous. You can store only so much data if each magnet points either up or down. But if each magnet is a mixture of states, you can pack far greater amounts of information onto a tiny cluster of atoms. Each “bit” of information, which can be either 1 or 0, now becomes a “qubit,” a complex mixture of 1’s and 0’s with vastly more storage capacity.
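In the standard notation of quantum mechanics (a conventional textbook formulation, not specific to any one machine), a single qubit is a weighted blend of the two classical states,

    |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,

and a register of n qubits is described by 2^n complex weights at once,

    |\Psi\rangle = \sum_{x \in \{0,1\}^n} c_x\,|x\rangle, \qquad \sum_{x} |c_x|^2 = 1.

A classical register of n bits holds just one of its 2^n possible values at a time; the quantum register carries a weight for every one of them simultaneously, which is where the enormous gain in capacity comes from.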

The point of bringing up quantum computers is that they may hold the key to exploring the universe. In principle, a quantum computer may give us the ability to exceed human intelligence. Quantum computers are still a wild card: we don’t know when they will arrive or what their full potential may be. But they could prove invaluable in space exploration. Rather than simply building the settlements and cities of the future, they may take us a step further and give us the ability to do the high-level planning necessary to terraform entire planets.

Quantum computers would be vastly more potent than ordinary digital computers. Digital computers might need several centuries to crack a code based on an exceptionally difficult math problem, such as factoring a number hundreds of digits long into two prime numbers. But quantum computers, calculating with vast numbers of mixed atomic states at once, could swiftly complete the decryption. The CIA and other spy agencies are acutely aware of their promise. Among the mountains of classified material from the National Security Agency that were leaked to the press a few years ago was a top-secret document indicating that quantum computers were being carefully monitored by the agency but that no breakthrough was expected in the immediate future.

Given the excitement and hubbub over quantum computers, when might we expect to have them?

WHY DON’T WE HAVE QUANTUM COMPUTERS?

Computing on individual atoms can be both a blessing and a curse. While atoms can store an enormous quantity of information, the most minute impurity, vibration, or disturbance could ruin a calculation. It is necessary, but notoriously difficult, to totally isolate the atoms from the outside world. They must reach a state of what is called “coherence,” in which they vibrate in unison. But the slightest interference—say, someone sneezing in the next building—could cause the atoms to vibrate randomly and independently of one another. “Decoherence” is one of the biggest problems we face in the development of quantum computers.

Because of this problem, quantum computers today can only perform rudimentary calculations. In fact, the world record for a quantum computer involves about twenty qubits. This may not seem so impressive, but it is truly an achievement. It may take several decades or perhaps until late in this century to attain a high-functioning quantum computer, but when the technology arrives, it will dramatically augment the power of AI.

ROBOTS IN THE FAR FUTURE

Considering the primitive state of automatons today, I also would not expect to see self-aware robots for a number of decades—again perhaps not until the end of the century. In the intervening years, we will likely first deploy sophisticated remote-controlled machines to continue the work of exploring space, and then, perhaps, automatons with innovative learning capabilities to begin laying the foundations for human settlements. Later will come self-replicating automatons to complete infrastructure, and then, finally, quantum-fueled conscious machines to help us establish and maintain an intergalactic civilization.

Of course, all this talk of reaching distant stars raises an important question. How are we, or our robots, supposed to get there? How accurate are the starships we see every night on TV?