The preceding chapters focused on the behavior of the one system known to undoubtedly create a mind: the human brain. Whether the human brain is the only system that can host a mind is a question worth discussing in some detail.
Having a mind is somehow related to having an intelligence, although it may be possible to envision mindful behavior without intelligence behind it and intelligent behavior without a mind behind it. Let us assume, for the sake of discussion, that we accept the premises of the Turing Test, and that we will recognize a system as intelligent if it passes some sophisticated version of that test. We may defer to a later time important questions related to the level of sophistication of such a test, including how long the interaction would last, which questions could be asked, how knowledgeable the interrogators would have to be, and many additional details that are relevant but not essential to this discussion.
One might think that a system that passes the Turing Test, and is therefore deemed intelligent, will necessarily have a mind of its own. This statement raises, among other questions, the question “What does it mean to have a mind?” After all, it is perfectly possible that a system behaves intelligently and passes the Turing Test, yet doesn’t have a mind of its own and only manages to fake having one. This question cannot be dismissed lightly. We will return to it later in this chapter.
In this book the word mind has already been used many times, most often in phrases such as “keep in mind” or in references to the “human mind.” These are familiar uses of the word mind, and they probably haven’t raised any eyebrows. However, anyone asked to define the meaning of the word mind more exactly will soon recognize that the concept is very slippery. The most commonly accepted definition of mind is “an emergent property of brain behavior that provides humans with a set of cognitive faculties, which include intelligence, consciousness, free will, reasoning, memory, and emotions.” We may discuss whether other higher animals, such as dogs and monkeys, have minds of their own, and we may be willing to concede that they do but that their minds are different from and somewhat inferior to our own.
Perhaps the most defining characteristic of a mind is that it provides its owner with consciousness and with all that consciousness entails. After all, we may grant that a program running in a computer can reason, can perform elaborate mathematical computations, can play chess better than a human champion, can beat humans in TV game shows, can translate speech, and can even drive vehicles, but we would still be hard pressed to believe that a program with those abilities has free will, that it is self-conscious, that it has the concept of self, and that it fears its own death. We tend to view consciousness, in this sense, as a thing reserved for humans, even though we cannot pinpoint exactly what it is and where it comes from. Thousands of brilliant thinkers have tackled this problem, but the solution remains elusive.
Experience has made us familiar with only one very particular type of mind: the human mind. This makes it hard to imagine or to think about other types of minds. We may, of course, imagine the existence of a computer program that behaves and interacts with the outside world exactly as a human would, and we may be willing to concede that such a program may have a mind; however, such a program doesn’t yet exist, and therefore the gap separating humans from machines remains wide.
Using a Turing Test to detect whether a computer is intelligent reveals an anthropocentric and restricted view of what a mind is. The Turing Test, which was conceived to remove anthropocentric predispositions, is still based on imitation of human behavior by a machine. It doesn’t really test whether a program has a mind; it tests only whether a program has a mind that mimics the human mind closely.
Many thinkers, philosophers, and writers have addressed the question of whether a non-human mind can emerge from the workings of a program running in a computer, in a network of computers, or in some other computational support. But until we have a more solid definition of what a mind is, we will not be able to develop a test that will be able to recognize whether a given system has a mind of its own—especially if the system has a mind very different from the human minds we are familiar with.
To get around the aforementioned difficulty, let us consider a more restricted definition of a mind, which I will call a field-specific mind: a property of a system that enables the system to behave intelligently in a human-like way in some specific field. For instance, a program that plays chess very well must have a field-specific mind for chess, and a program that drives vehicles well enough to drive them on public streets must have a field-specific mind for driving. Field-specific minds are, of course, more limited than human minds. However, in view of the inherent complexity of a system that can behave intelligently and in a human-like way in any reasonably complex field, it seems reasonable to ask the reader to accept that such a system must have some kind of a field-specific mind. Such a field-specific mind probably is simpler than a human mind and probably doesn’t give its owner consciousness, free will, self-awareness, or any of the characteristics that presumably emanate from self-awareness, such as fear of death.
A field-specific mind is an emergent property of a system that enables it to use some internal model of the world sufficiently well for the system to behave intelligently in some specific field and to act as intelligently and competently as a human in that field. It isn’t difficult to imagine a field-specific intelligence test in which a system is tested against humans and is deemed to have a field-specific mind if it passes the test. The most stringent test of that kind would be a Turing Test in which a system not only has to behave intelligently in many specific fields but also must emulate human-like intelligence and behavior in all those fields.
Obviously, there are fields so specific and simple that having a field-specific mind for them is trivial. For instance, a system that never loses at tic-tac-toe will have a very simple field-specific mind, because tic-tac-toe is so easy that a very simple set of rules, encoded in a program (or even in a set of wires and switches), is sufficient to play the game. By the same token, a system that can play chess by following the rules, but that loses every game, has only a very simple mind for chess, close to no mind at all. On the other hand, a system that plays world-class chess and defeats human champions most of the time must have a field-specific mind for chess—there is no way it could do it by following some simple set of rules. With this proviso, we can accept that even creatures as simple as worms have field-specific minds that enable them to live and to compete in their particular environments, even if those minds are very simple and emanate from the workings of a very simple brain.
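To make the triviality concrete, here is a minimal sketch (my own illustration, not a program discussed in this book) of a complete tic-tac-toe “mind”: an exhaustive minimax search over the whole game tree fits in a few dozen lines and never loses.

```python
# Minimax player for tic-tac-toe. Boards are 9-character strings
# ('X', 'O', or ' '), indexed 0-8, row by row.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X', 'O', or None."""
    for i, j, k in WIN_LINES:
        if board[i] != ' ' and board[i] == board[j] == board[k]:
            return board[i]
    return None

def minimax(board, player):
    """Best (score, move) for `player`: +1 forced win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None
    moves = [i for i, c in enumerate(board) if c == ' ']
    if not moves:
        return 0, None                     # board full: draw
    other = 'O' if player == 'X' else 'X'
    best_score, best_move = -2, None
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        score = -minimax(child, other)[0]  # opponent's best is our worst
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

# With perfect play on both sides, tic-tac-toe is a draw.
print(minimax(' ' * 9, 'X')[0])   # 0
```

Because the whole game tree has fewer than 9! ≈ 363,000 move sequences, the search is exhaustive, which is exactly why the resulting field-specific mind is trivial.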
Taken together, these definitions suggest an ontology and a hierarchy of minds. A mind A can be viewed as more general than a mind B if A can be used to reason about any field in which B can also be used, and perhaps about other fields too. The human mind can be viewed as the most general mind known to date—a mind that encompasses many field-specific minds and that may exhibit some additional characteristics leading to self-awareness and consciousness. What those additional characteristics are and where they come from are important questions; they will be addressed later in this chapter.
I will now propose a classification of minds in accordance with their origins, their computational supports, and the forms of intelligence they exhibit. I will call a mind synthetic if it was designed and didn’t appear “naturally” through evolution. I will call a mind natural if it has appeared through evolution, such as the human mind and, perhaps, the minds of other animals. I will call a mind digital if it emanates from the workings of a digital computer program. I will call a mind biological if it emanates from the workings of a biological brain. These considerations lead to the taxonomy illustrated in figure 10.1.
Figure 10.1 Digital and biological minds, natural and synthetic.
We are all familiar with natural minds, designed by evolution. Synthetic minds, designed by processes other than evolution to fulfill specific needs, are likely to be developed in the next few decades. They probably will use digital supports, although if developed using the tools of synthetic biology (discussed in chapter 7) they could conceptually be supported by biological systems.
The entries in figure 10.1 may not all be equally intuitive. Almost everyone is familiar with natural, biological minds, the most obvious being the human mind. Many people are also familiar with the concept of synthetic, digital minds designed using principles that have nothing to do with how the brain operates. (“Synthetic intelligences” fall into the latter category.) I have also discussed the possibility that intelligent systems may someday be obtained by simulating the principles of brain development processes in a digital computer. I call such systems “neuromorphic intelligent systems.” Additionally, rather than designing systems inspired by the principles at work in the brain, we may one day have the technology to simulate, or emulate, in minute detail, the behavior of a working human brain. That approach, called whole-brain emulation, corresponds to a natural mind (since it would work exactly as a human mind, which was designed by evolution) with a digital substrate.
It also is possible that we may one day be able to create synthetic minds with a biological substrate. The techniques of synthetic biology and advanced genetic engineering may someday be used to engineer beings with superhuman thinking abilities.
With this ontology of minds, we can now try to answer the central question of this book: What types of digital minds—field-specific or general, natural or synthetic—will, in the coming decades, emerge from the workings of intelligent programs running in digital computers?
It is easy to identify present-day systems that were created to exhibit complex field-specific minds. By accessing large amounts of information, and by using sophisticated algorithms, these systems perform tasks that clearly require advanced field-specific minds. Such systems, developed by large teams of programmers, perform complex tasks that would require significant amounts of intelligence if they were to be performed by humans. I believe that they are the precursors of general-purpose synthetic minds, and that someday they will be recognized as such. As these systems evolve, they probably will become more and more intelligent until they become synthetic intelligences.
Deep Blue is a chess-playing computer developed by IBM. Its predecessor, Deep Thought, had been developed at Carnegie Mellon University. The creators of Deep Thought were hired by IBM and were challenged to continue their quest to build a chess-playing machine that could defeat the human world champion. Deep Blue won its first chess game against a world champion in 1996, when it defeated Garry Kasparov in the first game of a match. At the end of the match, however, Kasparov had defeated Deep Blue by a score of 4 to 2. Deep Blue was then improved. In 1997 it played Kasparov again and became the first computer system to defeat a reigning world champion in a standard chess match. Kasparov never accepted the defeat. Afterward he said that he sometimes had observed deep intelligence and creativity in the machine’s moves, suggesting that human chess players had helped the machine. IBM always denied that any cheating had taken place, noting that the rules had allowed the developers of Deep Blue to “tune” the machine’s program between games and that they had made extensive use of that opportunity. The rematch demanded by Kasparov never took place, and IBM eventually discontinued Deep Blue. Whether Deep Blue was indeed better than the reigning chess champion is, of course, irrelevant. Deep Blue and many other chess computers in existence today can probably beat 99.999999 percent of humans at a game that was once thought to require strong artificial intelligence to be mastered. In many respects, Deep Blue did not play chess the way a human plays, but it can’t be denied that it had a field-specific mind for chess.
There are games more complex than chess. Go, which originated in China more than 2,500 years ago, is played on a board with 19 × 19 positions (figure 10.2). Players take turns placing black and white stones in order to control territories. Once played, the stones are never moved. There are hundreds of possible moves at each turn. Go is very difficult for computers because of the high branching factor and the depth of the search tree. Until recently, no Go-playing program could beat a reasonably good human player, and it was believed that playing Go well would remain beyond the capabilities of computers for a long time. However, in 2016 a team of researchers from a company called Google DeepMind used neural networks trained using deep learning (Silver et al. 2016) to create a program, called AlphaGo, that played the game at the highest level. In January of 2016, AlphaGo defeated the European champion, Fan Hui. Two months later, it defeated the man who was considered the best player in the world, Lee Sedol.
Figure 10.2 The final position of a game of Go, with the territories defined.
When they play chess, Go, or similar games, computers typically perform a search, trying many moves and evaluating the millions of positions that may result from them. Evaluating a position in chess or in Go is difficult, but the number of pieces on the board and their positions can be used to obtain a good estimate of the position value, particularly if large databases of games are available. Although we don’t know exactly how humans play such games, we know that computers use an approach based mostly on brute-force search, trying many possible moves before choosing one. Checkers has been completely solved by computers—that is, all possible combinations of moves have been tried and have been registered in a database, so that the best move in each situation is known to a program that has access to the database. This implies that no human player without access to the database can ever beat such a computer. (A human with access to the database would also play the game perfectly, but no human’s memory would be able to store such a huge amount of data.)
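The search-plus-evaluation scheme described above can be sketched in a few lines. What follows is a hedged illustration, not the code of any actual engine: a negamax search that looks a fixed number of plies ahead and falls back on a static evaluation function at the cutoff, demonstrated on a toy subtraction game (players alternately take 1–3 counters from a pile; whoever takes the last counter wins).

```python
# Depth-limited negamax with a static evaluation at the cutoff -- the shape
# of the search used by game-playing programs. The game plumbing below is a
# toy invented for this sketch.

def negamax(state, depth, player, moves, apply_move, evaluate, terminal_value):
    """Best achievable score for the player to move, `depth` plies ahead."""
    tv = terminal_value(state, player)
    if tv is not None:
        return tv                          # game over: exact value
    if depth == 0:
        return evaluate(state, player)     # cutoff: heuristic estimate
    best = float('-inf')
    for m in moves(state, player):
        child, nxt = apply_move(state, m, player)
        best = max(best, -negamax(child, depth - 1, nxt, moves,
                                  apply_move, evaluate, terminal_value))
    return best

# Toy game: state is the pile size, players are 0 and 1.
def moves(pile, player):          return [t for t in (1, 2, 3) if t <= pile]
def apply_move(pile, t, player):  return pile - t, 1 - player
def terminal_value(pile, player): return -1 if pile == 0 else None
def evaluate(pile, player):       return -1 if pile % 4 == 0 else 1  # heuristic

# A pile of 21 is a win for the player to move (leave a multiple of 4).
print(negamax(21, 6, 0, moves, apply_move, evaluate, terminal_value))   # 1
```

In a chess engine the `evaluate` function would score material and piece placement instead; the structure of the search is the same.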
Games more complex than checkers, such as chess and Go, probably will never be solved exactly, because the number of possible games is simply too large. Although the exact number of possible games of chess isn’t known, it is estimated to exceed 10^120, a number vastly larger than the number of atoms in the universe. The number of possible Go games is probably larger than 10^800. Therefore, it cannot be argued that Deep Blue or AlphaGo plays by brute force (that is, by knowing all the possible positions). Deep Blue definitely had a mind for chess and AlphaGo a mind for Go, albeit minds different from the minds of human players.
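The 10^120 figure traces back to Claude Shannon’s classic back-of-envelope estimate, which is easy to reproduce (the per-move and game-length numbers are rough assumptions, not exact counts): with about 35 legal moves per position and games lasting about 80 plies, the game tree has roughly 35^80 leaves.

```python
import math

# Shannon-style estimate: ~35 legal moves per position, games of ~80 plies,
# so the chess game tree has on the order of 35**80 leaves -- comfortably
# beyond the roughly 10**80 atoms in the observable universe.
exponent = 80 * math.log10(35)
print(int(exponent))   # 123, i.e., about 10**123 possible games
```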
A somewhat less obvious application of artificial intelligence is used by many people almost every day. As of this writing, the World Wide Web is composed of tens of billions of Web pages. Google, the best-known and most widely used search engine, gives everyone access to the information contained in about 50 billion of those pages. By using a technique called indexing, in which the pages containing a certain term are listed together in an index organized a bit like a phone book, Google can retrieve, in a fraction of a second, the pages related to any set of topics chosen by a user. Furthermore, it is able to list the most relevant pages first. It does so by applying a number of clever methods that improved on the original idea of its founders (Brin and Page 1998), the page-rank algorithm.
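The “phone book” organization mentioned above is known as an inverted index. A toy version (my illustration; production search engines are vastly more elaborate) maps each term to the set of pages containing it, so answering a query requires no scan of the pages themselves. The page texts and ids below are made up.

```python
from collections import defaultdict

pages = {
    "p1": "the library of congress is the largest library in the world",
    "p2": "go originated in china more than two thousand years ago",
    "p3": "the game of go is played on a board with black and white stones",
}

# Build the inverted index: term -> set of page ids containing the term.
index = defaultdict(set)
for page_id, text in pages.items():
    for term in text.split():
        index[term].add(page_id)

def search(*terms):
    """Ids of pages containing every query term (intersection of postings)."""
    result = set(pages)
    for t in terms:
        result &= index.get(t, set())
    return sorted(result)

print(search("go"))            # ['p2', 'p3']
print(search("go", "board"))   # ['p3']
```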
The page-rank algorithm, which is at the origin of Google’s success, views the Web as a huge, interconnected set of pages, and deems most likely to be interesting the pages visited more often if a user was randomly traveling this network by following the hyperlinks that connect the pages. Pages and domains with many links pointing to them are, therefore, deemed more interesting. This is why we are more likely to retrieve a page from CNN or BBC than one from some more obscure news source. Many more pages point to CNN (as a source of news or for some other reason) than to other, less-well-known pages. The page-rank algorithm has been, over the years, complemented by many other techniques that contribute to a good ranking of the pages retrieved as results of a query.
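The random-surfer idea can be captured in a few lines. Below is a minimal sketch of the page-rank computation on a hypothetical four-page web (the damping factor d, conventionally 0.85, is the probability that the surfer follows a link rather than jumping to a random page).

```python
# Power-iteration page rank on a made-up four-page web.

links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],     # every page here has at least one outgoing link
}

def pagerank(links, d=0.85, iterations=50):
    n = len(links)
    rank = {p: 1.0 / n for p in links}
    for _ in range(iterations):
        new = {p: (1 - d) / n for p in links}
        for p, outgoing in links.items():
            share = rank[p] / len(outgoing)
            for q in outgoing:
                new[q] += d * share     # p passes its rank along its links
        rank = new
    return rank

ranks = pagerank(links)
print(max(ranks, key=ranks.get))   # c, the page with three incoming links
```

As the text suggests, the page with the most incoming links ends up ranked highest, just as CNN outranks an obscure news source.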
The apparently simple operation of retrieving a set of pages relevant to some query is the result of the application of very sophisticated technology, developed over the years by engineers and researchers in computer science. Many of us are now so accustomed to going to Google and retrieving relevant information on any obscure topic that we may forget that such an operation would have been very slow, if possible at all, only twenty years ago.
We usually don’t think of Google as having a mind of its own, not even a field-specific mind. Let us, however, go back in time to 1985. Only a few years earlier, the TCP/IP protocol, which enabled computers to talk with other computers over long distances, had been developed and deployed. In 1985 one could use computer-to-computer networks to send and receive messages (mainly email and files). But only a few thousand people used it for that purpose, and no one predicted that, only a few decades later, there would be billions of computers and mobile devices interconnected as they are now. However, the essential technology already existed. Computers could already talk to one another. All that was needed to arrive at the Internet we have today was more sophisticated algorithms and programs, faster computers, and faster interconnection networks.
Now imagine that you are back in 1985 and that someone tells you that, thirty years in the future, a computer system will exist that will be able to recover, in a fraction of a second, all the relevant information in the world about a given topic, chosen by you. You are told that such a system will go through all the information on the Internet (the roughly 10^24 bytes stored in all the interconnected computers in the world), understand enough of it to select the information that is relevant to your request, and present the answer to you in a fraction of a second. To put things in perspective, it is useful to remember that a typical book consists of about half a million bytes, that the Library of Congress stores about 150 million books, and that therefore the information available on the Internet corresponds to about 10 billion times the information stored in the Library of Congress. (To be fair, the large majority of the data stored on the Internet corresponds to videos and photos, which are only partially covered by search engines, since these engines index videos and photos mostly on the basis of keywords and tags, not on the basis of their contents. However, that will soon change as computers get better at processing images, and it isn’t really relevant to the question that will be posed next.)
Now, imagine that, back in 1985, you are asked: Is such a system intelligent? Does it have a mind of its own? Do you need intelligence to search this huge amount of information and, from it, retrieve instantly the relevant pieces of data? Does the system need to understand enough of what is written in the documents to retrieve relevant pieces of information and to discard the irrelevant ones?
My guess is that, back in 1985, you would have answered that such a system must be intelligent and must have a mind of its own, at least in some limited way. After all, to answer the complex queries it accepts, it must have some model of the world and it must understand, at least partially, what has been asked.
If you didn’t know, at the time, how queries would be posed to a system such as the one we have now, you might have imagined some sort of written or spoken dialogue. For instance, a user would type some question (perhaps “What is the size of the Library of Congress?”) and would get an answer something like “The Library of Congress is the largest library in the world, with more than 158 million items on approximately 838 miles of bookshelves.” If you try that today, you probably will get back a document that includes exactly the information you were looking for, but it probably will be mixed with some additional information that may or may not be relevant to you. You probably are so accustomed to using a search engine that you don’t even remotely consider that it may have a mind, but in 1985 you may have speculated that it did. There are some excellent reasons for this duality of criteria.
One of the reasons is that search engines are still far from having a perfect interface with the user and from retrieving exactly what the user wants. We talk to them mostly by means of text (even though we now can use a somewhat less than perfect voice interface), and, instead of replying by writing an explicit answer, they provide us with a list of documents in which the answers to our queries can be found. The technologies needed to transform such answers into answers in plain English are probably within reach. Indeed, such capabilities have been demonstrated by other systems. IBM’s Watson is one such system.
Watson uses advanced algorithms for the processing of natural language, automated reasoning, and machine learning to answer questions in specific domains. Using information from Wikipedia and other knowledge sources, including encyclopedias, dictionaries, articles, and books, Watson was able to compete with human champions on the TV game show Jeopardy, beating the top human competitors a number of times. (In the game, what ordinarily would be considered questions are presented as answers, and players must respond with the relevant questions. For the purposes of this discussion, however, the former will be referred to as questions and the latter as answers.)
Watching videos of the Jeopardy competition (which are available on YouTube) is a humbling and sobering experience. A computer, using complex but well-understood technologies, interprets difficult questions posed in natural language and answers them faster and more accurately than skilled human players. Watson’s computer-synthesized voice answers difficult questions with astounding precision, exhibiting knowledge of culture and of historical facts that very few people on Earth (if any) possess.
Yet skeptics were quick to point out that Watson didn’t understand the questions, didn’t understand the answers, and certainly didn’t understand that it had won a game. One of the most outspoken skeptics was John Searle, whose arguments were based on the same ideas that support the Chinese Room experiment (discussed in chapter 5). All Watson does, Searle argued in a paper published in 2011, is manipulate symbols according to a specific set of rules specified by the program. Symbol manipulation, he argued, isn’t the same thing as understanding; therefore, Watson doesn’t understand any of the questions or the answers, which are meaningless symbols to the program. The system cannot understand the questions, because it has no way to go from symbols to meaning. You may ask “Then how does the brain do that?” Searle’s answer is that a human brain, by simply operating, causes consciousness, understanding, and all that comes with consciousness and understanding.
I do not agree with Searle and the other skeptics when they say that Watson didn’t understand the questions or the answers it provided. I believe that understanding is not a magical, all-or-nothing process, a light that is either on or off. Understanding is gradual. It results from creating an internal model of the world that is relevant to the problem at hand. A system with no understanding could not answer, as accurately as Watson did, the complex questions posed in a game of Jeopardy. The model of the world in Watson’s mind was enough to enable it to answer, accurately, a large majority of the questions. Some it didn’t understand at all; others it understood partially; many—probably most—it understood completely.
The question, then, is not whether Watson is intelligent. Certainly it is intelligent enough to answer many complex questions, and if it can do so it must have a mind of its own—a field-specific mind for the particular game called Jeopardy. Did Watson understand that it had won the game? It probably did, since its behavior was directed at winning the game.
Was Watson happy for having won? Was it conscious of having won? Probably not, since consciousness of having won would have required a deeper understanding of the game, of the world, and of Watson’s role in the world. As far as I know, Watson doesn’t have internal models for those concepts, and doesn’t have a sense of self, because internal models and a sense of self were not included in its design.
Watson has been used in medical diagnosis and in oil prospecting, domains in which its ability to answer questions is useful. Its ability to answer questions posed in natural language, coupled with its evidence-based learning capabilities, enables it to work as a clinical decision-support system. It can be used to aid physicians in finding the best treatment for a patient. Once a physician has posed a query to the system describing symptoms and other factors, Watson parses the input, looks for the most important pieces of information, then finds facts relevant to the problem at hand, including sources of data and the patient’s medical and hereditary history. In oil prospecting, Watson can be an interface between geologists and the numerical algorithms that perform the complex signal-processing tasks used to determine the structure of potential underground oil fields.
Deep Blue, AlphaGo, Google, and Watson are examples of systems designed, in top-down fashion, to exhibit intelligence in specific fields. Deep Blue and AlphaGo excel at complex board games, Google retrieves information from the Web in response to a query, and Watson answers questions in specific domains in natural language. Other systems likely to be designed and deployed in the next decade are self-driving vehicles with a model of the world good enough to steer unassisted through roads and streets, intelligent personal assistants that will help us manage our daily tasks, and sales agents able to talk to customers in plain English. All these abilities have, so far, been unique to human beings, but they will certainly be mastered by machines sometime in the next decade.
Systems designed with the purpose of solving specific problems, such as those mentioned above, are likely to have field-specific minds sophisticated enough to enable them to become partners in conversations in their specific fields, but aren’t likely to have the properties that would lead to general-purpose minds. But it is possible that, if many field-specific minds are put together, more general systems will become available. The next two decades will probably see the emergence of computerized personal assistants that can master such tasks as answering questions, retrieving information from the Web, taking care of personal agendas, scheduling meetings, and taking care of the shopping and provisioning of a house. The technologies required for those tasks will certainly be available in the next twenty years, and there will be a demand for such personal assistants (perhaps to be made available through an interface on a cell phone or a laptop computer).
Such systems will have more general minds, but they will still be limited to a small number of field-specific environments. To perform effectively they will have to be familiar with many specifics of human life, and to respond efficiently to a variety of requests they will have to maintain memory of past events.
I anticipate that we will view such a system as having a mind of its own, perhaps a restricted and somewhat simplified mind. We may still not consider such a system to have consciousness, personality, emotions, and desires. However, it is very likely that those characteristics are largely in the eye of the beholder. Even simple systems that mimic animals’ behaviors and emotions can inspire emotions and a perception of consciousness, as becomes apparent when one reads the description of the Mark III Beast in Terrel Miedaner’s 1978 novel The Soul of Anna Klane (reprinted in Hofstadter and Dennett’s book The Mind’s I). If Watson or Google were modified to interact more deeply with the emotions of their users, it is very possible that our perceptions of their personalities, emotions, and desires would change profoundly.
Kevin Kelly, in his insightful book What Technology Wants, uses the term the technium to designate the vast interconnected network of computers, programs, information, devices, tools, cities, vehicles, and other technological implements already present in the world. Kelly argues that the technium is already an assembly of billions of minds that we don’t recognize as such simply because we have a “chauvinistic bias” and refuse to accept as mindful anything other than a human being.
To Kelly, the technium is the inevitable next step in the evolution of complexity, a process that has been taking place on Earth for more than 4 billion years and in the universe for much longer, since before Earth existed. In his view, the technium is simultaneously an autonomous entity, with its own agenda, and the future of evolution. It is a new way to develop more and more powerful minds, in the process creating more complexity and opening more possibilities. It is indeed possible that we are blind, or chauvinistic, or both, when we fail to recognize the Internet as an ever-present mindful entity, with its billions of eyes, ears, and brains. I don’t know whether we will ever change our view of the synthetic minds we created and developed.
There are, however, other ways to design intelligent systems. One way is by drawing more inspiration from the human brain, the system that supports the only general mind we know. Intelligent systems designed in such a way will raise much more complex questions about what a mind is, and about the difference between synthetic minds and natural minds. If we design systems based on the same principles that are at work in human minds, we probably will then be more willing to accept that they have minds of their own and are conscious. After all, if one wants to create an intelligent and conscious system, the most obvious way is to copy the one system we know to be intelligent and conscious: the human mind. There are two ways to make such a copy.
One way would be to directly copy a working human mind. This could be done by copying, detail by detail, a human brain, the physical system that supports the human mind. Copying all the details of a working brain would be hard work, but the result, if successful, would certainly deliver what was desired. After all, if the resulting system were a complete and fully operational working copy of the original, and its behavior were indistinguishable from the original, certainly it would be intelligent. Because such a mind would be a piece-by-piece copy of an existing biological mind, it would behave as a natural mind behaves, even if it was supported by a digital computer. This mind design process, usually called mind uploading or whole-brain emulation, is the topic of the next section.
Another way to design a human-like mind is to reproduce, rather than the details of a working brain, the mechanisms that led to the creation of such a brain. This approach may be somewhat less challenging, and may be more accessible with technology likely to be available in the near future. The idea here is to reproduce in a computer not the details of a working brain, but the principles and the mechanisms that lead to the creation of the structures in a human brain, and to use these principles to create synthetic brains.
In chapter 8 I described the two main factors that are at work when a brain assembles itself: genetic encoding and brain plasticity. With the development of our knowledge of those two mechanisms, it may one day be possible to reproduce, in a program, the way a brain organizes itself.
Many researchers believe that a complex system such as a brain results, not from a very complex recipe with precisely tuned parameters and detailed connectivity instructions, but from a general mechanism that creates complexity from simple local interactions, subject to some very general rules that work within a wide range of parameters (Kelso 1997). Significant evidence obtained from firing patterns in the brain, and particularly in the cortex (Beggs and Plenz 2003), supports the idea that the brain organizes itself to be in some sort of critical state such that exactly the right amount of firing optimizes information transmission, while avoiding both activity decay and runaway network excitation.
Information about general mathematical principles of self-organization and detailed knowledge of the biological and biochemical mechanisms that control brain development may eventually enable us to simulate, in a computer, the processes involved in brain development. The simulations will have to be extensively compared against data obtained from both simple and complex organisms. If successful, the simulation should derive connectivity structures and patterns of cell activity that are, in all observable aspects, equivalent to those observed in real brains. If the physical and mathematical principles are sound, and if enough knowledge is available with which to tune the parameters, it is reasonable to expect that the resulting self-organized brain will behave, in most if not all respects, like a real brain. Such a simulation need not be performed at the level of neurons, since higher-level structures may be involved in the process of brain organization. In fact, neurons may not even need to be involved in the simulation at all, because they may not represent the right level of abstraction.
Of course, no human-like brain will develop normally if it doesn’t receive a complete and varied set of stimuli. Those stimuli, which will have to enter the system through simulated senses, will play an essential part in the development of the right brain structures. The whole process of designing a brain by this route will take time, since the mechanisms that lead to brain organization take years to unfold. In this process, interaction with the real world is one limiting factor. In principle, nothing prevents the simulation from running many times faster than a real brain develops, and the first approaches will probably exploit this possibility, compressing into a few days development processes that take months or years in nature. However, as the simulation becomes more precise and more complex, there will be a need to provide the simulated brain with realistic input patterns, something that will require real-time interaction with the physical world. If the aim is to obtain, by this route, a working simulation of the brain of an infant monkey, the simulation will have to interact with other monkeys, real or simulated. If the aim is to simulate the full development of a human brain, interactions with humans and other agents will be required. This means that the complete simulation will have to run, mostly, in real time: it will take a few years to develop the simulation of a toddler and about twenty years to develop the simulation of a young adult.
Many complex questions are raised by the possibility that a human brain can be “developed” or “grown” in this way from first principles, experimental data, and real-world stimuli. If such an approach eventually leads to a system that thinks, understands speech, perceives the world, and interacts with humans, then it probably will be able to pass, with flying colors, any sort of unrestricted Turing Test. Even if interaction is always performed through a simulated world (one that never existed physically), this fact will not make the existence of an intelligent and conscious system less real or its emotions less genuine.
Such a mind would be synthetic, but its workings would be inspired by those of a natural mind, making it something of a hybrid of the two (see figure 10.1). If such a system is developed, we will have to relate to the first mind that is, in some respects, similar to our own, but that doesn’t have a body and wasn’t born of a human. His or her brain will perform computations similar to those performed by human brains; he or she will have feelings, and will have the memories of a life lived in a world that exists only inside the memory of a computer. His or her feelings will presumably be no less real than ours, the emotions as vivid as ours, and the desire to live, probably, as strong as any of ours. For the first time, we would have created a synthetic mind modeled after a natural human mind.
As noted above, there is another way to create a human-like mind: directly copying the working brain of a living human. Such a process would yield a digital mind that would also be a natural mind, because it would work in exactly the same way as a biological human mind. It would, however, be supported by a brain emulator running in a computer, rather than by the activity of a biological brain. This is, by far, the most ambitious and technologically challenging method, because it would require technology that doesn’t yet exist and may indeed never exist. It also raises the most complex set of questions, because it changes completely the paradigm of what intelligence is and what it means to be human.
To create a digital mind from an existing human mind, it is necessary to reverse engineer a working brain in great detail. Conceptually there are many ways that might be done, but all of them require very detailed information about the structure and the patterns of neural activity of a specific human brain—information reliable and precise enough to make it possible to emulate the behavior of that brain in a computer. This approach, known as whole-brain emulation (WBE) or mind uploading, is based on the idea that complete emulation of a brain based on detailed information is possible. That whole-brain emulation will one day be possible has been proposed by a number of authors, including Ray Kurzweil and Nick Bostrom, but only recently have we begun to understand what it will take to attempt such a feat.
In theory, mind uploading might be accomplished by a number of methods, but two methods at different ends of the spectrum deserve special mention. One would entail copying neural structures wholesale; the other would entail gradual replacement of neurons. In the first method, mind uploading would be achieved by scanning and identifying the relevant structures in a brain, then copying that information and storing it in a computer system, which would use it to perform brain emulation. The second method would be more gradual. The original brain would keep operating, but its computational support would be progressively replaced.
The first approach, wholesale copying of a working brain, requires mainly enormous improvements in existing technologies coupled with the development of some new ones. In chapter 9 I discussed some techniques that could be used to identify detailed structures in human brains; in chapter 8 I discussed techniques that could be used to simulate sections of brain tissue if given detailed models of the neurons, their interconnectivity patterns, and other relevant biochemical information. Existing knowledge of the structure of brain tissue can, in principle, be used to simulate a human brain very accurately (even if slowly) if given a detailed description of each neuron and of how it interacts with other neurons. The challenge lies in the fact that no existing technology is sufficiently advanced to obtain the detailed structural information required to supply a brain emulator with the required level of detail.
At present, a number of sophisticated imaging methods can be used to retrieve information about brain structures and neuron activity. In chapter 9, we considered a number of techniques, among them PET and MRI, that can be used to obtain functional information about how the brain is wired and about the activity levels of the neurons involved in specific tasks. However, even the most powerful techniques in use today can only obtain, non-invasively, information about structures that are no smaller than a cubic millimeter, which is the size of the smallest voxel they can resolve. Within a cubic millimeter of cortex there are between 50,000 and 100,000 neurons, each with hundreds or thousands of synapses that connect with other neurons, for a total number of synapses that is on the order of a billion per cubic millimeter, each synapse ranging in size from 20 to 200 nanometers.
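The densities quoted above can be checked with a quick back-of-envelope calculation. The synapses-per-neuron figure below is an assumed round number in the “hundreds or thousands” range, not one given in the text:

```python
# Back-of-envelope check of the densities quoted in the text.
neurons_per_mm3 = 75_000        # midpoint of the 50,000-100,000 range
synapses_per_neuron = 10_000    # assumed: "hundreds or thousands" of synapses

synapses_per_mm3 = neurons_per_mm3 * synapses_per_neuron
print(f"synapses per cubic millimeter: ~{synapses_per_mm3:.1e}")
# ~7.5e+08, i.e. on the order of a billion, as stated in the text
```

A single 1 mm³ voxel, the best a non-invasive scan can resolve, thus lumps together tens of thousands of neurons and close to a billion synapses.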
Therefore, present-day imaging techniques cannot be used to obtain detailed information about synapse location and neuron connectivity in live brains. The actual resolution obtained using MRI varies with the specific technique used, but resolutions as high as a billion voxels per cubic millimeter have been reported, which corresponds to voxels 1 µm on a side (Glover and Mansfield 2002). There are, however, many obstacles that have to be surmounted to achieve such high resolution, and it is not clear whether such techniques could ever be used to image large sections of a working brain.
Resolving individual synapses requires microscopy techniques that can now be used to image slices of brain tissue in unprecedented detail. These techniques destroy the tissue, and so can be used only on dead brains. In chapter 7 we saw how the brain of the worm C. elegans was painstakingly sliced and photographed, using electron microscopy, to obtain the complete wiring diagram of its 302 neurons, in an effort that took 12 years to complete. If the same methodology were applied to a human brain, scaling up only by the much larger number of neurons and synapses, it would require a time commensurate with the age of the universe.
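A naive linear scaling of that effort, by neuron count alone, already supports the claim. The human neuron count used below (roughly 86 billion) is a commonly cited estimate, not a figure from the text, and ignoring the far higher synapse density of the human brain makes this an underestimate:

```python
# Naive scaling of the 12-year, 302-neuron C. elegans effort to a human brain,
# counting only the larger number of neurons (synapse density is ignored,
# which makes this an underestimate).
celegans_neurons = 302
celegans_years = 12
human_neurons = 8.6e10          # assumed: commonly cited ~86 billion neurons

scaled_years = celegans_years * human_neurons / celegans_neurons
age_of_universe_years = 1.38e10

print(f"scaled effort: ~{scaled_years:.1e} years")      # ~3.4e+09 years
print(f"age of universe: ~{age_of_universe_years:.1e} years")
```

Even this crude lower bound lands within a factor of a few of the age of the universe, which is why only massive automation could make the approach viable.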
However, technology has evolved, and it may one day become possible to apply a similar process effectively to a whole human brain. The process done mostly by hand in the C. elegans study can now be almost completely automated, with robots and computers doing all the brain slicing, electron microscopy, and three-dimensional reconstruction of neurons. Small volumes of neocortex have already been reconstructed in great detail, as was noted in chapter 9. With the right technology, such a process could conceivably be applied to a whole human brain, yielding a very faithful copy of the original organ (which would, regrettably, be destroyed in the process). The level of detail gathered in this way would probably not be enough to emulate accurately the brain that was sliced and imaged, since significant uncertainty in a number of parameters influencing neuron behavior might remain, even at this level of resolution. However, it might be possible to fill in the missing information by combining the structural data with data about the real-time behavior of the brain (obtained, of course, while it was still operating). Optogenetics, a technology to which I alluded briefly in chapter 9, could be used to provide very useful information about detailed brain behavior.
It is clear that obtaining information about working brains with a level of detail sufficient to feed an accurate brain emulator is beyond the reach of existing technologies. On the other hand, structural information, obtained using microscopy, can be very detailed and could be used, at least in principle, to feed a brain emulator, even though much information would have to be obtained by indirect means. State-of-the-art techniques are still limited to small volumes, on the order of a fraction of a cubic millimeter, but we can expect that, with improved algorithms and better microscopy technologies, the whole brain of a mouse could be reverse engineered in great detail within a few decades.
However, it isn’t clear that structural information obtained using existing microscopy techniques, such as serial block-face electron microscopy, has enough detail to feed an emulator that would faithfully reproduce the original system. The difficulty lies in the fact that the behavior of the system is likely to be sensitive to a number of parameters that are hard to estimate from purely structural information, such as the properties of membranes and synapses and the chemical composition of the environment outside the neurons.
Ideally, detailed structural information would be combined with in vivo functional information. Such a combination of information might make it possible to tune the emulator parameters in order to reproduce, in silico, the behavior of the original system. Although many methods for combining these two types of information are under development (Ragan et al. 2012), combining functional information from a live brain and detailed structural information obtained by microscopy of a sliced brain remains a challenge. It is therefore not yet possible to use the significant amount of knowledge about the behavior of individual neurons and groups of neurons to construct a complete model of a brain, because the requisite information about neuron structure, properties, and connectivity is not yet available.
There is, in fact, a huge gap in our understanding of brain processes. On the one hand, significant amounts of information at the macro level are being collected and organized by many research projects. That information can be used to find out which regions of the brain are involved in specific processes, but cannot be used to understand in detail the behavior of single neurons or of small groups of neurons. It may, however, provide some information about the way the brain organizes itself at the level of macro-structures containing millions of neurons. On the other hand, researchers understand in significant detail many small-scale processes and structures that occur in individual neurons, including synapse plasticity and membrane firing behavior.
There is no obvious way to bridge this huge gap. Because of physical limitations, it isn’t likely that existing techniques, by themselves, can be used to determine all the required information, without which there is no way to build an accurate brain emulator. Nanotechnologies and other techniques may be usable to bridge the gap, but they will not be used in healthy human brains before the technology is mature enough to guarantee results; even then, they will raise significant moral questions. Ongoing projects that use animal models, including OpenWorm, the Open Connectome Project, and the Human Brain Project, will shed some light on the feasibility of different approaches, but finding an exact way to proceed will remain the focus of many projects in coming decades.
I cannot tell you, because I don’t know, how this challenge will be met. Many teams of researchers, all over the world, are working on techniques aimed at better understanding the brain. No single technique or combination of techniques has so far delivered on the promise of being able to obtain the detailed information that could be used to perform whole-brain emulation. One may even be tempted to argue that, owing to physical limitations and the complexity of the challenges involved, we will never have information detailed enough to enable us to build a faithful emulator.
In order for mind uploading to become possible, a number of technologies would have to advance enormously, including advanced microscopy, computational techniques for three-dimensional reconstruction of brain tissue, and multi-level neural modeling and simulation. Enormous advances in processing power would also be necessary. A workshop on whole-brain emulation held at the Future of Humanity Institute in Oxford in 2007 produced a “road map” (Sandberg and Bostrom 2008) that discusses in detail the technologies required and the appropriate level of emulation. In theory, brain emulation could be performed at a variety of levels, or scales: quantum, molecular, synaptic, neuronal, or neuron-population. Depending on the exact mechanisms used by brains, emulation might have to be performed at a more or less detailed level, and the level chosen has a significant effect on the computational resources required. Quantum-level or molecular-level emulation will probably be forever prohibitive (barring extraordinary advances in quantum computing). Higher-level emulation is computationally more efficient, but may miss effects that are essential to brain behavior.
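To see why the level of emulation dominates the computational cost, here is an illustrative estimate for a spiking, neuron-level emulation. Every figure below is an order-of-magnitude assumption chosen for illustration, not a number taken from the road map:

```python
# Illustrative cost estimate for neuron-level (spiking) emulation.
# All figures are order-of-magnitude assumptions, for illustration only.
neurons = 8.6e10                # ~86 billion neurons
synapses_per_neuron = 1e4       # ~10^4 synapses per neuron
mean_firing_rate_hz = 10        # average spikes per neuron per second
ops_per_synaptic_event = 10     # arithmetic per spike arriving at a synapse

ops_per_second = (neurons * synapses_per_neuron
                  * mean_firing_rate_hz * ops_per_synaptic_event)
print(f"~{ops_per_second:.0e} operations per second")   # ~9e+16, i.e. ~10^17

# Dropping to a finer level (compartmental, molecular, quantum) multiplies
# this figure by many orders of magnitude per step down in scale.
```

Roughly 10^17 operations per second is within reach of present-day supercomputers; it is the finer-grained levels, if they turn out to be necessary, that push the cost toward the prohibitive.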
The road map also includes a number of milestones that could be used to measure how far in the future whole-brain emulation is. It concludes by estimating, perhaps somewhat optimistically, that insufficient computational power is the most serious impediment to achieving whole-brain emulation by the middle of the twenty-first century. That conclusion, however, rests on the assumption that all the supporting technologies would have advanced enough by that time, something that is far from widely agreed upon.
One important argument presented in the road map is that whole-brain emulation doesn’t really require the creation of yet unknown technologies, but only scaling up (and vast improvement) of already-existing ones. After all, we already have the technologies to reverse engineer small blocks of brain tissue, the models that enable us to simulate neural tissue, and the simulators that use these models to perform simulations on a relatively small scale. All that is required, now, is to improve these technologies, so that we can apply them first to simple invertebrates, like C. elegans, then to larger and larger animals, including progressively more complex mammals, until we are finally able to emulate a complete human brain. None of these developments requires conceptually new technologies, although all of them require enormous advances in existing technologies.
The second approach to mind uploading, gradual replacement of neurons in a working brain, requires technologies even more complex and distant than the ones required to perform a wholesale copy. One such technology, which is still in its infancy but which may come to play an important role in the future, is nanotechnology.
In his influential 1986 book Engines of Creation, Eric Drexler popularized the idea that molecular-size machines could be assembled in large numbers and used to perform all sorts of tasks that are now impossible. The idea may have originated with Richard Feynman’s lecture “There’s Plenty of Room at the Bottom,” given in 1959 at a meeting of the American Physical Society. In that lecture, Feynman addressed the possibility of direct manipulation of individual atoms to create materials and machines with unprecedented characteristics.
The ideas of the pioneers mentioned above have been developed by the scientific community in many different directions. Some researchers have proposed the development of techniques for constructing nanodevices that would link to the nervous system and obtain detailed information about it. Swarms of nanomachines could then be used to detect nerve impulses and send the information to a computer. Ultimately, nanomachines could be used to identify specific neurons and, progressively, replace them with arrays of other nanomachines, which could then use the local environmental conditions, within the operating brain, to adapt their behavior. These nanomachines would have to be powered by energy obtained from electromagnetic, sonic, chemical, or biological sources, and proposals for these different approaches have already been made.
Nanotechnology has seen many developments in recent decades. Many types of nanoparticles with diameters of only a few nanometers have been created in the laboratory, as have carbon nanotubes with diameters as small as one nanometer, nano-rotors, and nano-cantilevers. Chips that sort and manipulate individual blood cells have been developed and will be deployed soon, making it possible to identify and remove cancer cells from blood (Laureyn et al. 2007). However, the technologies necessary to design swarms of autonomous nanorobots, which could perform the complex tasks required to instrument a living brain or to replace specific neurons, do not yet exist and will probably take many decades, or even centuries, to develop.
We are still a long way from the world of Neal Stephenson’s 1995 novel The Diamond Age, in which swarms of nanorobots are used to interconnect human brains into a collective mind, but there is no obvious reason why such a technology should be forever impossible.
One of the most interesting thought experiments related to this question was proposed by Arnold Zuboff in “The Story of a Brain,” included in the book The Mind’s I (Hofstadter and Dennett 2006). Zuboff imagines an experiment in which the brain of a man who died of a disease that hadn’t affected his brain is put in a vat and fed with stimuli that mimic those he would have received had he been alive. As the experiment evolves and more teams get involved, the brain is split several times into ever smaller pieces, and eventually into individual neurons, all of them connected by high-speed communication equipment that makes sure the right stimuli reach every neuron. In the end, all that remains of the original brain is a vast network of electronic equipment, providing inputs to neurons that have nothing to do with the original brain and that could be, themselves, simulated. And yet, the reader is led to conclude, the original personality of the person whose brain was put into the vat is left untouched, his lifetime experiences unblemished, his feelings unmodified.
No matter which approach is used, if an emulation of a human brain can, one day, be run in a computer, we will be forced to face a number of very difficult questions. What makes us human is the processing of information that enables us to sense the world, to think, to feel, and to express our feelings. The physical system that supports this information-processing ability can be based on either biological or electronic digital devices. It isn’t difficult to imagine that one small part of the brain, perhaps the optic nerve, can be replaced without affecting the subjective experience of the brain’s owner. It is more difficult to imagine that the whole brain can be replaced by a digital computer running very detailed and precise emulation without affecting the subjective experience of the owner.
It is possible that someday a supercomputer will be able to run a complete emulation of a person’s brain. For the emulation to be realistic, appropriate inputs will have to be fed into the sensory systems, including the vision pathways, the auditory nerves, and perhaps the neural pathways that convey other senses. In the same way, outputs of the brain that activate muscles should be translated into appropriate actions. It isn’t likely that a faithful emulation of a complete brain can be accomplished without these components, because realistic stimuli (without which no brain can work properly) would be missing.
Greg Egan, in his 2010 book Zendegi, imagines a technology, not too far in the future, by which the brain behaviors of a terminally ill man, Martin, are scanned using MRI and used to create a synthetic agent that was expected to be able to interact with his son through a game-playing platform. In the end, the project fails because the synthetic agent created—the virtual Martin—doesn’t have enough of Martin’s personality to be able to replace him as a father to his son in the virtual environment of the game. The virtual Martin created from fragments of Martin’s brain behavior is too far from human, and the researchers recognize that uploading the mind of a person into virtual reality is not yet possible. Despite the negative conclusion, the idea raises interesting questions: When will we be sure the copied behaviors are good enough? How do you test whether a virtual copy of a human being is good enough to replace, even in some limited domain, the original? What do you do with copies that aren’t good enough?
It is safe to say no one will be able to understand in all its details a full-blown emulation of a working brain, if one ever comes into existence. Any decisions to be made regarding the future of such a project will have to take into consideration that the information processing taking place in such an emulation is the same as the information processing that goes on in a living brain, with all the moral and philosophical consequences that implies.
I am not underestimating the technical challenges involved in such a project. We are not talking about simulating a small subsystem of the brain, but about emulating a full brain, with its complete load of memories, experiences, reactions, and emotions. We are very far from accomplishing such a feat, and it is not likely that anyone alive today will live to see it. However, that isn’t the same thing as saying that it will never be possible. In fact, I believe that it will one day be accomplished.
The possibility that such a technology may one day exist, many years or even centuries in the future, should make us aware of its philosophical, social, and economic consequences. If a brain emulator can ever be built, then it is in the nature of technology that many more will soon be built.
We are still a long way from having any of the technologies required to perform whole-brain emulation. I have, however, a number of reasons to believe that we will be able to overcome this challenge. One of them is simply the intuition that technological development in this area will continue to accelerate. The exponential nature of technology tends to surprise us: once a technology takes off, the results are almost always surprising. The same will certainly be true of the many technologies now being used to understand the brain, many of them developed in the past few decades. Though no single technology may suffice, by itself, to derive all the information required to reconstruct a working brain, some combination of technologies may well be powerful enough.
Other reasons to believe that we will, one day, be able to emulate whole brains are related to the enormous economic value of such a technology. Some economists believe that world GDP, which now doubles every 15 years or so, could come to double every few weeks (Hanson 2008). A large part of those gains would come from the creation of digital minds in various forms.
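The gap between those two doubling times is easy to underestimate. A quick calculation of the implied annual growth makes it concrete (reading “a few weeks” as four weeks is an assumption on my part):

```python
# Annual growth implied by a given doubling time, via 2^(1/T).
def annual_growth_factor(doubling_time_years):
    """Growth factor per year implied by a doubling time in years."""
    return 2 ** (1 / doubling_time_years)

current = annual_growth_factor(15)      # GDP doubling every 15 years
fast = annual_growth_factor(4 / 52)     # doubling every ~4 weeks (assumed)

print(f"current growth: ~{(current - 1) * 100:.1f}% per year")  # ~4.7% per year
print(f"fast growth: ~{fast:.0f}x per year")                    # ~8192x per year
```

A shift from under 5 percent per year to a multiplication by thousands per year is less a faster economy than a qualitatively different one, which is precisely Hanson’s point.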
The ability to emulate a working brain accurately will create such radical changes in our perspective of the world and in our societies that all other challenges we are currently facing will become, in a way, secondary. It is therefore reasonable to believe that more and more resources will be committed to the development of such a technology, making it more likely that, eventually, all existing barriers will be overcome.
Should complex and very general artificial minds come into existence, we will have to address in a clear way the important question of whether they are conscious.
We all seem to know very well what it means to be conscious. We become conscious when we wake up in the morning, remain conscious during waking hours, and lose consciousness again when we go to sleep at night. There is an uninterrupted flow of consciousness that, with the exception of sleeping periods, connects the person you are now with the person you were many years ago and even with the person you were at birth, although most people cannot track down their actual memories that far back. If pressed to describe how consciousness works, most people will fall into one of two main groups: the dualists and the monists.
Dualists believe there are two different realms that define us: the physical realm, well studied and understood by the laws of physics, and the non-physical realm, in which our selves exist. Our essence—our soul, if you want—exists in the non-physical realm, and it interacts with and controls our physical body through some yet-unexplained mechanism. Most religions, including Christianity, Islam, and Hinduism, are based on dualist theories. Buddhism is something of an exception, since it holds that human beings have no permanent self, although different schools of Buddhism, including the Tibetan one, have different ideas about exactly what characteristics of the self continue to exist after death. Religions do not have a monopoly on dualism—the popular New Age movement also involves theories about the non-physical powers of the mind.
The best-known dualist theory is attributable to René Descartes. In Meditationes de Prima Philosophia (1641), he proposes the idea (now known as Cartesian dualism) that the mind and the brain are two different things, and that, whereas the mind has no physical substance, the body, controlled by the brain, is physical and follows the laws of physics. Descartes believed that the mind and the body interacted by means of the pineal gland. Notwithstanding Descartes’ influence on modern philosophy, there is no evidence whatsoever of the existence of a non-physical entity that is the seat of the soul, nor is there evidence that it interacts with the body through the small pineal gland.
Cartesian dualism was a response to Thomas Hobbes’ materialist critique of the human person. Hobbes, whose mechanist view was discussed in chapter 2, argued that all of human experience comes from biological processes contained within the body. His views became prevalent in the scientific community in the nineteenth century and remain prevalent today. Indeed, today few scientists see themselves as dualists. If you have never thought about this yourself, a number of enlightening thought experiments can help you decide whether you are a dualist or a monist.
Dualists, for instance, believe that zombies can exist, at least conceptually. For the purposes of this discussion, a zombie behaves exactly as a normal person does but is not conscious. If you believe that a zombie could be created (for instance, by assembling, atom by atom, a human being copied from a dead person, or by some other method) and you believe that, even though it is not a person, its behavior would be, in all aspects, indistinguishable from the behavior of a real person, you probably are a dualist. Such a zombie would talk, walk, move, and behave just like a person with a mind and a soul, but would not be a conscious entity; it would merely be mimicking the behavior of a human.
To behave like a real person, a zombie would have to have emotions, self-awareness, and free will. It would have to be able to make decisions based on its own state of mind and its own history, and those decisions would have to be indistinguishable from those of a real human. Raymond Smullyan’s story “An Unfortunate Dualist” (included in Smullyan 1980) illustrates the paradox that arises from believing in dualism. In the story, a dualist who wants to commit suicide but doesn’t want to make his friends unhappy learns of a wonderful new drug that annihilates the soul but leaves the body working exactly as before. He decides to take the drug the very next morning. Unbeknownst to him, a friend who knows of his wishes injects him with the drug during the night. The next morning, the body wakes up without a soul, goes to the drugstore, takes the drug (again), and concludes angrily that the drug has no discernible effect.
We are forced to conclude that a creature that behaves exactly like a human being, exhibits the same range of emotions, and makes decisions in exactly the same way has to be conscious and have a mind of its own—or, alternatively, that no one is really conscious and that consciousness is an illusion.
A counter-argument based on another thought experiment was proposed by Donald Davidson (1987). Suppose you go hiking in a swamp and you are struck and killed by a lightning bolt. At the same time, nearby in the swamp, another lightning bolt spontaneously rearranges a bunch of different molecules in such a way that, entirely by chance, your body is recreated just as it was at the moment of your death. This being, whom Davidson calls Swampman, is structurally identical to you and will, presumably, behave exactly as you would have behaved had you not died. Davidson holds that there would nevertheless be a significant difference, though one that no outside observer could detect. Swampman would appear to recognize your friends, but he wouldn't actually recognize them, never having seen them before. Swampman would be no more than a zombie. This reasoning, which to me is absurd, is a direct consequence of a dualist view of the consciousness phenomenon, not much different from the view Descartes proposed many centuries ago.
Monists, on the other hand, don’t believe in the existence of dual realities. They believe that the obvious absurdity of the concept of zombies shows that dualism is just plainly wrong. The term monism was first used by Christian Wolff, in the eighteenth century, to designate the position that everything is either mental (idealism) or physical (materialism) in order to address the difficulties present in dualist theories. (Here I will not discuss idealism, which is at significant odds with the way present-day science addresses and studies physical reality.)
Either implicitly or explicitly, present-day scientists are materialists at heart, believing that there is only one physical reality that generates every observable phenomenon, be it consciousness, free will, or the concept of self.
Even though you may reject dualism, you may still like to think there is an inner self constantly paying attention to some specific aspects of what is going on around you. You may be driving a car, with all the processing it requires, but your conscious mind may be somewhere else, perhaps thinking about your coming vacation or your impending deliverable. The inner self maintains a constant stream of attention, during every waking hour, and constructs an uninterrupted stream of consciousness. Everything works as if there is a small entity within each one of us that looks at everything our senses bring in but focuses only on some particular aspects of this input: the aspects that become the stream of consciousness. The problem with this view is that, as far as we know, there is no such entity that looks at the inputs, makes decisions of its own free will, and acts on those decisions. There is no part of the brain that, if excised, keeps everything working the same but removes consciousness and free will. There isn't even any detectable pattern of activity that plays this role. What exists, in the brain, are patterns of activity that respond to specific inputs and generate sequences of planned actions.
Benjamin Libet’s famous experiments (see Libet et al. 1983) have shown that our own perception of conscious decisions to act deceives us. Libet asked subjects to spontaneously and deliberately make a specific movement with one wrist. He then timed the exact moment of the action, the moment a change in brain activity patterns took place, and the moment the subjects reported having decided to move their wrist. The surprising result was not that the decision to act came before the action itself (by about 200 milliseconds), but that there was a change in brain activity in the motor cortex (the part of the cortex that controls movements) about 350 milliseconds before the decision to act was reported. The results of this experiment, repeated and confirmed in a number of different forms by other teams, suggest that the perceived moment of a conscious decision occurs a significant fraction of a second after the brain sets the decision in motion. So, instead of the conscious decision being the trigger of the action, the decision seems to be an after-the-fact justification for the movement. By itself this result would not prove there is no conscious center of decision in charge of all deliberate actions, but it certainly provides evidence that consciousness is deceptive in some respects. It is consistent with this result to view consciousness as nothing more than the perception we have of our own selves and the way our actions change the environment we live in.
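The arithmetic of Libet’s timeline can be made explicit in a short sketch. The figures are the approximate ones quoted above; the variable names are mine, introduced only for illustration:

```python
# Approximate timeline of Libet's experiment, in milliseconds,
# measured relative to the moment of the wrist movement (t = 0).
action = 0
reported_decision = action - 200           # decision reported ~200 ms before the action
readiness_potential = reported_decision - 350  # motor-cortex activity ~350 ms before the report

# The brain activity therefore precedes the action itself by about half a second,
# and precedes the *reported* conscious decision by 350 ms.
print(readiness_potential, reported_decision, action)
```

The ordering, not the exact numbers, is what matters: the readiness potential comes first, the reported decision second, the action last.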
Many influential thinkers view the phenomenon of consciousness as an illusion, an attempt made by our brains to make sense of a stream of events perceived or caused by different parts of the brain acting independently (Nørretranders 1991). Dennett, in his book Consciousness Explained (1991), proposed the “multiple drafts” model for consciousness, which is based on this idea. In this model, there are, at any given time, various events occurring in different places in the brain. The brain itself is no more than a “bundle of semi-independent agencies” fashioned by millions of years of evolution. The effects of these events propagate to other parts of the brain and are assembled, after the fact, into one single, continuous story, which is the perceived stream of consciousness. In this model, the conscious I is not so much the autonomous agent that makes decisions and issues orders as it is a reporter that builds a coherent story from many independent parallel events. This model may actually be consistent, up to a point, with the existence of zombies, since an after-the-fact assembly of a serial account of events does not, in normal circumstances, change the agent’s behavior very much.
This model is consistent with the view that consciousness may be an historically recent phenomenon, having appeared perhaps only a few thousand years ago. In his controversial 1976 book The Origin of Consciousness in the Breakdown of the Bicameral Mind, Julian Jaynes presents the argument that before 1000 BC humans were not conscious and acted mostly on the basis of their instincts, in accordance with the “inner voices” they heard, which were attributed to the gods. In Jaynes’ theory, called bicameralism, the brain of a primitive human was clearly divided into two parts: one that voiced thoughts and one that listened and acted. The end of bicameralism, which may have been caused by strong societal stresses and the need to act more effectively toward long-term objectives, marked the beginning of introspective activities and human consciousness.
We may never know when and exactly how human consciousness appeared. However, understanding in more objective terms what consciousness is will become more and more important as technology advances. Determining whether a given act was performed consciously or unconsciously makes a big difference in our judgment of many things, including criminal acts.
The problem of consciousness is deeply intermixed with that other central conundrum of philosophy, free will. Are we entities that can control our own destiny, by making decisions and defining our course of action, or are we simply complex machines, following a path pre-determined by the laws of physics and the inputs we receive? If there is no conscious I that, from somewhere outside the brain, makes the decisions and controls the actions to be taken, how can we have free will?
The big obstacle to free will is determinism—the theory that there is, at any instant, exactly one physically possible future (van Inwagen 1983). The laws of classical physics specify the exact future evolution of any system, from a given starting state. If you know the exact initial conditions and the exact inputs to a system, it is possible, in principle, to predict its future evolution with absolute certainty. This leaves no room for free will, as long as the systems involved are purely physical.
Quantum physics and (to some extent) chaotic systems theory add some indeterminacy and randomness to the future evolution of a system, but still leave no place for free will. In fact, quantum mechanics adds true randomness to the question and complicates the definition of determinism. Quantum mechanics states that there exist many possible futures, and that the exact future that unfolds is determined at the time of the collapse of the wave function. Before the collapse of the wave function, many possible futures remain open. When the wave function collapses and a particular particle goes into one state or another, a specific future is chosen. Depending on the interpretation of the theory, either one of the possible futures is chosen at random or (in the many-worlds interpretation of quantum mechanics) they all unfold simultaneously in parallel universes. In any case, there is no way this genuine randomness can be used by the soul, or by consciousness, to control the future, unless it has a way to harness this randomness to choose the particular future it wants.
Despite this obvious difficulty, some people believe that the brain may be able to use quantum uncertainty to give human beings a unique ability to counter determinism and impose their own free will. In The Emperor’s New Mind, Roger Penrose presents the argument that human consciousness is non-algorithmic, and that therefore the brain is not Turing equivalent. Penrose hypothesizes that quantum mechanics plays an essential role in human consciousness and that some as-yet-unknown mechanism working at the quantum-classical interface makes the brain more powerful than any Turing machine, at the same time creating the seat of consciousness and solving the dilemma of determinism versus free will. Ingenious as this argument may be, there is no evidence at all that such a mechanism exists, and, in fact, people who believe in this explanation are, ultimately, undercover dualists.
Chaos theory is also viewed by some people as a loophole for free will. In chaotic systems, the future evolution of a system depends critically and very strongly on the exact initial conditions. In a chaotic system, even very minor differences in initial conditions lead to very different future evolutions of a system, making it in practice impossible to predict the future, because the initial conditions can never be known with the required level of precision. It is quite reasonable to think that the brain is, in many ways, a chaotic system. Very small differences in a visual input may lead to radically different perceptions and future behaviors. This characteristic of the brain has been used by many to argue against the possibility of whole-brain emulation and even as a possible mechanism for the existence of free will. However, none of these arguments carries any weight. Randomness, true or apparent, cannot be the source of free will and is not, in itself, an obstacle for any process that involves brain emulation. True, the emulated brain may have a behavior that diverges rapidly from the behavior of the real brain, either as a result of quantum phenomena or as a result of the slightly different initial conditions. But it remains true that both behaviors are reasonable and correspond to possible behaviors of the real brain. In practice, it is not possible to distinguish, either subjectively or objectively, which of these two behaviors is the real one.
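The sensitivity to initial conditions described above can be illustrated with a standard toy chaotic system, the logistic map. This is an illustration of the general mathematical point, not a model of the brain; the function and parameter names are mine:

```python
# Sensitivity to initial conditions in the logistic map x -> r*x*(1-x),
# a classic example of a chaotic system for r = 4.
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.4)
b = logistic_trajectory(0.4 + 1e-10)  # perturbation far below any realistic measurement precision

# The two trajectories start indistinguishably close, yet after a few dozen
# steps they differ macroscopically: long-term prediction is hopeless unless
# the initial condition is known with impossible precision.
for step, (x, y) in enumerate(zip(a, b)):
    print(step, abs(x - y))
```

Both trajectories are perfectly deterministic, which is exactly the point: unpredictability in practice does not create any freedom from the underlying dynamics.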
Since quantum physics and chaos theory don’t provide a loophole for free will, either one has to be a dualist or one has to find some way to reconcile free will and determinism—a difficult but not impossible task. In his thought-provoking book Freedom Evolves, Dennett (2003) tries to reconcile free will and determinism by arguing that, although in the strict physical sense our future is predetermined, we are still free in what matters, because evolution created in us an innate ability to make our own decisions, independent of any compulsions other than the laws of nature. To Dennett, free will is about freedom to make decisions, as opposed to an impossible freedom from the laws of physics.
There is a thought experiment that can be used to tell whether, at heart, you are a dualist, even if you don’t consider yourself a dualist. In Reasons and Persons, Derek Parfit (1984) asks the reader to imagine a teletransporter, a machine that can, instantly and painlessly, make a destructive three-dimensional scan of yourself and then send the information to a sophisticated remote 3D assembler. The assembler, using entirely different atoms of the very same elements, can then re-create you in every detail. Would you use such a machine to travel, at the speed of light, to any point in the universe where the teletransporter can send you? If you are a true monist, and believe that everything is explained by physical reality, you will view such a machine simply as a very sophisticated and convenient means of transport, much like the assembler-gates (A-gates) in Charles Stross’ 2006 novel Glasshouse. On the other hand, if there are some dualist feelings in you, you will think twice about using such a device for travel, since the you emerging at the other end may be missing an important feature: your soul. In Glasshouse, A-gates use nanotechnology to recreate, atom by atom, any physical object, animate or inanimate, from a detailed description or from 3D scans of actual objects. An A-gate is a kind of universal assembler that can build anything out of raw atoms from a given specification. A-gates create the possibility of multiple working copies of a person—an even more puzzling phenomenon, which we will discuss in the next chapter. Teletransporters and A-gates don’t exist, of course, and aren’t likely to exist any time soon, but the questions they raise are likely to appear sooner than everyone expects, in the digital realm.
Where does all this leave us in regard to consciousness? Unless you are a hard-core dualist or you believe somehow in the non-algorithmic nature of consciousness (two stances that, in view of what we know, are hard to defend), you are forced to accept that zombies cannot exist. If an entity behaves like a conscious being, then it must have some kind of stream of consciousness, which in this case is an emergent property of sufficiently complex systems.
Conceptually, there could be many different forms of consciousness. Thomas Nagel’s famous essay “What Is It Like to Be a Bat?” (1974) asks you to imagine how the world is perceived from the point of view of a bat. It doesn’t help, of course, to imagine that you are a bat and that you fly around in the dark sensing your surroundings through sonar, because a literate, English-speaking bat would definitely not be a typical bat. Neither does it help to imagine that, instead of eyes, you perceive the world through sonar, because we have no idea how bats perceive the world or how the world is represented in their minds. In fact, we don’t even have an idea how the world is represented in the minds of other people, although we assume, in general, that the representation is probably similar to our own.
The key point of Nagel’s question is whether there is such an experience as being a bat, in contrast with, say, the experience of being a mug. Nobody will really believe there is such a thing as being a mug, but the jury is still out on the bat question. Nagel concludes, as others have, that the problem of learning what consciousness is will never be solved, because it rests on the objective understanding of an entirely subjective experience, something that is a contradiction in terms. To be able to know what it is like to be a bat, you have to be a bat, and bats, by definition, cannot describe their experience to non-bats. To bridge the gap would require a bat that can describe its own experience using human language and human emotions. Such a bat would never be a real bat.
It seems we are back at square one, not one step closer to understanding what consciousness is. We must therefore fall back on some sort of objective black-box test, such as the Turing Test, if we don’t know the exact mechanisms some system uses to process information. On the other hand, if we know that a certain system performs exactly the same computations as another system, even if it uses a different computational substrate, then, unless we are firm believers in dualism, we must concede that such a system is conscious and has free will.
Therefore, intelligent systems inspired by the behavior of the brain or copied from living brains will be, almost by definition, conscious. I am talking, of course, about full-fledged intelligent systems, with structures that have evolved in accordance with the same principles as human brains or structures copied, piece by piece, from functioning human brains.
Systems designed top-down, from entirely different principles, will raise a set of new questions. Will we ever consider a search engine such as Google or an expert system such as Watson conscious? Will its designers ever decide to make such a system more human by providing it with a smarter interface—one that remembers past interactions and has its own motivations and goals? Up to a point, some systems now in existence may already have such capabilities, but we clearly don’t recognize these systems as conscious. Are they missing some crucial components (perhaps small ones), or is it our own anthropocentric stance that stops us from even considering that they may be conscious? I believe it is a combination of these two factors, together with the fact that their designers hesitate to raise this question—in view of the fear of hostile superhuman intelligences, it wouldn’t be good for business. But sooner or later people will be confronted with the deep philosophical questions that will arise when truly intelligent systems are commonplace.
The next chapter discusses these thorny issues. The challenges that will result from the development of these technologies, and the questions they will raise, are so outlandish that the next chapter may feel completely out of touch with reality. If you find the ideas in this chapter outrageous, this may be a good time to stop reading, as the next chapters will make them seem rather mild.