The Question of Consciousness

The key experience that underlies all human existence and all human creativity is consciousness. At present, consciousness is what differentiates us from machines. To have truly creative machines, we will need to develop machines with consciousness. But what exactly is consciousness? Most of the people I interviewed for this book preferred not to discuss it, claimed it was undefined, or insisted that consciousness has nothing to do with creativity. But surely consciousness—that sense of ourselves and of the world around us, which we experience every moment of the waking day—is so overwhelming that it has to be at the root of how we think and create.

To return to our question: what is consciousness? What is that experience of being inside our body? Philosophers have pondered this for centuries and produced countless weighty tomes and articles. A sticking point has always been how to deal with the part of our consciousness that craves chocolate, feels awe at the beauty of nature, and experiences the excitement and joy of being in love. Philosophers call such subjective and personal phenomena qualia, from a Latin word meaning “what sort of?” or “what is it like?” These are our private innermost thoughts and are therefore, they claim, impenetrable to scientific investigation. Advances in computer science and neuroscience, however, make this stance increasingly untenable. Daniel Dennett caricatures this antiscience stance as, “How could anything composed of material particles be the fun I’m having?”69

How do we define light? An acceptable reply would be: anything that travels at 186,000 miles per second and has certain polarization properties. Other aspects of light can be added, but they are inessential. This is a descriptive definition. It defines light in terms of its properties. Can we define consciousness in terms of its properties in the same way that we define light? Here is a possible definition: anything that possesses awareness and self-awareness and can experience pain, joy, and grief has consciousness. This too is a descriptive definition, but it cuts through the Byzantine arguments regarding subjectivity, which many philosophers claim make consciousness impenetrable to science. Some computer scientists who study emotions, like Rosalind Picard, consider many philosophers’ writings on consciousness to be “arrogant.”70

John Searle’s Chinese Room and the Question of Whether Computers Can Actually Think

In 1980, at the University of California, Berkeley, philosopher John Searle proposed a thought experiment he called the Chinese Room.71 Its crux is whether a computer knows what it is doing, whether it is capable of more than merely following instructions and manipulating symbols that are meaningless to it. Can it actually think?

A person is sitting in a closed room. Someone outside the room slips questions under the door, written in Chinese as strings of Chinese characters. The person in the room knows no Chinese, but he has a computer programmed to manipulate Chinese characters to form replies. These replies are so good that the person outside the room believes they must have been written by a Chinese speaker. But neither the person in the room nor the computer program understands Chinese. What the person outside the room sees is merely a simulation of understanding.
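To make the setup concrete, here is a minimal sketch in Python of the kind of rule following Searle has in mind. The tiny rule table and the particular phrases are invented purely for illustration; a real program would need vastly more rules to fool anyone outside the room, but it would still only be pairing strings of characters with other strings of characters.

# A rule-following responder in the spirit of the Chinese Room: it pairs
# incoming strings of characters with outgoing strings of characters.
# The rule table is invented for illustration; nothing here "understands"
# Chinese. It is pure symbol manipulation.
RULES = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I am fine, thanks."
    "今天天气怎么样？": "今天天气很好。",   # "How is the weather today?" -> "The weather is fine today."
}

def reply(question: str) -> str:
    # Look up the incoming symbol string and hand back the paired string.
    # The program attaches no meaning to either side of the lookup.
    return RULES.get(question, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

print(reply("你好吗？"))  # prints: 我很好，谢谢。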

Searle’s argument seems to undermine any progress in computer science other than using computers strictly for engineering projects such as driverless cars, stock market trading, and drones. If machines are capable only of manipulating symbols, then they will never really be able to think and so will never have consciousness. This is the real thrust of Searle’s argument, as he made clear some years later in his book Consciousness and Language, published in 2002: nonbiological systems cannot have consciousness.72

Through his thought experiment, Searle also claims to overturn the notion that the human brain is an information-processing system. After all, he argues, a computer program is made up purely of mathematical symbols that have no meaning. Human brains, on the other hand, possess content that has meaning. But in fact, a computer program does not consist purely of meaningless symbols, as Searle asserts; it also contains rules for assembling symbols such as Chinese characters into sentences that make sense.

Moreover, Searle’s contention that nonbiological entities can do no more than manipulate symbols falls apart in the face of machine learning. Today’s artificial neural networks do not work by manipulating symbols. They learn the rules of grammar for each language by tuning their connections until they can accomplish the desired result—the translation of a given input phrase, for example.
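By way of contrast, here is a minimal sketch, using Python and NumPy, of learning by tuning connections. The toy vocabulary and the desired input-output pairing are invented for illustration; a real translation network has millions or billions of connections, but the principle is the same: no grammar rule is ever written down, and the weights are simply nudged until the inputs produce the desired outputs.

# A toy network that learns an arbitrary pairing of input tokens to output
# tokens by gradient descent. The "connections" are the entries of a single
# weight matrix; training only nudges those numbers, it never stores rules.
import numpy as np

rng = np.random.default_rng(0)

num_tokens = 4
inputs = np.eye(num_tokens)                   # each row is a one-hot input token
targets = np.eye(num_tokens)[[2, 3, 0, 1]]    # the pairing we want the network to learn

weights = rng.normal(scale=0.1, size=(num_tokens, num_tokens))  # the connections

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

learning_rate = 1.0
for step in range(500):
    probs = softmax(inputs @ weights)         # the network's current guesses
    grad = inputs.T @ (probs - targets)       # gradient of the cross-entropy loss
    weights -= learning_rate * grad           # tune the connections slightly

print(np.argmax(softmax(inputs @ weights), axis=1))  # prints [2 3 0 1]: the learned pairing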

Despite the lively—and mostly philosophical—debate over this thought experiment, which continues today, most computer scientists pay it no heed.

Reducing Consciousness to the Sum of Its Parts

Everyone agrees that the brain takes in and processes information. But how does it experience that information? Why does some of it manifest itself as the subjective experience we call consciousness?

With the advent of AI, some computer scientists have joined neuroscientists in exploring consciousness. One reason is the analogy between the brain—which can be seen as a series of computational elements passing information from one to the next, like a pipeline—and artificial neural networks, with their successes in data analysis, facial recognition, buying and selling stock, operating driverless cars, and winning games like Go.

According to this reductionist approach, the brain is nothing but subatomic particles—electrons, neutrons, and protons. The equations of quantum physics describe these particles and their motions, just as, in principle, they describe everything else in the world. To program these equations into a computer, they have to be turned into numbers, the raw material that computers work on. From this we can deduce that the human brain can also be understood entirely in terms of numbers—that is, that it is computable. In that case, so are consciousness and creativity.

We stand in wonder at the accomplishments of geniuses like Bach, Einstein, and Picasso. But how did they do it? Einstein wrote that it was as if Mozart plucked his melodies from the air. The seventeenth-century astronomer Johannes Kepler wrote of people who had a sympathetic ear for nature that it was as if they could hear the “harmony of the spheres.” Metaphorically, thinkers like these touched the cosmos. But if the fabric of the cosmos is numbers, as indeed it is, then that brings us back to our original argument—that everything, including creativity and consciousness, can be understood in terms of numbers and thus is computable.

If we accept that the brain is an information-processing system like a computer, then we will have to agree that computers, like the brain, will one day be creative and have consciousness too. It seems we have no choice but to be bound by the rules of reductionism. To say that we are greater than the sum of our parts is no more than a romantic illusion. We can shift one layer up from complete reductionism—that we are just subatomic particles whose properties are computable—to argue that our actions and emotions result from a mass of complex chemical reactions. But any way you look at it, ultimately we are nothing but biological machines.

The problem with explaining human creativity in terms of quantum physics is that at present the calculations are overwhelmed by the vast number of elementary particles that make up the one hundred billion neurons in our brain and the trillions of connections between them. But there will surely come a time when a newer version of quantum physics will appear and when a supercomputer will be developed that can process its equations. The point is that, in principle, the brain can be described using the terms of a theory based on cause and effect. Reducing the brain to the sum of its parts offers a way to study it and to study creativity too, without having to posit any ghost in the machine.

MIT physicist Max Tegmark suggests that consciousness could be a state of matter in the brain, which he dubs perceptronium. He argues that it is the particular arrangement of perceptronium’s atoms that gives rise to awareness and subjectivity. Physicists often propose explanations for key phenomena that may seem crazy but sometimes work out. Austrian physicist Wolfgang Pauli proposed the existence of the neutrino, a subatomic particle, to rescue the venerable conservation laws of energy and momentum; it was later discovered. Scientists also proposed dark energy to explain the unexpectedly accelerating expansion of the universe; its nature remains elusive, but it has been accepted. Dark matter too has been put forward, to explain why galaxies rotate faster than expected; it has not yet been found, but is also accepted. This will probably not happen with perceptronium. As physicist Niels Bohr might have said, the idea is “not crazy enough.”

Philosophers and psychologists have offered other explanations of consciousness. Jung suggested that consciousness pervades the cosmos, that there is a single mind that explains phenomena such as synchronicity, or meaningful coincidences.73

Daniel Dennett argues that consciousness is made up of episodes that emerge from the electrochemical properties of the brain and its many parallel lines of thought. These episodes are distributed throughout the brain and are manifested through speech and other actions. Thus consciousness is somehow manufactured by the brain and somehow emerges. He goes on to argue that qualia—our subjective personal preferences—“have a way of changing their status and vanishing under scrutiny” and cannot be used as evidence that consciousness is impenetrable to science.74

Notes