In 1955 a group of computer scientists appealed to the Rockefeller Foundation to fund a meeting of ten experts at Dartmouth College. The scientists said they intended to “proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.” They stated their agenda clearly and concisely, but the most striking sentence of their proposal is the one that followed their agenda statement. They said, “We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.” In retrospect, it seems obvious that significant progress in artificial intelligence comes over decades, not “a summer.” As cognitive neuroscientist Michael Gazzaniga put it, they were “a little optimistic.”
At the very heart of this early overoptimism is the “brain-as-computer” metaphor, which is, at best, an oversimplification. The operating characteristics of biological brains are very unlike those of the computers that were used in 1955, or even of the much more sophisticated ones we build today. Conventional computers consist of electronic components such as transistors—a kind of on-off switch—that implement a series of logical operations called gates. Logician George Boole proved in 1854 that any “logical expression,” including complicated mathematical calculations, can be implemented by a “logic circuit” made by wiring together components built from just four fundamental gates, called AND, OR, NOT, and COPY. These gates transform one or two bits of information at a time (a bit is a register—a storage location—that can have the value 0 or 1). For example, a NOT gate changes a 0 to a 1 and vice versa, while a COPY gate changes 0 to 00, and 1 to 11. Whatever it is used for, all a computer is really doing is applying electronic logic gates to bits, one or two at a time. Brains, on the other hand, execute operations in a parallel manner, doing millions of things simultaneously.
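To make the gate picture concrete, here is a minimal sketch in Python (my choice of language; the passage does not specify one) of the four gates just described, wired together into a slightly larger circuit. The XOR gate and the half adder are illustrative additions of mine, not anything named in the text.

```python
# A sketch of the four fundamental gates described above, with bits
# represented as the integers 0 and 1.

def NOT(a):
    """Invert a bit: 0 becomes 1, 1 becomes 0."""
    return 1 - a

def COPY(a):
    """Duplicate a bit: 0 becomes (0, 0), 1 becomes (1, 1)."""
    return (a, a)

def AND(a, b):
    """Output 1 only if both inputs are 1."""
    return a & b

def OR(a, b):
    """Output 1 if either input is 1."""
    return a | b

# Wiring gates together yields more elaborate logic. XOR outputs 1 when
# exactly one input is 1; a half adder adds two bits, producing a sum bit
# and a carry bit, the first step toward binary arithmetic.
def XOR(a, b):
    a1, a2 = COPY(a)
    b1, b2 = COPY(b)
    return AND(OR(a1, b1), NOT(AND(a2, b2)))

def half_adder(a, b):
    return XOR(a, b), AND(a, b)

print(half_adder(1, 1))  # (0, 1): one plus one is binary 10
```

Everything a conventional processor does ultimately reduces to compositions of this kind, applied to one or two bits at a time.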
There are many other distinctions. Brains’ processes are noisy—that is, subject to undesired electrical disturbances that degrade useful information—while computers are reliable. Brains can survive the removal of individual neurons, while a computer operation will fail if even a single transistor it employs is destroyed. Brains adjust themselves to the tasks at hand, while computers are designed and programmed for each finite task they must perform. The physical architectures, too, are quite different. The human brain contains a thousand trillion synapses, while a multimillion-dollar system of computer hardware today might have a trillion transistors. Moreover, though synapses (the gaps between neurons through which electrical and chemical signals flow) are a bit like transistors, a neuron’s behavior is vastly more complex than that of a computer component. For example, a neuron fires—sending its own signal to thousands of others—when the aggregate signals from the neurons that feed it reach a critical threshold, but the timing of the incoming signals matters. There are also inhibitory signals, and neurons can contain elements that modify the effect of incoming messages. It’s an intricate design of vastly greater richness and complexity than anything employed in electronic devices.
Still, a metaphor can be useful even if the things being compared correspond in just one aspect. Carson McCullers wrote that “the heart is a lonely hunter,” and that is a wonderful observation despite the fact that hearts don’t carry rifles. So it can be helpful to think of the brain as a computer despite the differences in physical design and operation if, for example, biological brains and computer “brains” produce similar behavior. Among simple animals and advanced (by today’s standards) computers that can certainly be the case. Take the female hunting wasp, Sphex flavipennis. When a female of that species is ready to lay her eggs, she digs a hole and hunts down a cricket. The expectant mother stings her prey three times, then drags the paralyzed insect to the edge of the burrow and carefully positions it so that its antennae just touch the opening. After the cricket is in place the wasp enters the tunnel to inspect it. If all is well, she drags the cricket inside and lays her eggs nearby so that the cricket can serve as food once the grubs emerge. The wasp’s role as mother completed, she seals the exit and flies away. Like the ewes I described in the previous chapter, these female wasps appear to be acting thoughtfully, and with logic and intelligence. But as the French naturalist Jean-Henri Fabre noted in 1915, if the cricket is moved even slightly while the wasp is inside inspecting the burrow, when the wasp emerges she will reposition the cricket at the entrance, and again climb down into the burrow and look around—as if she had arrived with the cricket for the very first time. In fact, no matter how many times the cricket is moved, the wasp will repeat her entire ritual. It seems that the wasp is not intelligent and thoughtful after all, but rather follows a hardwired algorithm, a fixed set of rules. Fabre wrote, “This insect, which astounds us, which terrifies us with its extraordinary intelligence, surprises us, in the next moment, with its stupidity, when confronted with some simple fact that happens to lie outside its ordinary practice.” Cognitive scientist Douglas Hofstadter calls this behavior “sphexishness.”
If living creatures can appear intelligent, but disappoint when they sink to the level of sphexishness, digital computers can excite us when they rise to merit that same modest label. For example, in 1997 a chess-playing machine named Deep Blue beat reigning world chess champion Garry Kasparov in a six-game match. Afterward Kasparov said he saw intelligence and creativity in some of the computer’s moves and accused Deep Blue of obtaining advice from human experts. In the limited domain of chess, Deep Blue seemed not only human, but superhuman. But although the human character Deep Blue displayed on the chessboard was far more complex, nuanced, and convincing than the motherly care displayed by the wasp, it did not arise from a process most of us would be likely to think of as intelligent. The three-thousand-pound machine made its humanlike decisions by examining 200 million chess positions each second, which typically allowed it to look six to eight moves ahead, and in some cases, twenty or more. In addition, it stored a library of moves and responses applicable to the early part of the game, and another library of special rule-based strategies for the endgame. Kasparov, on the other hand, said he could analyze just a few positions each second, and he relied more on human intuition than on processor power. Even without checking under the hood, there is an easy way to illuminate the differences in intelligence: just change the game a bit. For example, scramble the pieces’ starting positions—or eliminate the rule, important in the endgame, that allows a pawn to be traded for any more powerful piece if it advances to the opposite end of the board. Kasparov would be able to adjust his thinking accordingly. But Deep Blue would be more like the wasp, unable to adapt to circumstances and make a judgment, its enormous apparent intelligence suddenly decimated by its inflexibility.
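The brute-force lookahead described here can be sketched in a few lines. The code below is emphatically not Deep Blue's machinery, which relied on special-purpose hardware, alpha-beta pruning, opening books, endgame rules, and a tuned evaluation function; it is plain exhaustive search applied to a toy game of Nim, chosen only because it fits on a page.

```python
# Toy illustration of brute-force game-tree search: a pile of stones,
# players alternately take 1, 2, or 3, and whoever takes the last stone wins.

def minimax(stones, maximizing):
    """Exhaustively search the rest of the game. Returns +1 if the
    maximizing player can force a win from this position, -1 otherwise."""
    if stones == 0:
        # The previous player took the last stone and won, so the side
        # whose turn it now is has lost.
        return -1 if maximizing else 1
    outcomes = [minimax(stones - take, not maximizing)
                for take in (1, 2, 3) if take <= stones]
    return max(outcomes) if maximizing else min(outcomes)

def best_move(stones):
    """The take (1, 2, or 3 stones) that looks best after exhaustive search."""
    return max((take for take in (1, 2, 3) if take <= stones),
               key=lambda take: minimax(stones - take, False))

print(best_move(5))  # 1: leaving 4 stones puts the opponent in a lost position
```

Chess is searched the same way in principle, but its game tree is astronomically larger, which is why Deep Blue needed to examine some 200 million positions per second to look only a handful of moves ahead.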
Deep Blue had a superhuman ability in chess, but it wasn’t what most of us would term “intelligent.” The same can be said of Watson, IBM’s Jeopardy-playing computer that in 2011 beat the best human champions. To equip it for the game, IBM stuffed Watson with 200 million pages of content stored on 4,000 gigabytes of disk space, and endowed it with 16,000 gigabytes of RAM and an estimated 6 million rules of logic to help it arrive at its answers. Still, though Watson was usually right, it got to the answers through brute-force searches based on statistical correlations, not on anything one could call an “understanding” of the question. That issue was best illuminated by some of its wrong answers, such as choosing Toronto as an answer in the category “U.S. Cities.” Behind its vast fact base, and impressive ability to answer questions posed in plain colloquial language, there was actually nothing one could call real intelligence.
Jeopardy is a relatively recent invention. So, too, are computers. That a computer could be programmed to play chess like an intelligent human being was originally proposed by mathematician Alan Turing in 1941, before the first machine that could reasonably be called an electronic computer had even been built. One of the most influential thinkers of the twentieth century, Turing provided many of the ideas that form the foundation of computer science, and of our digital age. He recognized the limitation of a computer having sphexish intelligence confined to a particular domain, and argued that thinking machines should also be learning machines, capable of altering their own configurations.
How does one judge whether a computer is intelligent? In Turing’s view, intelligence should not be assessed according to the process through which beings or machines operate, but rather from the outside, by examining their behavior. That is how we judge people in our everyday lives—after all, we normally have no access to the inner feelings and thought processes of people we don’t know. Little was known in Turing’s day about the detailed workings of biological brains, but Turing believed that whatever the physical similarities and differences, a computer with intelligence and behavior similar to those of a human being would eventually be built.
Turing’s life and brilliant career were cut tragically short when he committed suicide in 1954 after being forced by the British government to undergo hormone therapy because he was gay. But a few years before his death Turing published a paper titled “Computing Machinery and Intelligence,” in which he outlined a test for intelligence that is now called the Turing test. In its current form, the Turing test consists of a human being and a computer that both interact through textual messages with an impartial human judge. If the judge cannot distinguish which contestant is the human, the machine is called intelligent. Turing predicted that computers would pass this test within a half century, that is, before the year 2000.
The Turing test burst into popular culture in 1966, with a simple two-hundred-line computer program called ELIZA, which employed a handful of tricks to generate bland responses that often sounded startlingly humanlike. In the decades since, computer scientists have created many other “chatting robots.” Here is a snippet of dialogue from a chat at a recent Turing test contest. Can you distinguish which speaker is the human judge and which is the robot contestant?
BILL: Do you know any good jokes?
JOAN: I don’t even know any bad ones. I just can’t remember them for very long.
BILL: Please tell a joke.
JOAN: What do you get if you walk under a cow?
BILL: I don’t know.
JOAN: A pat on the head.
BILL: That’s very good.
JOAN: I know.
The actual conversation went on for twenty-five minutes. The truth: Joan is the computer. But unlike the designer of ELIZA, which merely followed fixed scripts, Joan’s creator took Turing’s advice and designed this machine to achieve its “intelligence” through learning: the program “chatted” online over a period of years with thousands of real people, building a database of several million utterances which it searches statistically when composing its replies.
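In outline, that learning-by-accumulation approach looks something like the sketch below: store past exchanges, then answer a new remark by finding the most similar stored prompt and replaying whatever reply once followed it. The tiny corpus and the crude word-overlap score are illustrative assumptions of mine, not the actual mechanics of the program behind Joan.

```python
# A minimal sketch of retrieval-based reply selection.

def words(utterance):
    """Lowercased words with trailing punctuation stripped."""
    return {w.strip(".,?!") for w in utterance.lower().split()}

def similarity(a, b):
    """Word-overlap (Jaccard) score between two utterances."""
    wa, wb = words(a), words(b)
    return len(wa & wb) / max(1, len(wa | wb))

def reply(remark, corpus):
    """Return the reply whose stored prompt best matches the incoming remark."""
    best_prompt = max(corpus, key=lambda prompt: similarity(remark, prompt))
    return corpus[best_prompt]

# Hypothetical corpus of previously observed prompt/reply pairs.
corpus = {
    "Do you know any good jokes?": "I don't even know any bad ones.",
    "Please tell a joke.": "What do you get if you walk under a cow?",
    "How are you today?": "Can't complain. How about you?",
}

print(reply("Tell me a joke, please", corpus))  # replays the cow joke
```

Nothing in this loop understands jokes, cows, or pats on the head; it only measures which stored sentence most resembles the one it just received.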
Computer scientists still haven’t succeeded in creating a program that can consistently fool human judges over an extended period of time. But knowing both the degree to which programs like Joan do work, and how they work, suggests two conclusions. First, achieving “intelligence” of the Turing test variety in a digital computer is far more difficult than most people initially thought. Second, there is something wrong with the Turing test—for a machine that cobbles together speech by repeating responses it encountered previously isn’t exhibiting intelligence any more than a nematode that slithers past a McDonald’s is demonstrating culinary sophistication.
Though the Turing test is questionable, and has fallen out of favor with researchers in artificial intelligence, no better litmus test for intelligent thought has gained general acceptance. There are some interesting ones out there, however. Christof Koch and his colleague Giulio Tononi argue that—contrary to Turing’s belief—the key point is to assess the process the being or machine in question utilizes, something easier said than done if you have no access to the candidate’s inner workings. They propose that an entity should be considered intelligent if, when presented with any random scene, it can extract the gist of the image, describe the objects in it and their relationships—both spatial and causal—and make reasonable extrapolations and speculations that go beyond what is pictured. The idea is that any camera can record an image, but only an intelligent being can interpret what it sees, reason about it, and successfully analyze novel situations. To pass the Koch-Tononi test a computer would have to integrate information from many domains, create associations, and employ logic.
For example, look at the image on the facing page from the film Repo Man. An insect crawling over the page might detect the photo’s purely physical qualities—a rectangular array of pixels, each of which is colored in some shade of gray. But in just an instant, and without apparent effort, your mind realizes that the picture depicts a scene, identifies the visual elements, determines which are important, and invents a probable story regarding what is transpiring. To meet the criteria of the Koch-Tononi test an intelligent machine ought to be able to key in on the man with the gun, the victim with raised arms, and the bottles on the shelves. And it ought to be able to conclude that the photo depicts a liquor store robbery, that the robber is probably on edge, that the victim is terrified, and that a getaway car might be waiting outside. (The scenes depicted would obviously have to be tailored to the cultural knowledge base of the person or computer being tested.) So far no computer can come close. An unintelligent brute-force approach like that which achieved limited success in passing the standard Turing test is of no help in passing the Koch-Tononi test. Even limited success in passing their own test, these researchers believe, is many years away. In fact, it was only a few years ago that computers gained the ability to do what a three-year-old child can do—distinguish a cat from a dog.
Is the fact that computers have had so little success thus far at achieving the same sort of intelligence as our brain a technical problem, which we may one day solve? Or is the human brain inherently impossible to replicate?
In the abstract sense, the purpose of both brains and computers is to process information, that is, data and relations among data. Information is independent of the form that carries it. For example, suppose you study a scene, then photograph it and scan the photo into your computer. Neither your memory nor the computer’s will contain a literal image of the scene. Instead, through an arrangement of their own physical constituents, mind and computer will each symbolize the information defined by the scene in its own trademark fashion. The information in the physical scene would now be represented in three forms: the photographic image, its representation in your brain, and its representation in the computer. Ignoring distortions and issues of limited resolution, these three representations would all contain the same information.
Turing and others turned such insights about information, and how it is processed, into an idea called the “computational theory of mind.” In this theory, mental states such as your memory of the photograph, and more generally your knowledge and even your desires, are called computational states. These are represented in the brain by physical states of neurons, just as data and programs are symbolized as states in the chips inside a computer. And just as a computer follows its programs to process input data and produce output, thinking is an operation that processes computational states and produces new ones. It is in this abstract sense that your mind is like a computer. But Turing also took the idea a big step further. He designed a hypothetical machine, now called a Turing machine, that in theory could simulate the logic of any computer algorithm. That shows that, to the extent that the human brain follows some set of specified rules, a machine can indeed—in principle—be built that would simulate it.
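For the curious, here is a minimal simulator showing what a Turing machine's fixed table of rules amounts to. The example table is a toy of my own devising: it marches rightward along the tape, flipping 0s and 1s, and halts at the first blank. For simplicity the head is assumed never to move left of the starting cell.

```python
# A minimal Turing machine simulator. The "program" is a table mapping
# (state, symbol) to (symbol to write, head movement, next state); the
# machine stops when it enters the state "halt".

def run(rules, tape, state="start", blank="_"):
    tape, head = list(tape), 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        write, move, state = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += move
    return "".join(tape)

# Rule table: invert every bit, then halt on the blank cell.
rules = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", 0, "halt"),
}

print(run(rules, "10110_"))  # -> 01001_
```

Anything a modern computer can compute can, in principle, be computed by a sufficiently elaborate table of this kind; that is the sense in which Turing's hypothetical machine stands in for them all.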
The computational theory of mind has proved useful as a framework scientists can use to think about the brain, and technical terms common in information theory are now used widely in neuroscience, terms such as “signal processing,” “representations,” and “codes.” It helps us to think about mental processes in a theoretical way, and to better understand how beliefs and desires need not reside in some other realm, but can be embodied within the physical universe.
Still, biological brains are not Turing machines. The human brain can do far more than simply apply a set of algorithms to data and produce output. As described earlier, it can alter its own programming, and react to a changing environment—not just to sensory input from the outside, but even to its own physical state. And it has astonishing resilience. If the corpus callosum is cut, severing the brain in two, a person doesn’t die, but somehow goes on functioning, a wondrous testament to just how different we are from the computing machines that we build. A human brain can suffer the degradation of disease, or have vast sections obliterated through stroke or accidental impact, yet reorganize itself and go on. The brain can also react psychologically, and it is as resilient in its spirit as in its ability to heal itself. In Stumbling on Happiness, psychologist Daniel Gilbert wrote about an athlete who, after several years of grueling chemotherapy, felt joyful and said, “I wouldn’t change anything,” and about a musician who became disabled, but later said, “If I had it to do all over again, I would want it to happen the same way.” How can they say things like that? Whatever happens, we find our way. As Gilbert says, resilience is all around us. It is just these qualities of the human mind that elevate it above simple algorithmic machines, providing both the beauty of being human and the greatest mystery that science has yet to unravel.
The last time someone asked you if it looked like rain, did you reply, “I’ll have to sample some randomized variables for that”? If a person came to you to translate the Kalevala, the Finnish national epic, would you say, “I’m sorry, that’s not programmed in my software”? On the face of it, people don’t think like computers, which are machines that shuffle two numbers, 0 and 1, to arrive at their “thoughts.” Even if you believe, as Leonard apparently does, that the brain will eventually reveal the secrets of the mind, the brain doesn’t operate using 0s and 1s, either. There is really no similarity between our brains and any “thinking” machine yet devised, which means that those quotation marks aren’t going away.
Inevitably, the once promising field of artificial intelligence (AI) has not come close to reproducing actual thought. Leonard has covered the basic problems with AI, so I could just nod my head in agreement and move on. But there’s a crucial question left hanging in the air. If the brain isn’t like a computer, what does it do to produce thoughts? I believe the answer is clear-cut: the brain doesn’t produce thoughts. It transmits them from the mind. What does the mind do, then? It creates meaning. Not only that, but meaning evolves, and as it does, the brain races to catch up, guided by the next interesting thing the mind wants to think about.
If a computer could embrace meaning, AI would make an earthshaking breakthrough. Science fiction would become reality, since one of the favorite plots in science fiction consists of computers who outsmart their human masters, either turning on them or becoming all too human themselves. HAL the onboard computer stole the movie 2001: A Space Odyssey by sounding more sympathetic than the robotic astronauts traveling into deep space. The audience was shocked when HAL decided to kill off the crew for the sake of the mission, and yet it was also touching when the last surviving spaceman started to dismantle HAL’s memory, and the dying computer voice pleaded, “Please don’t do that, Dave. I feel strange.” Isaac Asimov’s I, Robot explores the same theme, when mankind’s mechanical slaves rebel against their masters.
The ability of computers to imitate us isn’t just entertaining. One of the more ingenious software programs was ELIZA, already referred to by Leonard. ELIZA used a clever trick, based on a school of psychotherapy developed by psychologist Carl Rogers in the 1940s and 1950s, which put patients at ease by making empathic remarks of a seemingly simple kind, such as “I understand,” “Tell me more about that,” or just “Um.” Programming such statements into ELIZA bypassed the computer’s need to know anything about the real world. Bland, empathic remarks have the effect of making people feel heard and understood. Presto, a computer comes off as human. (In fact, various people who talked to their computers through ELIZA reported therapeutic results as good as those of a real psychiatrist.)
My position is that computers will never think—tricks can offer a good imitation, but no machine is capable of creating meaning, of crossing the line that separates mind from matter. However, the instant I make such a claim, a huge obstacle stands in the way. The brain is matter, and it seems to traffic in meaning. If squishy bits of watery floating chemicals in a brain cell can transmit the words “I love you” and wait with exquisite vulnerability to hear whether the other person will reply “I love you, too,” a computer in the future may be able to do the same. Why not?
Rather than jumping headfirst into a complex argument about mind and meaning, let’s consider the following experiment. Subjects at Harvard volunteered for a study in game strategy. They were seated in front of a monitor and told the rules of a specific game. “You are playing with a partner who is hidden behind a screen. There are two buttons each of you can push, marked 0 and 1. If you both press 1, you get a dollar, and so does your partner. If you both press 0, you get nothing, and so does your partner. But if you press 0 while your partner presses 1, you get five dollars, and he gets nothing. The game lasts half an hour. Begin.”
Imagine yourself as a player of the game—what would your strategy be? Would you cooperate by pressing 1 all the time, so that you and your partner got the same reward? Or would you sneak in with a 0 while he was innocently pressing 1, so that you got a much bigger reward? You’d be tempted, but if he got angry enough, he could retaliate by pressing 0 all the time, forcing you to do the same, and then both of you would wind up with nothing.
After the experiment was conducted, subjects were asked about how their hidden partners played the game, and many said that their partners were irrational. Even when the subjects pressed 1 many times in a row, for example, signaling a willingness to cooperate, their partners refused. They would sneak in with a 0 in order to grab five dollars, while other times they seemed intent on pointless sabotage. It became necessary to punish them by pressing 0 all the time, but that didn’t faze them, either.
In reality, this wasn’t an experiment about game strategy at all. It was an experiment in psychological projection, because there were no hidden partners. Each subject played against a random number generator, which spewed out 0s and 1s in no particular order. Yet when asked what their partners were like, subjects projected human traits onto them, using words like “devious,” “uncooperative,” “fickle,” “underhanded,” “stupid,” and so on. The human mind, it seems, creates meaning even when none is present.
The mind is all about meaning, and machines cannot travel there. Unless you have Beethoven on hand to input a Tenth Symphony, Shakespeare to input his lost play Cardenio, or Picasso to input a style of painting he never expressed on canvas, the machine is helpless to do so. Creative inspiration can’t be reduced to writing code. Artificial intelligence was doomed from the start because “intelligence” was defined as logic and rationality, as if the other aspects of human thought—emotions, preferences, habits, conditioning, doubt, originality, nonsense, etc.—were beside the point. In fact, they are the glories of our highly fanciful, perversely delightful intelligence. Meaning has flowered through us in all its facets, not just as reason. These include irrationality. Atomic war is irrational behavior so extreme that it makes us shrink in terror from our own nature, yet the Mona Lisa and Alice in Wonderland are just as irrational, and we gravitate toward them in fascination.
Computers are bound by rules and precedents, without which logic machines cannot operate. Computers don’t say, “When I was daydreaming, something suddenly occurred to me.” Yet Einstein did a lot of daydreaming, and the structure of benzene was revealed to the chemist Friedrich August Kekulé in a dream. (Somewhat ironically for AI, the German physiologist Otto Loewi, who won the Nobel Prize in Medicine in 1936, discovered how nerves transmit signals thanks to a dream he had.) So be grateful for the irrational. The French philosopher Pascal was right when he said, “The heart has reasons that reason cannot know.”
I imagine that Leonard would agree with most of this. But I also imagine he would cling to the belief that one day a deeper understanding of the brain—he points in the direction of neural networks—will tell us what thinking is. Yet, what if no such solution exists? There may be no simpler model of the brain than the brain itself. This doesn’t mean that the mind-brain connection isn’t evolving. Certainly it is. When the mind created reading and writing several thousand years ago, a region of the cerebral cortex adapted and made reading and writing physically possible. When new forms of modern art are created, people scratch their heads at first, just as they did when Einstein’s theory of relativity appeared, but in time they catch on, and then for future generations Cubism and relativity become second nature, just as reading and writing are. Once you train your brain to read and write, you cannot go backward and reclaim illiteracy. Those black marks of ink on the page will forever be letters, not random specks. Irrevocably, meaning has moved you forward.
The spiritual life is entirely about moving meaning forward, and I contend that science alone will never be equal to that project. The fact that mind isn’t matter goes to the heart of my argument, but so does a more technical point, which revolves around a famous mathematical argument known as Gödel’s incompleteness theorems. In order to grasp what those theorems mean in everyday life, we must look into the nature of logical systems. We are the only creatures that love all kinds of nonsense. “ ’Twas brillig, and the slithy toves …” but sense is where we make our home.
In our craving for meaning, logic is our primary tool for determining what makes sense and what doesn’t. But how can we truly know if we’re right? The laws of nature make sense because they can be reduced to mathematics, a completely logical system. That’s why we tell each other that two plus two equals four, not three or five. But can logic somehow fool itself? If so, then the world may seem to make sense when in reality it doesn’t. (Thousands of years ago, the ancient Greeks were wrestling with this issue and ran into baffling riddles like the following paradox: A philosopher from Crete named Epimenides declares, “All Cretans are liars.” Should you believe him? There’s no way to know. He could be telling the truth, but that means that he’s lying. Self-contradiction is built into the sentence.)
In simplified form, this is the problem that confronted Kurt Gödel (1906–1978), an Austrian mathematician who joined the wave of illustrious immigrants who escaped war-torn Europe to live in the United States. Gödel’s area was the logic that governs numbers. We don’t have to delve into that specialized field, except to say that natural numbers (the counting numbers like 1, 2, 3, etc.) are considered facts of nature and therefore can stand in for other things we take as facts. Numbers need to be consistent; when you apply procedures to them, the results should be provable. The same can be said of facts about the body, such as heart rate and blood pressure, because they, too, are governed by numbers. The doctor learns what range of numbers is considered normal, and your health is measured against that standard.
Gödel distilled numbers down to their purest essence, the logical processes that lead to such things as computers. What Gödel found is that logical systems have built-in flaws. They contain statements that cannot be proven—hence, his notion of incompleteness. His first theorem says that incompleteness is the fate of any logical system; there will never be a system that explains everything. His second theorem says that if you are looking at a system from the inside, it might be a consistent system, but you won’t be able to find out as long as you stay inside the system. A blind spot is built in, because certain unprovable assumptions are part of every system. If you want to escape these fatal flaws, you must find a way to step outside the system. Logic cannot transcend itself.
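Stated in the standard textbook form rather than in the loose paraphrase above, the theorems run roughly as follows, for any formal system F that is consistent, effectively axiomatized, and strong enough to express elementary arithmetic:

```latex
% Standard statements of the two theorems (the \nvdash symbol needs amssymb).

\textbf{First incompleteness theorem.} There is a sentence $G_F$ of arithmetic
that is true (given the consistency of $F$) but that $F$ cannot prove:
$F \nvdash G_F$.

\textbf{Second incompleteness theorem.} Let $\mathrm{Con}(F)$ be the arithmetical
sentence expressing ``$F$ is consistent.'' Then $F$ cannot prove it:
$F \nvdash \mathrm{Con}(F)$.
```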
Spirituality argues that consciousness can go where logic can’t. There is a transcendent reality, and to reach it, you must experience it. Leonard, who is mathematically sophisticated, may be able to demonstrate how I’ve misconstrued these highly technical matters. But it’s hard to escape one of Gödel’s main points, that mathematical systems include certain statements that are accepted as true but which cannot be proven. If I boldly take this out of the realm of numbers, Gödel is saying that unprovable things are woven into our explanation of reality. Religionists make statements based on the assumption that God exists, although they can’t prove it. Materialists make statements based on the assumption that consciousness can be ignored, which they, too, cannot prove. Why do we keep living with these unprovable X factors? Several answers come to mind.
1. Faith: We believe in certain things and that’s good enough.
2. Necessity: We have to make sense of the world, even if there are glitches along the way.
3. Habit: The unprovable assumptions haven’t bothered anybody so far, and therefore we’ve gotten into the habit of forgetting them.
4. Conformity: The system may be flawed, but everybody else uses it, so I will, too. I want to belong.
Lump all of these reasons together, and lesser mortals—even lesser mortals trained in science—find it easy to defend systems that have flaws they don’t want to admit to. But it’s not just the Achilles’ heel in logic that plagues us. We are trapped by the implications of Gödel’s second theorem, which holds that a logical system cannot reveal its inconsistencies; blindness is built in. I know that I am humanizing mathematics, which marks me as a total outsider, but systems engulf us at every turn—systems of politics, religion, morality, gender, economics, and above all, materialism. It’s vital to know that you have been conditioned to accept these systems without regard for their unprovable assumptions. (Note that unprovable isn’t the same as wrong. I can’t prove that my mother loved me, but it’s still true.)
Several times Leonard has asserted that we can’t long for childish things like God, the afterlife, or the soul, and then expect them to be true. I don’t think spirituality came about from wishful thinking. It came about because the world’s sages, saints, and seers managed to escape the limitations of the logical system that Leonard has put so much faith in.
Gödel’s insights can be extended to show us that logic machines can’t make creative leaps, because any system that can’t reveal its internal flaws will always be confined within the prison of its logic. Think of a computer that can detect a million shades of red. If you ask it which one is the nicest, it has nothing to say. “Nice” is outside its logic. Fortunately, Nature refuses to be imprisoned by logic, and we humans have taken our cue from that. When Picasso invented Cubism, when Tolstoy imagined Anna Karenina jumping in front of the train, when Keats wrote the final draft of “Ode to a Nightingale” in a frenzied few minutes, turning a promising poem into a masterpiece, creativity made leaps that were not based on mixing and matching the ingredients of what came before. Logic didn’t come into it.
Leonard mentions Deep Blue, the chess-playing computer. On May 11, 1997, Deep Blue won a six-game match against the world chess champion, Garry Kasparov. This victory took ten years to achieve, growing out of a student project at Carnegie Mellon University. It was an anguishing emotional loss for Kasparov (we know the computer felt nothing about winning), who had defeated Deep Blue just the year before. But I’d like to turn this feat on its head. Deep Blue is a perfect example of a self-contained logical system that cannot escape its basic assumptions.
The machine knew nothing outside number crunching, and therefore it didn’t know how to play chess at all. It only knew how to shuffle, at lightning speed, the human knowledge it was fed. Chess grandmasters display a lovely arrogance about what they do. Alexander Alekhine, a legendary Russian champion, was asked by awestruck admirers how many moves ahead he could look in a game. He replied coolly, “I can only see ahead one move, the right move.” Chess playing is intuitive. It involves grasping the whole board, reading your opponent, taking risks, and so on. Grandmasters don’t memorize thousands of games by rote to get where they are. They learn from thousands of games, which is entirely different. The mind is training the brain, which in turn gives the mind a higher platform to stand on, and thus the process continues, mind and brain evolving together. All that Deep Blue could do was to suck up this knowledge and spit it back out.
Finally, one branch of AI is devising artificial hands to replace hands lost in battle; countless disabled veterans and other amputees will benefit if the project succeeds. Figuring out the complex signals sent to and from a human hand is incredibly difficult. Could a prosthetic hand one day mold a beautiful sculpture like the Venus de Milo? Could it ever feel the cool hard surface of the marble? To oppose such altruistic work seems wrong, and critics of AI are routinely treated as enemies of progress. But we have to consider the work of a neuroscientist at the Salk Institute in San Diego, Vilayanur Ramachandran, and his amazing work with amputees.
After an amputation, many patients experience phantom limbs. They continue to feel that the lost hand or arm is still there, and phantom limbs can be excruciatingly painful, often due to the sensation that the muscles are permanently clenched. Professor Ramachandran knew that drugs often do little for this pain, even strong doses of powerful painkillers. Pondering the problem, he made a creative leap. He took a patient whose right arm had been amputated and sat him in front of a box that had a mirror inside dividing the box in two. When the patient’s left arm was placed in the box, he was asked to peer inside. What he saw were two arms, the right one being simply a reflection. But to the naked eye, the mirror image looked real.
The patient was then asked to clench and unclench both hands, the real and the phantom one. To the astonishment of everyone, this simple action could bring relief, sometimes instantaneously, to acute, intractable pain. The brain was fooled by the sight of a “real” right arm, and Ramachandran suggests that the area of the brain that received input from the limbs (the somatosensory cortex) had become cross-wired—it was mapping the lost arm by adapting other nearby regions reserved for the feet and face. Showing it the image of a right arm inside the mirror box enabled the brain to remap it, and thus unclenching the phantom muscles became possible. (A curious sidelight to Ramachandran’s theory that the brain had become cross-wired is that sometimes the feelings from the amputated arm were transferred to the area that received sensations from the face. Thus, stroking a patient’s face made him report that he felt the stroking on his lost arm.)
This could happen only because the mind, being different from the brain, figured out how to trick the brain and its pain signals. Ramachandran’s methods are being tested in veterans’ hospitals. Not all amputees benefit fully, and the amount of time spent in the mirror box varies, but the key thing was to prove that sudden change is possible. Neuroplasticity, the ability of old pathways to turn into new ones, took on new prestige.
I want to go a step further. If we could discover what’s inside the mind, a door would open to higher intelligence. The trick—and it’s the trick of all time—is that the mind can be explored only by the mind. Every person knows how to look inside. We reflect, we second-guess, we try to make sense of our own motives. (A few familiar examples: “Why did I say something so stupid?” “I don’t know how I knew, I just knew.” “What made me eat the whole thing?”) Knowing your mind isn’t easy. The difference between a spiritual life and every other life comes down to this. In spirituality, you find out what the mind really is. Consciousness explores itself, and far from reaching a dead end, the mystery unravels. Then and only then does wisdom blossom. The kingdom of God is within, I am the way and the life, Love thy neighbor as thyself—these are not objective statements of fact. They cannot be deduced through computation. The mind has looked deeply into itself and discovered its source, which is transcendent.
Speaking of the presence of God, Hebrews 11:3 says, “What is seen was not made out of what is visible.” If you want, you can match that statement with quantum physics, but in the end, it comes from something else, the ability of the mind to know itself. That, too, is an unprovable assumption, but what saves us is that this particular assumption is true.