Two Dissenting Voices
Douglas Hofstadter and the Horrors of a Future Controlled by Creative Machines
I’m not against a machine having emotions, in the sense that our brain is kind of a machine.
—Douglas Hofstadter83
Computers with consciousness are not necessarily what everybody wants.
In 1979, Douglas Hofstadter published the widely read Gödel, Escher, Bach: An Eternal Golden Braid, in which he explores the connections between thinking, mathematics, Bach’s music, and computers, as a way of understanding human intelligence and investigating how cognition arises from hidden neurological mechanisms. Hofstadter’s early research in computer science concerned analogy as a way to build connections between words. A goal of this work, he tells me, was to provoke himself into thinking about whether computers might be able to use words and symbols in a “way that resembled the human mind.”84
At the time, he was convinced that it would be impossible for computers to reach human levels of intelligence in the foreseeable future and that computer programs that could make analogies “would be laughable attempts to model the human mind. The human level was asymptotic, something our programs could never reach,” by which he meant that the curve of computer learning might approach but would never reach the curve of human intelligence.85 “Then things started happening to make me wonder whether that could actually be the case,” he says. The chess-playing computer Deep Blue “troubled him,” and David Cope’s music-composing algorithm, EMMY, “every once in a while produced something that was surprising.” Then, in the twenty-first century, IBM Watson and AlphaGo appeared.
Hofstadter’s feeling about Deep Blue is that it played very good chess but that it won by “brute force”—by assessing just about every possible move. It doesn’t tell you how people play chess or how its moves relate to classic chess games, and it doesn’t provide any insight into the mind of Garry Kasparov.
At first a fan of David Cope and his musical algorithms, Hofstadter began to have second thoughts and became extremely critical. Does EMMY produce music? Yes. Is it a musician? No, he tells me. Real musicians do not compose merely by recombining works that already exist, as Cope claims. Hofstadter caricatures him in his “Essay in the Style of Douglas Hofstadter, by EWI” (a take-off on Cope’s EMMY), in which he describes a computer program that takes excerpts from his books and papers and generates new, Hofstadter-esque ideas.86 This, he says, is simply “obscene.”
IBM Watson does not worry Hofstadter too much because, in his view, it does not really understand language. It uses its enormous memory and calculating power simply to parse sentences and search for overlapping words. “But that’s not understanding,” he says. He agrees, however, that if Watson could be embodied so that it is out in the world, it might be possible to bring it “close to what thinking is.”
At present, however, computers do not understand the words they use. Hofstadter illustrates this with Google’s most recent translation algorithm based on artificial neural networks. It works well for routine text. Yet when he put in the German text for “The maid brought in the soup,” Google translated it into English as “The maid entered the soup.”
But his principal misgiving about artificial neural networks is rooted in his fear that his long-held belief that human intelligence is simply unattainable by machines may be proved wrong. Geoffrey Hinton of Google and other AI leaders claim, Hofstadter says, that “we’re well on our way to reaching and surpassing human intelligence and it’s going to happen in a couple of decades.” To Hofstadter this is “totally terrifying and horrifying. It’s not a good goal.” If he had known, he tells me, of that possible future in the 1970s when he started in computer science, he might have changed fields. In his view, artificial neural networks are not complex enough to simulate the “nobility and profundity of life-forms.”
He agrees that intelligence and creativity obey the laws of physics and naturally emerge in complex systems, but he absolutely disagrees that this can happen in silicon life-forms. “If it turns out that in some far-off future some kind of ‘thing,’ ‘entities,’ which go around the world and struggle and have computer lives, if these things start creating, I have no problem with that. What I do have a problem with is that artificial neural networks could accomplish in the near future what the greatest of minds could. That’s an obscene and disgusting thought.”
Hofstadter considers consciousness to be a phenomenon brought about by the firings of neurons in our brains in response to incoming perceptions, a process that gives rise to our inner lives. He too, it seems, is a reductionist.
He continues to research the making of analogies and what it can tell us about the workings of the human mind, thereby throwing light on human and computer creativity. But he feels that his programs may be unsuccessful in this as they are not complex enough.
These days, he’s still an optimist, he tells me, because he believes that machines are not close to attaining human intelligence, but he would certainly become a pessimist if he thought we were almost there. “That’s terrifying.”
Hofstadter’s view of AI is not as dystopian as it might appear. As he states, he’s most worried about artificial neural networks that possess machine learning, the ability to learn by themselves. He is less disturbed by symbolic machines that use symbols to represent physical objects and as a means for calculation. His view of the future is less bleak than that of writers like Nick Bostrom and Yuval Noah Harari, who foresee dire consequences when machines become smarter than us and—as they predict—a new species takes over the earth.87
For this to happen, machines will first have to become as intelligent as we are. Even that may take fifty or a hundred years, and, as workers in the field will confirm, it is unlikely to happen any time soon. Nevertheless, once machines achieve the level of human intelligence, the next step will inevitably be superintelligence.
Sadly, there is a huge gulf between public perceptions of AI and what is really going on in the field. Decades of highly imaginative science fiction have left us all with an indelible image of AIs that steal our jobs, will eventually enslave us, or worse. This is the AI of Hollywood movies, of the Terminator or of HAL 9000 in 2001. Perhaps as a result, almost every panel I’ve ever sat on or attended inevitably revolves around this dystopian vision of AI.
But as we have seen, this sort of AI is still far into the future. It is not the AI being used by the artists, musicians, and writers I’ve featured in this book. It simply doesn’t exist yet. At present, AI is more like a child that needs to be painstakingly taught before it can produce the desired image or sound, let alone create of its own accord. This is what AI research is all about, and this is what we should be focusing on—the creative potential of machines to produce art, literature, and music, which is precisely the theme of this book.
Pat Langley and Machines That Work More like People
I’m interested in what makes us distinctly human.
—Pat Langley88
Another naysayer is Pat Langley, cognitive scientist, AI researcher, and honorary professor of computer science at the University of Auckland. Langley is best known for his work with Herbert Simon on the scientific-discovery program BACON. He is the chief author of Scientific Discovery, which describes in detail research Simon began in the 1950s with Allen Newell, in which they claimed that the brain processes information using symbols, following rules they deduced from extensive interviews with people as they engaged in solving problems.
Langley does not consider this argument to be reductionist—that is, reducing the brain to its constituent parts of electrons, protons, and neutrons. But he points out that when this approach is implemented on a computer, the symbols have to be encoded as numbers—in a conventional, non-neural-network machine, as zeros and ones—because that is how computers function. Importantly for Langley, computers are not just number crunchers. They are general symbol processors, and he is “worried that many people in AI, neuroscience, and cognitive science have forgotten that.”
He says that most researchers in artificial neural networks take a reductionist approach. In all fields, there are different levels of description for phenomena. It is easier to use the chemical formula H2O to describe what happens when two hydrogen atoms combine with an oxygen atom to make water than to solve complex quantum physics equations only to end up with the same answer.
“To expect higher-level cognition to emerge from the lower learning you see in neural networks is difficult to imagine,” says Langley. By the same token, you don’t have to understand the computer’s hardware to know how a program works, nor do you have to understand the brain’s neurons to understand how the mind works.
Another of Langley’s criticisms is that artificial neural networks are useful only for tasks such as classifying data and recognizing faces. But the work of Douglas Eck and others on Project Magenta shows that they are capable of much more. They can create art and music and tell stories. They also power driverless cars, a task that involves reasoning. Langley further contends that they are not yet capable of solving scientific problems of the sort that he and Simon discussed. To some extent that is true for now, but for significant problems such as controlling a robot’s gait, progress is being made via neuroevolution, which uses genetic algorithms to evolve the neural network best suited to the task at hand.89
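For readers who want to see the idea in miniature, here is an illustrative sketch in Python. It is not code from Langley’s research or from any production neuroevolution system, and it simplifies drastically: it evolves just the two parameters of a one-neuron model fitting a toy target, rather than the weights and topology of a full network. But the loop of mutating candidates, evaluating their fitness, and selecting survivors is the essence of the genetic-algorithm approach.

```python
import random

random.seed(0)

# Toy task: learn y = 2x + 1 from a handful of sample points.
DATA = [(x, 2 * x + 1) for x in (-2, -1, 0, 1, 2)]

def predict(genome, x):
    """A one-neuron 'network': the genome is just (weight, bias)."""
    w, b = genome
    return w * x + b

def fitness(genome):
    """Negative squared error over the data; higher is better."""
    return -sum((predict(genome, x) - y) ** 2 for x, y in DATA)

def mutate(genome, scale=0.3):
    """Perturb each gene with a little Gaussian noise."""
    return tuple(g + random.gauss(0, scale) for g in genome)

def evolve(pop_size=30, generations=200):
    # Start from a population of random genomes.
    population = [(random.uniform(-3, 3), random.uniform(-3, 3))
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half unchanged (elitism).
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Reproduction: refill with mutated copies of survivors.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
```

After a few hundred generations the best genome's weight and bias land close to the target values of 2 and 1. Real neuroevolution systems evolve thousands of connection weights, and often the network structure itself, but by the same mutate-evaluate-select cycle.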
Langley believes computers can be creative. “I am not a carbon chauvinist,” he insists. He claims that computers revealed their creativity in the work he did with Herbert Simon. But he believes that “we need a new word for things that have value. Creativity can mean different things to different people.” It is meaningless to say that computers are not creative just because they don’t yet exhibit the full range of human creativity. Langley feels that there are more interesting kinds of creativity out there that will be revealed once more advanced computers can be built.
He would like to see more research along the lines of the original vision of AI discussed at the 1956 conference at Dartmouth College that kickstarted the whole field. He paraphrases it thus: “Let’s look at what humans do, and let’s see if we can get a machine to do this.”90 The next step, he feels, is to build on that work and explore new problems. Machine-learning researchers, by contrast, tend to be interested only in solving problems involving pattern recognition. “They should ask themselves where, at the end of the day, this will take them.”91
One solution might be to combine a machine using symbol manipulation, like the one used by Langley and Simon, with an artificial neural network. This would advance the study of machine creativity in that it’s easier to examine the internal states of a symbolic machine—see what it’s thinking—than to examine a neural network’s hidden layers.
In fact, some computer scientists are moving in that direction, looking for options that offer a broader and more flexible intelligence than neural networks and that can teach machines to generate common-sense knowledge. Combining neural networks with symbolic processing could well help neural networks to represent knowledge in a more accessible way. Work has begun on this line of research at Kyndi, a Silicon Valley start-up that has developed a computer system that can identify the concepts in the documents it has been fed, thereby bringing together the two branches of AI.92