39   What Goes On in the Computer’s Brain?

Jason Yosinski and the Puzzle of What Machines See

This is sad scientifically because we want to know how the hidden layers work.

—Jason Yosinski9

Jason Yosinski, a machine-learning scientist at Uber AI in San Francisco, probes the mysteries of the hidden layers and their latent spaces. “We don’t understand most of those things,” he tells me. “Why? Because no matter how smart you are, it is very difficult to understand such a complex system.”10 He points out that in the machine Mordvintsev used, there were sixty million parameters—that is, sixty million connections between the neurons. The changes throughout these connections are so complex and minute that it is impossible for researchers to determine exactly what is happening. They just know it works.

We can get a rough understanding of how a layer works by probing individual neurons within it to determine their function. Yosinski calls this AI neuroscience because it resembles the way neuroscientists try to understand the human brain by inserting probes and taking measurements.11 “But this is a hopelessly high-level explanation of the brain,” he says. Neuroscientists can work out how large numbers of neurons operate together and have identified the regions responsible for vision, memory, and emotion. But they still don’t understand how the trillions of connections between the billions of neurons, or nerve cells, in the brain actually work, or how small groups of neurons operate together to produce large-scale effects.
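As a rough illustration of this kind of probe, here is a minimal sketch in Python: record one hidden unit’s activation across a batch of images and keep the images that excite it most, which hints at what feature that neuron has learned. The model (ResNet-18), the layer, the neuron index, and the random stand-in images are assumptions for illustration, not the network Yosinski actually studied.

```python
# A sketch of an "AI neuroscience" probe: watch one hidden unit and see
# which images excite it most. Model, layer, and unit are illustrative.
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
activations = {}

def record(module, inputs, output):
    # Forward hook: stash the layer's output every time the model runs.
    activations["layer3"] = output.detach()

model.layer3.register_forward_hook(record)

def unit_response(images, unit=7):
    """Mean activation of one chosen unit (channel) for each image."""
    with torch.no_grad():
        model(images)
    return activations["layer3"][:, unit].mean(dim=(1, 2))

batch = torch.rand(16, 3, 224, 224)     # stand-in for a real image dataset
scores = unit_response(batch)
print(scores.topk(3).indices)           # the images this neuron "likes" most
```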

Yosinski’s work builds on that of Ian Goodfellow, inventor of GANs, and his colleagues, who discovered that computers can be fooled into seeing something that isn’t actually there. Alter the image of a lion by changing just a few pixels and, to the human eye, it still looks like a lion; an artificial neural network, however, might classify it as a library, for example.12
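A minimal sketch of this kind of adversarial perturbation, assuming a standard pretrained PyTorch classifier: each pixel is nudged slightly in the direction that increases the network’s error, so the picture looks unchanged to us but may no longer look like a lion to the machine. The model, the ImageNet class index for “lion,” and the random stand-in image are assumptions for illustration.

```python
# Sketch of an adversarial perturbation in the spirit of Goodfellow et al.:
# tiny pixel changes that leave the image looking the same to us but can
# flip the classifier's answer. Illustrative model and labels only.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm_perturb(image, true_label, epsilon=0.007):
    """Fast-gradient-sign step: move each pixel a tiny amount uphill on the loss."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

image = torch.rand(1, 3, 224, 224)        # stand-in for a real lion photograph
lion = torch.tensor([291])                # 291: ImageNet's "lion" class (assumed)
adversarial = fgsm_perturb(image, lion)
print(model(adversarial).argmax(dim=1))   # often no longer predicts class 291
```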

Yosinski went on to investigate how this relates to the computer’s neuronal structure. He showed his neural network images that to human eyes look like abstract patterns or static on a television screen, and discovered that an AI might see a robin, a gorilla, a parking meter, or a Windsor tie. Computers see things that to our eyes aren’t there, hinting at the gulf between the way we and machines see the world.

Yosinski begins by taking a network that has never seen a gorilla, for example, and making it generate images.13 He passes these images to a second network that has been trained to recognize gorillas, and this second network rates each image the first one produces. After thousands of cycles of this feedback, the first network is producing images that the second classifies as gorillas. But the human eye sees no gorillas, only abstract patterns like static on a TV screen. The machine may be 99.99 percent certain it is looking at gorillas, yet these are optical illusions. So what is the knowledge gap that causes the image classifier, trained to recognize gorillas, to misclassify the images produced by the generator?
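Here is a minimal sketch of that feedback loop, with one simplification: instead of an evolving image generator, plain gradient ascent pushes a patch of static toward whatever makes a pretrained classifier more confident it is seeing a gorilla. The classifier, the ImageNet class index, and the number of steps are assumptions for illustration, not the published experiment.

```python
# Sketch of the generator/classifier feedback loop: start from TV-like static
# and keep adjusting it so the classifier grows ever more confident it is a
# gorilla, even though to us it never stops looking like static.
import torch
from torchvision import models

classifier = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
GORILLA = 366                         # ImageNet's "gorilla" class (assumed)

image = torch.rand(1, 3, 224, 224, requires_grad=True)   # random static
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(500):               # the real experiments ran thousands of cycles
    optimizer.zero_grad()
    confidence = torch.softmax(classifier(image), dim=1)[0, GORILLA]
    (-confidence).backward()          # climb the "gorilla" score
    optimizer.step()
    image.data.clamp_(0, 1)

print(f"classifier is {confidence.item():.2%} sure this static is a gorilla")
```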

The algorithm’s confusion comes down to the very different way in which it sees. When Alexander Mordvintsev created DeepDream, neurons in the layer being probed were stimulated by bird-like shapes in images of clouds, shapes not obvious to human eyes, and picked them out, much as we see figures in cloud formations or on the face of the moon. Here the situation is more complicated, because the second computer was trained specifically to look for gorillas. As Jeff Clune, an associate professor of computer science at the University of Wyoming and a colleague of Yosinski’s, writes of neural networks, “We understand that they work, just not how they work.”14

To put a philosophical cast upon it, the gorilla in the static is like the sculpture of a beautiful woman hidden in a block of marble. The sculptor may see it, but we don’t. Similarly, the machine sees a gorilla, but we don’t. What the machine sees is the Platonic image of the gorilla embedded in the static that we see. Perhaps it’s akin to the way that dogs can smell things we can’t. It’s even a little reminiscent of Oliver Sacks’s story “The Man Who Mistook His Wife for a Hat.” It casts light on the whole subject of perception—human and machine.

In this way, using AI neuroscience, Yosinski and his colleagues investigate which features a neuron in a machine has learned to detect, as well as how information is transferred between neural networks. Besides its value for teaching us how machines reason, there are implications for digital security, surveillance, and communications between people who want to hide messages in images. It’s also disturbing to realize how easily a neural network can be fooled.

As Yosinski says, understanding the reasoning that goes on in the hidden layers “will be a very important topic in the next few decades because clearly neural networks are here to stay. They will impact society, so understanding what goes on will be completely useful and also very necessary in some cases.” Art will certainly play a role in this quest.

Mark Riedl on Teaching Neural Networks to Communicate

Humans have the goals and intent, while computers have the skills.

—Mark Riedl15

Mark Riedl, an associate professor at the Georgia Institute of Technology, has a different method of looking into how the hidden layers reason. He feels that when robots start performing everyday tasks around the house, they should be able to explain themselves. “If we can’t ask a question about why they do something and get a reasonable response back, people will just put them back on the shelf,” he says.16 Ultimately the aim is to get us and AI to understand each other.

To investigate how a neural network reasons, he starts by having people play video games while explaining aloud why they are making each move. He then trains a neural network to perform exactly the same moves and links those moves to the players’ explanations, translating back and forth between English and code. Finally he trains the network to operate autonomously and to describe what it is doing. The result is an AI that can explain its actions. While playing a game involving cars, the machine will say, “I’m waiting for a gap to open in traffic before I move.”
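A rough sketch of what such a network might look like, assuming a joint model with one head that picks the move and another that generates the explanation; the architecture, names, and sizes here are illustrative guesses, not Riedl’s actual system.

```python
# Illustrative sketch: one network both chooses a move and "rationalizes" it,
# trained on human play paired with the players' spoken explanations.
import torch
import torch.nn as nn

class ExplainingPolicy(nn.Module):
    def __init__(self, state_dim, n_actions, vocab_size, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.action_head = nn.Linear(hidden, n_actions)      # what to do
        self.explainer = nn.GRU(hidden, hidden, batch_first=True)
        self.word_out = nn.Linear(hidden, vocab_size)         # what to say

    def forward(self, state, max_words=12):
        h = self.encoder(state)
        action_logits = self.action_head(h)
        # Unroll a short explanation conditioned on the same hidden state.
        words, hidden = [], h.unsqueeze(0)
        token = h.unsqueeze(1)
        for _ in range(max_words):
            token, hidden = self.explainer(token, hidden)
            words.append(self.word_out(token.squeeze(1)))
        return action_logits, torch.stack(words, dim=1)

policy = ExplainingPolicy(state_dim=10, n_actions=4, vocab_size=500)
action_logits, word_logits = policy(torch.rand(2, 10))   # two game states
# Training would minimize cross-entropy against both the human's move and the
# human's transcribed explanation, keeping action and description linked.
```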

Riedl believes computers can be creative. He defines creativity as problem solving plus learning, and in his work he focuses on artificial neural networks. He is aware of their limitations. One, he tells me, is that they are “imitation modes: you pipe classical music in and you get classical music out.”17 They do badly, he says, in the “generative part because they imitate patterns” rather than generating something genuinely new. Thinking of GANs, in which one network chooses to accept or reject the work of another, I disagree with this part of his opinion, but I agree that artificial neural networks are at present limited when it comes to writing music, which is indeed often derivative.

Riedl feels that Char-RNN (used to script Sunspring and Beyond the Fence), which uses statistics to choose which character, and hence which word, follows the one before, does not reflect the process of writing. “Humans don’t just go with the flow,” he says, pointing out that besides writing we also rewrite, many times over. Moreover, we write for a reason; we have intent, and for a neural network that element of the writing process is missing. “Human writers feel inspiration and they sometimes break rules,” he adds.
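To make “statistics choose what comes next” concrete, here is a toy sketch. A real Char-RNN is a recurrent neural network that learns a conditional distribution over the next character; the crude table of character-pair counts below is purely a stand-in to show the sampling step.

```python
# Toy stand-in for a character-level language model: count which character
# tends to follow which, then sample the next character one step at a time.
from collections import defaultdict
import random

def train_bigrams(text):
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, seed, length=80):
    out = seed
    for _ in range(length):
        nexts = counts.get(out[-1])
        if not nexts:
            break
        chars, weights = zip(*nexts.items())
        out += random.choices(chars, weights=weights)[0]   # sample next character
    return out

corpus = "the sun rose over the sea and the sailors sang of the sun"
print(generate(train_bigrams(corpus), seed="t"))
```

A genuine Char-RNN replaces the count table with a trained recurrent network, but the generation step, sampling the next character from a learned distribution, is the same, which is exactly Riedl’s point: nothing in that step carries intent.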

Nevertheless, Char-RNN is useful for exploring the concept of nonsense and has produced interesting poetry, some of it altogether different from anything we would write, which is one of the goals of computer creativity. AlphaGo, he adds, certainly has intent (to win) as well as the ability to learn, and is therefore a better example of machine creativity than Char-RNN. Indeed, AlphaGo shows distinct glimmers of machine creativity, and the statistical way in which it works has parallels with the way we make decisions, that is, think.

At the moment, he says, “Humans have the goals and intent, while computers have the skills.” Suppose that I have a tune in mind but can’t develop it any further. I can give it to an AI that is trained to compose, and it will work on it until we are both satisfied. Project Magenta and François Pachet with his Flow Machines also favor a feedback loop between human and machine.

Riedl also conducts writing experiments in which he trains artificial neural networks using his own software. Instead of working toward a narrative arc—a story with a beginning, a middle, and an end—he begins with a very simple narrative, then layers on conflict, personalities, suspense, and drama, working from the simple to the complex.

He asks whether we will ever fully appreciate the process that machines have to go through to develop creativity. “Critics try to understand process for human artists, but overlook process for machines,” he tells me.

All of this will be resolved when we understand better what goes on in the hidden layers.
