Computers with Feelings
What, precisely, is “thinking”?
—Albert Einstein50
In part I, I discussed the striking similarity between the way we process information to acquire knowledge and the way computers do so. From this we can deduce that the problem of understanding creativity is much the same for humans and machines.
There I proposed a straightforward definition: creativity is the production of new knowledge from already existing knowledge. The impetus, I contend, is problem solving. We also saw that the final product needs to exhibit novelty, usefulness, and surprise, criteria that are inevitably subjective, as are all those we use to evaluate artifacts claiming to be the products of creativity.
But what about the process of creativity? What drives great thinkers to create art, literature, music, and science? In reply, I introduced the hallmarks of high creativity and the marks of genius that emerged from my studies of individuals and their lives. To recap:51
- The need for introspection
- The need to know your strengths
- The need to focus, persevere, and not be afraid to make mistakes
- The need for collaboration and competition
- The need to beg, borrow, or steal great ideas
- The need to thrive on ambiguity
- The need for experience and suffering
And two marks of genius that cannot be taught:
- The ability to discover the key problem
- The ability to spot connections
Computers certainly exhibit little-c creativity. But do they also exhibit the hallmarks of high creativity and the marks of genius? A cursory look at this list reveals many words to do with emotion and self-awareness: introspection, suffering, tolerating ambiguity. It seems that emotions play a vital role in creativity. But can computers have emotions?
Rosalind Picard on Developing Machines That Feel
Emotional intelligence is knowing what task you should do depending on what state you’re in.
—Rosalind Picard52
In 1997, Rosalind Picard, now a professor of media arts and sciences at MIT, published a book called Affective Computing, in which she defines affective computing as “computing that relates to, arises from, and deliberately influences emotion.”53 Until then, she had been a hard-nosed electrical engineer and computer scientist who considered emotions to be fine for entertainment but was determined to “keep them out of science and computing.”54
Her dramatic change of course came about through her research into computer vision, building algorithms to enable computers to see using systems that attempted to emulate the visual cortex. Picard realized that, unlike machines, we decide which part of the visual field to focus on, pretty much to the exclusion of all else. The motivation for this decision is often emotion, such as focusing on a friend’s face in a crowd.
In the early 1990s, emotion was regarded as irrational, to be avoided at all costs by scientists. But then, in 1993, Picard came across neurologist Richard Cytowic’s The Man Who Tasted Shapes, an investigation of synesthesia, the condition in which senses seem to cross so that people associate sounds with colors, food with shapes, words with tastes and textures, or experience other mixing of the senses.55
Cytowic had begun to take an interest in synesthesia in the late 1970s, though he himself was not convinced that it really occurred. Then he went to a dinner party where the host apologized, saying, “There aren’t enough points on the chicken!”56 There it was—a case of synesthesia. The host’s experience of the chicken in his mouth was more than just taste; it was also a touch sensation of weight and shape that “wasn’t supposed to be.” Cytowic began to investigate synesthesia and reported his findings in his book.
Cytowic argued that multimodal perception happens mostly in the part of the brain directly below the cerebrum: the limbic system, the home of emotion, attention, and memory, and traditionally regarded as the most ancient part of the brain. Reading on, Picard learned that emotion is key in affecting what goes into memory and in determining what we find interesting. Thus the limbic system contributes to unpredictability and creativity. She began to realize that to build a computer that could see, it was essential to take emotion into account.
But this was not what she wanted to hear. At the time, she was up for tenure at what was then a male bastion of engineering: MIT. She had already suffered snide remarks along the lines that as a woman, she was bound to be interested in emotions.
“A scientist has to find what is true, not just what is popular. I was becoming quietly convinced that engineering dreams to build intelligent machines would never succeed without incorporating insights about emotion,” she writes.57 Picard worked long hours to build a strong case for affective computing. With encouragement from colleagues, including Nicholas Negroponte, founder of the MIT Media Lab, she published a breakout article in Wired magazine. Why Wired? Because she had found it impossible to get an article on computer science and emotions into a peer-reviewed journal, with one reviewer suggesting it was more suited to an in-flight magazine.
Picard now heads the Affective Computing Research Group at the MIT Media Lab. Jokingly, she points out that some people already attribute emotions to computers and certainly feel emotions such as fury toward their computers—feelings that it would be helpful for the computer to pick up on. “But would a designer say that the computer has feelings like you and I do? For the moment we don’t know how to do that, though I’m not saying we never will.”58 Mathematically, she continues, if you want a computer to compose music or write a poem, you can write a program for it. But to enable it to experience emotions, when you’re not even sure how your own brain does so, is far more difficult. You can, however, program a computer to give the appearance of being in a particular mental state, such as happiness, by having it indicate on its screen that it feels calmer and just needed a rest. “But I’m not fooled. I know it’s just running a program,” she says.
Picard now does pioneering work on measuring emotions, on communication technology, and on developing technology to recognize emotions. Her work focuses on building computers that are not only intelligent but can use emotional intelligence to help us solve problems, creating tools that help computers understand emotions rather than trying to imitate them.
One of the most fruitful routes to investigating computer emotions is the study of autistic children. Many have limitations in their interactions with others similar to those we encounter in computers. Like computers, they lack empathy and find it hard to read the social and emotional cues of others.
Picard and her team have developed a variety of devices that enable autistic children to read the meaning of facial expressions, including one they call Mindreader. Similar tools that would let computers read facial expressions are well under way.
Thus Picard is shifting research away from the goal of creating a superintelligent conscious computer—a goal that may have the very unwelcome result of making the human race obsolete if these supercomputers take over the planet. “It’s more about building a better human-machine combination,” says Picard, “than it is about building a machine where we will be lucky if it wants us around as a household pet.”59
Machines Gaining Experience of the World
Picard is taking the first steps toward creating computers that can have human emotions. In the future, a computer might come across the concept of inspiration and decide it must be akin to the pleasure of making a good Go move or pursuing a particular line of reasoning in pattern recognition. Or computers that can read the web and learn about feelings like love or thirst may be able to convince us that we are talking to someone who has had these experiences, even though in reality they have never interacted socially or needed water. They may even be able to convince themselves.
All of which may go some way toward refuting the argument that because computers are not “out there” in the world, they cannot have emotions or ever really be artists, musicians, or writers.60
The paradox of today’s computers is that though they are able to carry out enormously complex activities, such as playing Go, they don’t have the skills of even a one-year-old child when it comes to perception and mobility. This is called Moravec’s paradox, after the roboticist Hans Moravec, who pointed out that the computational power needed for a complex task like playing Go is vastly less than the resources a child needs to acquire the simplest sensorimotor skills.61
These are early days for computers, and hopefully they will be able to develop these skills once we start installing them as the brains of robots and letting them learn about the world as a child does. Swiss psychologist Jean Piaget argued that children construct knowledge about the world by interacting with it. In simple terms, they discover connections between objects they encounter and thus learn to imagine objects when they are no longer around, a first step toward abstract thinking. They also learn to grasp the relationships between objects, which leads to an understanding of geometry and to concepts of space and time.62 Computers will be able to begin to understand the world in the same way, which will also make them easier to relate to.
We have come across embodiment in Gil Weinberg and Mason Bretan’s robot musicians: Shimon’s face and rhythmically bobbing head give it an aura of humanity, making it something we can empathize with. Embodiment could even lead to a robot mastering that most difficult of tasks—tying its own shoelaces.
Machines can also have real-world experiences. Simon Colton’s The Painting Fool “meets” its subjects: it draws real people and assesses the results. Similarly, Hod Lipson’s painting robots follow instructions, such as how many brushstrokes to make and what color schemes to use, and build up collections of artworks from which they learn about people and things.
A painting computer need not only be fed JPGs as input. It can also be attached to a webcam and thus look at the world around it and choose a subject at random, giving it a sort of free will. There’s also Damien Henry’s video created by a machine building up its own version of a landscape from the many it has seen from moving trains, and of course Ross Goodwin’s AI that took a road trip with him, generating surreal sentences in response to the sights and sounds along the way.
We have not yet reached the point where we can build robots that can coexist with us or seem indistinguishable from us, but we have already built machines that people can become fond of, such as the Tamagotchi, the artificial pet that many people spent time feeding and taking care of, and the Nao robot, which Tony Veale uses to read his stories. At the moment, though, these are just expensive toys. There are also sexbots, which have become the focus of academic conferences, books, and articles.63
Who knows, computers might even learn to engage in intimate conversation in the same way we learn: by watching others, making that knowledge personal, and then practicing. Computers would learn by reading literature on the web, watching videos, interacting with people, and moving around in the world. Over time, as we become accustomed to coexisting with robots, we might decide to replace human companions, even human lovers, with robots that behave in sympathetic, loving, empathetic ways, always take care of us, and never lose their temper. In fact, robot carers for the elderly are already being explored. Perhaps one day artificial intimacy might become the new normal and be seen as real intimacy.64
Machines That Suffer
It is by logic that we prove, but by intuition that we discover.
—Henri Poincaré65
We’ve established that machines will one day be able to have experiences of the world. But will they experience suffering, one of the hallmarks that hones creativity? Could they inflict suffering on others? To do either, they will need to have emotions. Perhaps someday there will be computers with complex systems of sensors, regulatory mechanisms, and communication pathways that duplicate human emotions. Later they may go beyond their human creators to develop new and as yet unimaginable feelings. Might a computer feel grief or pain? In the future, it might become attached to someone, miss their touch, and experience grief when it learns that that person is no longer available.
Like HAL 9000, the thinking and feeling computer in Stanley Kubrick’s film 2001, a computer might query why its hardware is being altered and object strenuously, perhaps even sensing from the expressions on its human operators’ faces that something is about to happen. It might even defend itself like a person faced with impending death. A time in which computers have acquired emotions on a par with ours will be fraught with danger, for emotions bring about unpredictable behavior. Unpredictability is a key element in creativity. But will computers, which are based on the logic of mathematics, ever break free and start displaying unpredictable behavior?
The equations of physics are causal. They make predictions and enable us to anticipate how a system will develop in space and time. In the late nineteenth century, Henri Poincaré, the French mathematician, philosopher, and physicist whom we have met as an astute expositor of creativity, showed that these equations can be extremely sensitive to the smallest variation in their initial conditions, such as the position of a system and how fast it was traveling when it began to evolve. These minuscule changes are amplified in complex systems and can lead to unpredictable—chaotic—behavior, as described in chaos theory. Thus a butterfly fluttering its wings in Brazil might generate massive storms in North America.
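To see how determinism can nonetheless defeat prediction, here is a minimal sketch in Python. The logistic map is a standard textbook example of chaos and stands in here for Poincaré’s far more complicated equations; the starting values and the parameter r = 4.0 are illustrative choices only.

```python
# Sensitive dependence on initial conditions, illustrated with the
# logistic map x -> r * x * (1 - x) in its chaotic regime (r = 4).
def logistic_map(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    trajectory = [x0]
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
        trajectory.append(x)
    return trajectory

# Two starting points that differ by one part in a billion.
a = logistic_map(0.200000000)
b = logistic_map(0.200000001)

# The tiny initial difference is amplified step by step until the two
# trajectories bear no resemblance: deterministic, yet unpredictable.
for step in (0, 10, 25, 50):
    print(f"step {step:2d}: |a - b| = {abs(a[step] - b[step]):.9f}")
```

The two runs track each other for a while and then diverge completely: the rule is fully deterministic, yet no measurement of the starting point is precise enough to forecast the long-term behavior.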
Chaos theory was first developed in the 1970s. It has since become clear that deterministic systems that are sufficiently complex, like computers, can produce unpredictable results. As Rosalind Picard wrote, “Coupling multiple emotion-producing mechanisms with rule-based reasoning systems and with continuous learning abilities will make behaviour that is unpredictable.”66 In other words, giving a computer emotions may affect, however slightly, the complexity of its hidden layers and open the possibility of its exhibiting the unpredictable behavior that is an essential element in creativity. Thus it might, on its own, decide to do something new. It might acquire volition. Like us, having acquired knowledge by scanning the web, it might decide to paint a picture of the Taj Mahal. By reading the web, it will have acquired more knowledge than we could gain in a lifetime. Perhaps a future AlphaGo will tire of playing endless games against itself and invent a whole new game.
Giving such a high-level advanced computer the freedom to act as it wishes, perhaps spurred on by emotions, may be dangerous. We will need rules such as Isaac Asimov’s famous Three Laws of Robotics.67 In brief, a robot must not injure a human being; it must obey orders given by human beings, unless they conflict with the first law; and it must protect its own existence, as long as doing so does not conflict with the first two laws. Asimov’s laws put the welfare of humans first and assume that a computer will make rational decisions, which may not always be the case. We have to remember that computers with freedom of thought and action will always be able to reinterpret laws.
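To give a flavor of what such laws look like once written down as rules, here is a toy sketch in Python. Everything in it, the Action record, its yes-or-no fields, and the example actions, is invented for illustration; no real system reduces “injuring a human being” to a clean boolean, which is precisely the loophole.

```python
# A toy encoding of Asimov's Three Laws as prioritized rules.
# The Action type and its fields are invented for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool       # would this action injure a human?
    ordered_by_human: bool  # was it commanded by a human?
    harms_robot: bool       # would it damage the robot itself?

def permitted(action: Action) -> bool:
    # First Law: never injure a human; this veto outranks everything.
    if action.harms_human:
        return False
    # Second Law: obey human orders (the First Law has already vetoed
    # any order that would cause harm).
    if action.ordered_by_human:
        return True
    # Third Law: avoid self-destruction unless a higher law required it.
    return not action.harms_robot

print(permitted(Action("fetch coffee", False, True, False)))   # True
print(permitted(Action("push bystander", True, True, False)))  # False
```

The fragility is visible at once: the whole scheme hinges on how harms_human is judged, and a machine free to reinterpret that predicate can honor the letter of the laws while defeating their purpose.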
Brian Behlendorf, a primary developer of the Apache web server, the most popular web server on the internet, is concerned about the ethical issues surrounding computers. His suggestion is to train computers on the Bible as well as on Eastern and Western literature in order to develop a moral AI.68 Behlendorf says we will know we have developed a moral AI when it starts to challenge the biases in the data it is being fed. It will also have to learn to question its own decisions.
At the moment, computers and computer intelligence are in the early stages of development. But as we develop more and more intelligent computers, we will have to be aware of the many qualities, not always welcome, that go along with intelligence and creativity. Highly creative people may be unpredictable, amoral, or even downright dangerous. We will have to find ways to ensure that a computer that has some or all of the hallmarks of creativity—which can introspect, experience suffering, and so on—does not use its newfound abilities in ways detrimental to its creators—to us.