23 “Affective Computing” Is Not an Oxymoron

Rosalind W. Picard and Adolfo Plasencia


Rosalind W. Picard. Photographs by Adolfo Plasencia.

I think that emotions are capable of being fully understood, although it may not be in this lifetime. We’re starting to understand pieces of them: I am now wearing a sensor that measures signals that correspond to some kinds of feelings I have, although it doesn’t correspond to all of them.

We are trying to invent the future: we bring to it certain values of what kind of future we want to make.

—Rosalind W. Picard

Rosalind W. Picard is Founder and Director of the Affective Computing Group at the MIT Media Lab and Codirector of the Media Lab’s Advancing Wellbeing initiative. She has cofounded two businesses, Empatica, Inc., which creates wearable sensors and analytics to improve health, and Affectiva, Inc., which delivers technology to help measure and communicate emotion.

Picard holds master’s and doctoral degrees in electrical engineering and computer science from MIT. She started her career as a member of the technical staff at AT&T Bell Laboratories, designing VLSI chips for digital signal processing and developing new algorithms for image compression. In 1991 she joined the MIT Media Lab faculty. She is the author of Affective Computing (MIT Press, 1997) and is a founding member of the IEEE Technical Committee on Wearable Information Systems, which helped launch the field of wearable computing. She is an active inventor, holding multiple patents.

Adolfo Plasencia:

The expression “affective computing” is something of a contradiction, as it may seem as though we are asking mathematics to teach us emotions. Mathematics—and computation is pure mathematics—is said to provide us with beautiful, even elegant equations, but it cannot be asked to provide us with emotions.

Is the expression “affective computing” an oxymoron?

Rosalind Picard:

When I originally defined “affective computing,” I said it was “computing that relates to, arises from, or deliberately influences emotion and other affective phenomena.” Thus computers might be given skills to handle emotion—perhaps responding differently to help a frustrated user versus a pleased user—without the computer itself having emotions or becoming emotional.

Where it becomes controversial is when you cross that line, so that computers are not simply helping somebody by responding more intelligently to their emotions but have mechanisms that make the computer appear emotional.

I remember one time after a talk I gave in Spain, somebody came up to me who was really angry. He had steam coming out of his ears and could hardly speak. My talk was translated into several languages, and one of the translators made me sound as if computers can have emotions just as we have. He was right to be upset about that. However, it was a mistranslation: I wasn’t saying that, nor do I think that it is possible, given what we know. Today I try to be very careful to distinguish that while we give computers some emotional abilities, they are quite different from human emotional abilities. Computers do not have emotional experiences as we do. They may compute in ways that express or simulate emotion, but they do not feel the way we do.

So the way I think of it, affective computing is not an oxymoron: computers are processing affective information. However, I think it can be misunderstood in a way that could be quite disturbing, worse than an oxymoron.

A.P.:

Your Affective Computing Research Group is trying to overcome or eliminate the divide between information systems and human emotions.

What characterizes that divide between machines and people?

Is what you are doing a kind of computational science “infected” or “tainted” by emotions?

R.P.:

I think the gap that most drove me to want to work in this area is the one where I felt that people’s feelings are really important, they really matter, and they were being ignored by computers. You interact with a computer, it frustrates you, and it keeps acting as if you are happy when you’re not. That’s a recipe for making you even more frustrated! One man took a gun and shot his computer, once through the monitor and multiple times through the hard drive! Another man, a chef, got so mad at how his computer responded to him that he threw it in the deep-fat fryer. So I saw that how computers were behaving was unintelligent, and if we’re going to make computers intelligent, then they have to start by being smart about responding to human emotions.

So the main gap I was interested in when I started in this area was to make computers more emotionally intelligent.

While we are starting to close this gap by giving computers much better abilities to see the emotions of their users, there is still a huge gap between what computers can do and what people can do. The biggest gap seems to be our ability to feel, to experience emotion, which is not something computers have. We can give computers lots of information about human feelings, tell them about our experiences, and instruct them in best practices for how to better treat people who are happy, bored, frustrated, or angry. But the best we can hope to do is not even as good as teaching a man what it feels like when a woman has a baby. While a man will probably never be able to experience childbirth, he can liken its pain to some other experiences he has had. He can respond better to a woman’s pain by making this comparison. A computer can also learn descriptions of feelings such as pain, and it may even be better than some men at knowing what to say to you when you are in pain. But the computer cannot truly experience or understand pain as a human can; it has some fundamental differences that we do not know how to bridge.

A.P.:

You often use the expression “emotional communication” in your texts.1 Perhaps you agree with the physiologist Claude Bernard, who, in the nineteenth century, said that feeling is the origin of everything.

Do you agree?

Should the relationship between machines and human beings also be emotional, or do you believe there may be communication with humans that is not emotional?

R.P.:

The relationship is emotional, whether we want it to be or not. It’s appealing to a scientist, and to me, trained as an engineer, to think we could put a piece of communication out like a pure mathematical expression and somehow allow it not to be tainted by emotion or feeling. However, what I came to learn is that even a purely logical and completely neutral piece of information is received by one or more persons, each of whom has emotions, and it might be received positively by one and negatively by another, even though it’s a neutral piece of information. For example, if you say the word presence out of context, some people in a good mood are more likely to hear the word presents (gifts), or if you say the word band, some people in a bad mood are more likely to hear the word banned. The listener’s mood colors even neutral information negatively or positively.

So that led me to understand that all communication has a sender and a receiver—think of Claude Shannon and information theory—and whether or not the sender puts emotion into it, the receiver can receive it as if it contained emotion.

So emotion becomes a part of all communication even if the content of the communication might not explicitly have emotion in it.

A.P.:

The Turing test was proposed by Alan Turing to demonstrate the existence of intelligence in a machine. In your opinion, what kind of test (or Turing-test variant) would a machine have to pass for us to call it a machine with emotions and feelings?

R.P.:

I wrote in my original book, Affective Computing, a bit about how we might test for different emotional abilities.2

Since then, the thing that I think is most important is the relationship that evolves when two people are together. A relationship also evolves when a person is interacting with a machine or with a robot over time. In all of the versions of the Turing test I’ve seen, they test short-term interaction (although the new movie Ex Machina performs a one-week test, which is a first!).

When you interact with a person over time, you might treat that person formally the first time you meet him or her, but then that loosens up over time: your language changes, your rapport changes. Your willingness to self-disclose more about your feelings usually increases over time, and all of these aspects of emotional intelligence, of showing empathy and responding and “reading between the lines,” as we say, tend to change if an entity is truly emotionally intelligent. The classic Turing test cannot get at that because it uses just a short period of time. In a brief window of interaction, any entity can just “act emotional,” which is not sufficient to convey real emotional skill or intelligence. A false empathy might be revealed only when you repeatedly interact with it and see that it is inconsistent and does not grow with the relationship.

A.P.:

Roger Penrose was recently in Barcelona delivering a lecture and said that perhaps someday, somebody in some lab may create an intelligent machine, but it will certainly not be a computer, because … “I believe that intelligence cannot be simulated using algorithmic processes, that is, by means of a computer.”3

Penrose thinks that mind and brain are two separable entities and that computers cannot be intelligent in the same way as a human being, as, according to him, formal systems of sequential instructions will never give a machine the capacity to understand and arrive at the truth.

Do you believe that, in order to understand and for there to be consciousness in some place or in some machine, emotions have to exist?

R.P.:

This is a huge question. The question of consciousness for most people is not just logical, like something you would write on a blackboard. It is not simply the ability to ask and answer, “What is going on? This is what is going on.” While that is a part of consciousness, consciousness also involves feelings that are ineffable. While some can be described, such as a feeling of being, a feeling of knowing, a feeling of pain or hunger, and a feeling of significance, there is still a sense that there is more. We anticipate that feeling involves a lot of different components that we think are rooted in our physiology and in our chemistry, but even that may not be sufficient to describe them fully.

It’s really interesting to look at when some aspects of these feelings are absent in patients with certain kinds of damage. There are people who can’t feel pain or can’t feel their bodies, and people who can’t tell that their internal feelings are changing until they change in unusually extreme ways. We’ve worked with some of those people, and there is no doubt in my mind that they are conscious. Parts of their feeling systems may be damaged, or maybe they are simply different in their ability to communicate or introspect. I think it is fine that some people’s conscious abilities operate differently than others’—it adds diversity that may be good for all of us—and we should not diminish their humanity. Some fascinating studies have shown that even when an electroencephalogram of the scalp is “flat” there can still be signals generated deep in the emotion centers of the brain, which show up in electrodermal sensors on the wrist. These patients tend to survive. We are clearly dealing with a very complex system, and, like the blind scientists and the elephant, we have only felt the trunk, the tail, the legs, and the ears. We have a lot more to learn.

While my thinking may change as more scientific findings are discovered, right now I believe that emotional experience is a core part of a normal functioning conscious system.

A.P.:

In another conversation I had recently with the Harvard neurophysiologist Alvaro Pascual-Leone, he said that he agrees that the mind and intelligence are emergent properties of the brain.4

Do you think that emotions are also emergent properties of the brain?

R.P.:

With all due respect for the great scientists who use the term “emergent,” I think it’s a cop-out. It’s a single word for saying, “We don’t really understand it.” When you call something emergent you’re saying, “It’s there but we don’t really know what it is or how it came into being.”

I think that emotions are capable of being fully understood, although it may not be in this lifetime. We’re starting to understand pieces of them: I am now wearing a sensor that measures signals that correspond to some kinds of feelings I have, although it doesn’t correspond to all of them. So, as we start to learn about these pieces, we start to see that they’re not merely an emergent phenomenon; they actually have components that can be measured. But am I then saying emotions are fully reducible to stuff we can measure? I won’t go that far. There might be more to them, things we don’t know how to measure or that we may not know how to measure for another century or more. It may be a long time before we understand them, maybe not until the next world, as for now we see through a glass dimly. But right now, I’d say that they are not simply emergent. There is real understanding to be sought.

A.P.:

At the Affective Computing lab that you direct here at the Media Lab, the present looks the way the rest of us imagine the future. But what does the future look like from the affective computing perspective? How do you see the future, here in your laboratory?

R.P.:

We are trying to invent the future: we bring to it certain values of what kind of future we want to make. One of the things that is in our minds constantly is, “What would we like that future to be?” I’d say, at present, what we would really like is to enable technology to be in a partnership with people, to help people have much better experiences, much better well-being, much better understanding of their own emotions, and much better ability to manage and regulate their own emotions. I am especially interested in helping people who have disabilities that make it hard to regulate and comprehend emotions, and in helping people who want to communicate those emotions. For example, we work with people who are nonspeaking, or people who simply have a hard time putting words and numbers around their feelings. They need to be better understood, and technology can serve as a kind of affective prosthesis, a tool to help them do better.

We do not want to falsely claim that we can reduce feelings to words and numbers. We make approximations with the technology, just as words or music approximate feelings. None of these replaces true feelings; rather, they are channels for communicating them. It is my hope that affective technologies will be used with people in a way that always respects and honors human dignity and freedom. Our aim is to show that respect by helping emotions to be much better understood. The ultimate purpose is much greater than simply advancing science; it is advancing the worth of all people, no matter what their level of disability or ability.

A.P.:

Thank you so much, Rosalind. We hope to see you again soon, and, if possible, back in Spain and Valencia.

Thank you!

R.P.:

Thank you. It’s been a pleasure to talk with you, and I look forward to my next trip to Spain.

Notes