A sociable robot is able to communicate and interact with us, understand and even relate to us, in a personal way. It should be able to understand itself and us in social terms. We, in turn, should be able to understand it in the same social terms—to be able to relate to it and to empathize with it. Such a robot must be able to adapt and learn throughout its lifetime, incorporating shared experiences with other individuals into its understanding of self, of others, and of the relationships they share. In short, a sociable robot is socially intelligent in a humanlike way, and interacting with it is like interacting with another person. At the pinnacle of achievement, they could befriend us, as we could them.
—Cynthia Breazeal1
Attitudes to Relationships
It is well established that people love people and people love pets, and nowadays it is relatively commonplace for people to develop strong emotional attachments to their virtual pets, including robot pets. So why should anyone be surprised if and when people form similarly strong attachments to virtual people, to robot people? In response to this question, some might ask, “But why would anyone want to?” There are many reasons, including the novelty and the excitement of the experience, the wish to have a willing lover available whenever desired, a possible replacement for a lost mate—a partner who dumped us. And psychiatrists will no doubt prescribe the use of robots to assist their patients in the recovery process—after a relationship breakup, for example—since such robots could be well trained for the task, providing live-in therapy, including sexual relations, and benefits that will certainly exceed those from Prozac and similar drugs.
I believe that one of the most widespread reasons humans will develop strong emotional attachments to robots is the natural desire to have more close friends, to experience more affection, more love. Timothy Bickmore explored the concept and implications of having computer-based intimate friendships in his 1998 paper “Friendship and Intimacy in the Digital Age,” in which he surveyed the state of friendship in our society and found it to be “in trouble.” Bickmore explains:
Many people, and men in particular, would say they are too busy for friends, given the increasing demands of work, commuting, consumerism, child care, second jobs, and compulsive commitments to television and physical fitness.
Bickmore supports this assertion by quoting from the 1985 McGill Report on Male Intimacy:
To say that men have no intimate friends seems on the surface too harsh, and it raises quick observations from most men. But the data indicate that it is not very far from the truth. Even the most intimate of friendships (of which there are very few) rarely approach the depth of disclosure a woman commonly has with many other women. Men do not value friendship. Their relationships with other men are superficial, even shallow.2
Bickmore also quotes the statistic that “most Americans (70 percent) say they have many acquaintances but few close friends,” and he then posits that “technology may provide a solution.” His argument is clear and convincing. Given the great commercial success of the rather simple technology employed in virtual pets such as the Tamagotchi and the AIBO robotic dog, and the popularity of the even simpler conversational technology employed in ELIZA and other “chatterbot” programs,* it seems clear that a combination of these technologies, with additional features for self-disclosure and simulating an empathetic personality in the robot, would provide a solid basis for a robotic virtual friend. It is of course reasonable to question why someone would have time for a robot friend but insufficient time for a human one. I believe that among the principal reasons will be the certainty that one’s robot friend will behave in ways that one finds empathetic, always being loyal and having a combination of social, emotional, and intellectual skills that far exceeds the characteristics likely to be found in a human friend.
AIBO is clearly the most advanced virtual pet to make any commercial impact thus far, but AIBO’s vision and speech capabilities are limited in comparison with the best that technology could offer today if cost were no object. Nevertheless, even with these limited capabilities, AIBO appeals to many children and adults as a social entity. Progress in creating everyday lifelike behavior patterns in robots will increase our appreciation for them, and as robotic pets and humanoid robots increasingly exhibit caring and affectionate attitudes toward humans, the effect of such attitudes will be to increase our liking for the robots. Humans long for affection and tend to be affectionate to those who offer it.
As a prerequisite of adapting to the personality of a human, robots will need to have the capacity for empathy—the ability to imagine oneself in another person’s situation, thereby gaining a better understanding of that person’s beliefs, emotions, and desires. Without empathy a satisfactory level of communication and social interaction with others is at best difficult to achieve. For a robot to develop empathy for a human being, it seems likely that the robot will need to observe that person’s behavior in different situations, then make intelligent guesses as to what is going on in that person’s mind in a given situation, in order to predict subsequent behavior. The acquisition of empathy is therefore essentially a learning task—relatively easy to implement in robots.
The psychological effect on computer users of interacting with an empathetic program was evaluated in an experimental study at Stanford University. The participants were asked to play casino blackjack on a Web site, in the virtual company of a computer character who was represented by a photograph of a human face. The computer character would communicate with the participants by displaying text in a speech bubble adjacent to its photograph. The participant and the computer character “sat” next to each other at the blackjack table, and both played against an invisible dealer. After each hand was completed, the computer character would react with an observation about its own performance and an observation about the participant’s performance.
Two versions of the program were used, one in which the computer character appeared to be self-centered and one where it appeared to be empathetic. In order to simulate self-centeredness, the character would express a positive emotion if it won the hand, by its facial expression and what it said, and a negative emotion if it lost, but it showed no interest in whether the user won or lost. The empathetic version displayed positive emotions when the participant won a hand and negative emotions when the participant lost.
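The two experimental conditions reduce to a single rule: which player’s outcome drives the character’s displayed emotion. A minimal Python sketch of that rule (the names here are hypothetical illustrations, not taken from the study’s actual software):

```python
from dataclasses import dataclass

@dataclass
class HandResult:
    agent_won: bool  # did the computer character win the hand?
    user_won: bool   # did the human participant win the hand?

def agent_emotion(result: HandResult, empathetic: bool) -> str:
    """Choose the character's displayed emotion after a hand.

    In the empathetic condition the emotion tracks the user's
    outcome; in the self-centered condition it tracks the agent's.
    """
    outcome = result.user_won if empathetic else result.agent_won
    return "positive" if outcome else "negative"
```

The entire difference between a “self-centered” and an “empathetic” character, as the study framed it, is which flag the display logic reads.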
The investigators found that when the computer character adopted a purely self-centered attitude, it had little or no effect on the participants’ reactions to its virtual personality. But when the computer character appeared to empathize with the users’ results at the blackjack table, the participants developed a liking, a trust for the character, and a perception that the character cared about their wins and losses and was generally supportive. The conclusion of the study was that “just as people respond to being cared about by other people, users respond to [computer characters] that care.”3
A robot’s social competence, and therefore the way it is perceived by humans as a social being, is inextricably linked to its emotional intelligence.* We saw in chapter 3 that the design of robot dogs benefits from the canine-ethology literature. Similarly, creating an accurate and sophisticated model of human emotion is a task that benefits from the literature on human psychology, and it is unlikely to be many years before all the key elements described in that literature have been modeled and programmed. Just imagine how powerful these combined technologies will become a few decades from now—speech, vision, emotion, conversation—when each of them has been taken to a humanlike level, a level that today is only a dream for AI researchers. The resulting combination will be an emotional intelligence commensurate with that of a sophisticated human being. The effect will be sensational.
Even though computers have such a wide range of capabilities that they are already pervasive throughout many aspects of our lives, they are not yet our intellectual and emotional equals in every respect, and they are not yet at the point where human-computer friendships can develop in a way that mirrors human-human friendships. Perhaps the strongest influence on the attitudes of those who do not believe in a future populated with virtual friends is their difficulty in relating to an artifact, an object that they know is not alive in the sense we usually employ the word. I do not for a moment expect all this to change overnight, and until computer models of emotion and personality are sufficiently advanced to enable the creation of high-quality virtual minds on a par with those of humans, it seems to me inevitable that there will be many who doubt the potential of robots to be our friends. At the present time, we are happy (or at least most of us are) with the idea of robots assembling our cars, robots mowing our lawns and vacuuming our floors, and with robots playing a great game of chess, but not with robots as baby-sitters or robots as intimate friends. Yet the concept of robots as baby-sitters is, intellectually, one that ought to appeal to parents more than the idea of having a teenager or similarly inexperienced baby-sitter responsible for the safety of their infants. The fundamental difference at the present time between this responsibility and that of building cars or playing grandmaster-level chess is surely that robots have not yet been shown to be capable baby-sitters, whereas they have been shown to excel on the assembly line and on the chessboard. What is needed to convert the unbelievers is simply the proof that robots can indeed take care of the security of our little ones better than we can. And why not? Their smoke-detection capabilities will be better than ours, and they will never be distracted for the brief moment it can take for an infant to do itself some terrible damage or be snatched by a deranged stranger.
One example of how a strong disbelief and lack of acceptance for intelligent computer technologies can change to a diametrically opposite viewpoint has been seen in the airline industry, with automatic pilots on passenger planes. When I was first an airline passenger, around 1955, we had the comfort of seeing the captain of the aircraft walking through the cabin nodding a hello to some of the passengers and stopping to chat with others while his co-pilot took the controls. There was something reassuring about this humanization of the process of flying, to know that people with such obvious authority and the nice uniforms to match were up at the front ensuring that our takeoffs and landings were safe and negotiating the plane securely through whatever storms and around whatever mountain ranges might pose some risk of danger. In those days if all airline passengers had been offered the choice between having an authoritative human pilot in charge and having a computer responsible for their safety, I feel certain that the vast majority would have preferred the human. But today, fifty-plus years later, the situation is very different. Computers have been shown to be so superior to human pilots in many situations that there have been prosecutions brought in the United States against pilots who did not engage the computer system to fly their aircraft when they should have done so. This about-face, from a lack of confidence in the capabilities of a computer to an insistence that the computer is superior to humans at the task, will undoubtedly occur in many other domains in which computer use is being planned or already implemented, including the domain of relationships. 
The time will come when instead of a parent’s asking an adolescent child, “Why do you want to date such a schmuck?” or “Wouldn’t you feel happier about going to the high school prom with that nice boy next door?” the gist of the conversation could be, “Which robot is taking you to the party tonight?” And as the acceptability of sociable robots becomes pervasive and they are treated as our peers, the question will be rewritten simply as, “Who’s taking you to the party tonight?” Whether it is a robot or a human will become almost irrelevant.
Different people will of course adapt to the emotional capacities of robots at different rates, depending largely on a combination of their attitude and their experience with robots. Those who accept that computers (and hence robots) already possess or will come to possess humanlike psychological and mental capabilities will be the first converts. But those who argue that a computer “cannot have emotions” or that robots will “never” have humanlike personalities will probably remain doubters or unbelievers for years, until well after many of their friends have accepted the concept and embraced the robot culture. Between those two camps, there will be those who are open-minded, willing to give robots a try and experience for themselves the feelings of amazement, joy, and emotional satisfaction that robots will bring. I believe that the vast majority in this category will quickly become converts, accepting the concept of robots as relationship partners for humans.
Bill Yeager suggests that this level of acceptance will not happen overnight, because the breadth and depth of the human experience currently go far beyond the virtual pets and robots made possible by the current state of artificial intelligence. As long as robots are different enough from us to be regarded as a novelty, our relationships with them will to some extent be superficial and not even approach the relationships we have with our pets. One of the factors that cause us to develop strong bonds with our (animal) pets is that they share our impermanence, our frailties, caught up as they are in the same life-death cycle as we are. Yeager believes that to achieve a level of experience comparable with that of humans, robots will have to grow up with us; acquire our experiences with us; be our friends, mates, and companions; and die with us. They will be killed in automobile accidents, perhaps suffer from the same diseases, earn university degrees, and be dumb, average, bright, or geniuses.
I take a different view. I believe that almost all of the experiential benefits that Yeager anticipates robots will need can either be designed and programmed into them or can be compensated for by other attributes that they will possess but we do not. Just as AI technologies have made it possible for a computer to play world-class chess, despite thinking in completely different ways from human grandmasters, so yet-to-be-developed AI technologies will make it possible for robots to behave as though they had enjoyed the full depth and breadth of human experience without actually having done any such thing. Some might be skeptical of the false histories that such behavior will imply, but I believe that the behavior will be sufficiently convincing to minimize the level of any such skepticism or to encourage a robot’s owner to rationalize its behavior as being perhaps influenced by a previous existence (with the same robot brain and memories but in a different robot body).
I see the resulting differences between robots and humans as being no greater than the cultural differences between peoples from different countries or even from different parts of the same country. Will robots and humans typically interact and empathize with one another any less than, say, Shetland Islanders with Londoners, or the bayou inhabitants of Louisiana with the residents of suburban Boston?
Preferring Computers to People
Many people actually prefer interacting with computers to interacting with other people. I first learned of this tendency in 1967, in the somewhat restricted domain of medical diagnosis. I was a young artificial-intelligence researcher at Glasgow University, where a small department had recently started up—the Department of Medicine in Relation to Mathematics and Computing. The head of this department, Wilfred Card, explained to me that his work on computer-aided diagnosis took him regularly to the alcoholism clinic at the Western Infirmary, one of Glasgow’s teaching hospitals. There he would ask his patients how many alcoholic beverages they usually drank each day, and his computer program would ask the same patients the same question on a different day. The statistics showed that his patients would generally confess to a significantly higher level of imbibing when typing their alcohol intake on a teletype* than when they were talking to the professor. This phenomenon, of people being more honest when communicating with computers than with humans, has also been found in other situations where questions are asked by a computer, such as in the computerized interviewing of job applicants. Another example comes from a survey of students’ drug use, conducted by Lee Sproull and Sara Kiesler at Carnegie Mellon University, in which only 3 percent of the students admitted to using drugs when the survey was conducted with pencil and paper, but when the same survey was carried out by e-mail, the figure rose to 14 percent.
A preference for interacting with a computer program that appeared sociable rather than with a person was observed a year or so after Card’s experience by Joseph Weizenbaum at MIT, when a version of his famous ELIZA program was run on a computer in a Massachusetts hospital. ELIZA’s conversational skills operated simply by turning around what a user “said” to it, so that if, for example, the user typed, “My father does not like me,” the program might reply, “Why does your father not like you?” or “I’m sorry to hear that your father doesn’t like you.”* Even though ELIZA was dumb, with no memory of the earlier parts of its conversation and with no understanding of what the user was saying to it, half of those who used it at the hospital said that they preferred interacting with ELIZA to interacting with another human being, despite having been told very firmly by the hospital staff that it was only a computer program. This stubbornness might have arisen from the fact that the patients knew they were not being judged in any way, since they would have assumed, correctly in this case, that the program did not have any judgmental capabilities or tendencies.
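ELIZA’s turn-around trick is little more than pronoun reflection plus a question template. A minimal Python sketch of the idea (far cruder than Weizenbaum’s actual keyword-ranked scripts; the function names are my own, purely illustrative):

```python
# Map first-person words to second-person and vice versa, so the
# user's statement can be echoed back from the program's viewpoint.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "mine": "yours", "myself": "yourself",
}

def reflect(statement: str) -> str:
    # Lowercase, strip trailing punctuation, and swap each pronoun.
    words = statement.lower().strip(" .!?").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(statement: str) -> str:
    # One ELIZA-style rule: turn the statement back into a question.
    return "Why " + reflect(statement) + "?"
```

Given “My father does not like me,” this sketch replies “Why your father does not like you?”, a clumsier cousin of the reply quoted above; the real ELIZA added many such templates and some grammatical patching, but, as noted, no memory and no understanding.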
The preference for interacting with computers rather than with humans helps to explain why computers are having an impact on social activities such as education, guidance counseling, and psychotherapy. As long ago as 1980, it was found that a computer could serve as an effective counselor and that its “clients” generally felt more at ease communicating with the computer than with a human counselor. Sherry Turkle describes this preference as an
infatuation with the challenge of simulated worlds…. Like Narcissus and his reflection, people who work with computers can easily fall in love with the worlds they have constructed or with their performances in the worlds created for them by others.4
Communicating information is by no means the only task for which people prefer to interact with a computer rather than with another human being. It was also noticed in early studies of human-computer interaction that people are generally as influenced by a statement made by a computer as they are when the same statement is made by a human, and that the more someone interacts with a computer, the more influential that computer will be in convincing the person that it is telling the truth.
I strongly suspect that the proportion of men preferring interaction with computers to interaction with people is significantly higher than the proportion of women, though I’m not aware of any quantitative psychology research in this area. Evidence from the McGill Report, for example, shows men to be more prone than women to eschewing human friendships, leaving men with more time and inclination than women to relate to computers. This bias, assuming that it does exist, suggests that men will always be more likely than women to develop emotional relationships with robots, but although this might be the case in the early years of human-robot emotional relationships, I suspect that in the longer term, women will embrace the idea in steadily increasing numbers. One reason, as will be discussed in chapters 7 and 8, is that women will be extremely enthusiastic about robot sex, once the practice has received good press from the mainstream media in general and women’s magazines in particular, and in their robot sexual experiences, women will, more than men, want a measure of emotional closeness with their robot. Another scenario I foresee as likely is that positive publicity about human-robot relationships will lead women who are in, or who have recently left, a bad relationship to realize that there’s more than one way of doing better. Yes, it would be very nice to start a relationship with a new man, but one can never be sure how it’s going to work out. I believe that having emotional relationships with robots will come to be perceived as a more dependable way to assuage one’s emotional needs, and women will be every bit as enthusiastic as men to try this out. In today’s world, there are many women, particularly the upwardly mobile career-minded sort, who would have more use for an undemanding robot that satisfied all of their relationship needs than they would for a man.
What is the explanation for this preference for interacting with a computer over interacting with people? The feeling of privacy and the sense of safety that it brings make people more comfortable when answering a computer and hence more willing to disclose information. And some psychologists explain the preference, and the strong affection for computers that it can engender, by describing that affection as an antidote to the difficulties many people face in forming satisfactory human relationships. While this is undoubtedly true in a significant proportion of cases, there are also many people who enjoy being with computers simply because computers are cool, they’re fun, they empower us.
Robotic Psychology and Behavior
The exploration of human-robot relationships is very much a new field of research. While the creation of robots and the simulation of humanlike emotions and behaviors in them are fundamentally technological tasks, the study of relationships between humans and robots is an even newer research discipline, one that belongs within psychology. This field has been given the name “robotic psychology,” and practitioners within the field are known as “robopsychologists.” Among those who have taken a lead in developing this nascent science are a husband-and-wife team at Georgetown University’s psychology department, Alexander and Elena Libin, who are also the founders of the Institute of Robotic Psychology and Robotherapy in Chevy Chase, Maryland.
The Libins define robotic psychology as “a study of compatibility between robots and humans on all levels—from neurological and sensory-motor to social orientation.”5 Their own research into human-robot communication and interaction, although still in its infancy, has already demonstrated some interesting results. They conducted experiments to investigate people’s interactions with NeCoRo, a sophisticated robotic cat covered with artificial fur, manufactured by the Omron Corporation and launched in 2001. NeCoRo stretches its body and paws, moves its tail, meows, and acts in various other catlike ways, getting angry if someone is violent to it and expressing happiness when stroked, cradled, and treated with lots of love. Additionally, NeCoRo’s software incorporates learning methods that cause the cat to become attracted to its owner day by day and to adjust its personality to that of its owner. One of the Libins’ earliest experiments was designed to investigate how biological factors such as age and sex, psychological factors such as a person’s past experiences with real pets and with technology, and cultural factors such as the social traditions that affect people’s communication styles influence the way a person interacts with such a robot.
This experiment found that older people get more pleasure from the responses of the robot cat (its “meows”) than do younger people when they touch it. This was attributed to the fact that younger people use cell phones, computers, and household devices more intensively than their elders do and generally experience a greater enjoyment of technology. Another finding was that men get more pleasure than do women from playing with NeCoRo, generally experiencing more excitement when the cat turns its head, opens and closes its eyes, and changes its posture. This bias seems likely to be a symptom of the fact that men, more than women, enjoy interaction with computers, though further research is necessary to test this assumption. Similarly, further experiments will be needed to explain another of the Libins’ results: that the American subjects in their experiment enjoyed touching the cat more and obtained more pleasure from the way the cat cuddled them when they were stroking it than did the Japanese subjects. This could be because cats are more popular as pets in American homes than they are in Japan, an explanation given credence by yet another of the Libins’ experimental findings, that the degree to which someone likes pets influences the way that they interact with the robotic cat and the enjoyment received from picking it up and stroking it.
Experimental results such as these will help guide robopsychologists toward a greater understanding of human-computer and human-robot interactions, by providing data to assist the robot designers of the future in their goal of making robots increasingly acceptable as friends and partners for humans. As the human and artificial worlds continue to merge, it will become ever more important to study and understand the psychology of human-robot interaction. The birth of this new area of study is a natural consequence of the development of robot science. Our daily lives bring us more frequent interaction with different kinds of robots, whether they be Tamagotchis, robot lawn mowers, or soccer-playing androids. These robots are being designed to satisfy different human needs, to help in tasks such as education and therapy, tasks hitherto reserved for humans. It is therefore important to study the behavior of robots from a psychological perspective, in order to help robot scientists improve the interactions of their virtual creatures with humans.
Much of the early research in this field has been carried out with children, as this age group is more immediately attracted to robot pets than are their parents and grandparents. One of the first findings from this research was intuitively somewhat obvious but nevertheless interesting and useful in furthering good relations between robots and humans. It was discovered that children in the three-to-five age group are more motivated to learn from a robot that moves and has a smiling face than from a machine that neither moves nor smiles. As a result of recognizing these preferences, the American toy giant Hasbro launched a realistic-looking animatronic robot doll called My Real Baby that had soft, flexible skin and other humanlike features. It could exhibit fifteen humanlike emotions by changing its facial expressions—moving its lips, cheeks, and forehead—blinking, sucking its thumb, and so forth. By virtue of these features, it could frown, smile, laugh, and cry.
The appeal to children of My Real Baby lies in its compatibility with them, a compatibility that breeds companionship. And the shape and appearance of a robot can have a significant effect on the level of this compatibility. A study at the Sakamoto Laboratory at Ochanomizu University in Japan investigated people’s perceptions of different robots—the AIBO robotic dog and the humanoid robots ASIMO and PaPeRo—and explored how these perceptions compared with the way the same group of people perceive humans, animals, and inanimate objects. One conclusion of the study was that appearance and shape most definitely matter—people feel more comfortable when in the company of a friendly-shaped, humanlike robot than when they are with a robotic dog.
In chapter 3 we discussed the use of ethology, the study of animals in their natural setting, as a basis for the design and programming of robot animals. Since humans are also a species of animal, it would seem logical to base the design and programming of humanoid robots on the ethology of the human species, but unfortunately the ethological literature for humans is nowhere near as rich as it is for dogs, and what literature there is on human ethology is mainly devoted to child behavior. For this reason the developers of Sony’s SDR humanoid robot have adapted the ethological architecture used in the design of AIBO, an architecture that contains components for perception, memory, and the generation of animal-like behavior patterns, adding to it a thinking module* to govern its behavior. SDR also incorporates a face-recognition system that enables the robot to identify the face of a particular user from all the faces it has encountered, a large-vocabulary speech recognition system that allows it to recognize what words are being spoken to it, and a text-to-speech† synthesizer allowing it to converse using humanlike speech.
Emotions in Humans and in Robots
Building a robot sufficiently convincing to be almost completely indistinguishable from a human being—a Stepford wife, but without her level of built-in subservience—is a formidable task that will require a combination of advanced engineering, computing, and artificial-intelligence skills. Such robots must not only look human, feel human, talk like humans, and react like humans; they must also be able to think, or at least to simulate thinking, at a human level. They should have and should be able to express their own (artificial) emotions, moods, and personalities, and they should recognize and understand the social cues that we exhibit, thereby enabling them to measure the strengths of our emotions, to detect our moods, and to appreciate our personalities. They should be able to make meaningful eye contact with us and to understand the significance of our body language. From the perspective of engendering satisfying social interaction with humans, a robot’s social skills—the use of its emotional intelligence—will probably be even more important than its being physically convincing as a replica human.
Lest I be accused of glossing over a fundamental objection that some people have to the very idea that machines can have emotions, I shall here summarize what I consider to be the most important argument supporting this notion.* Certainly there are scholars whose views on this subject create doubts in the minds of many: How can a machine have feelings? If a machine does not have feelings, what value can we place on its expressions of emotion? What is the effect on people when machines “pretend” to empathize with their emotions? All of these doubts and several others have attracted the interest of philosophers for more than half a century, helping to create something of a climate of skepticism.
To my mind all such doubts can be assuaged by applying a complementary approach like that of Alan Turing when he investigated the question, “Can machines think?”† Turing is famous in the history of computing for contributions ranging from leading the British team that cracked the German codes during World War II to coming up with the solution to a number of fundamental issues on computability. But it was his exposition of what has become known as the “Turing test” that has made such a big impact on artificial intelligence and which enables us, in my view, to answer all the skeptics who pose questions such as, “Do machines have feelings?”
The Turing test was proposed as a method of determining whether a machine should be regarded as intelligent. The test requires a human interrogator to conduct typed conversations with two entities and then decide which of the two is human and which is a computer program. If the interrogator is unable to identify the computer program correctly, the program should be regarded as intelligent. The logical argument behind Turing’s test is easy to follow—conversation requires intelligence; ergo, if a program can converse as well as a human being, that program should be regarded as intelligent.
To summarize Turing’s position, if a machine gives the appearance of being intelligent, we should assume that it is indeed intelligent. I submit that the same argument can equally be applied to other aspects of being human: to emotions, to personality, to moods, and to behavior. If a robot behaves in a way that we would consider uncouth in a human, then by Turing’s standard we should describe that robot’s behavior as uncouth. If a robot acts as though it has an extroverted personality, then with Turing we should describe it as being an extrovert. And if, like a Tamagotchi, a robot “cries” for attention, then the robot is expressing its own form of emotion in the same way as a baby does when it cries for its mother. The robot that gives the appearance, by its behavior, of having emotions should be regarded as having emotions, the corollary of this being that if we want a robot to appear to have emotions, it is sufficient for it to behave as though it does. Of course, a robot’s programmed emotions might differ in some ways from human emotions, and robots might even evolve their own emotions, ones that are very different from our own. In such cases, instead of understanding, through empathy and experience, the relationship of a human emotion to the underlying causes, we might understand nothing about robotic emotions except that on the surface they resemble our own. Some people will not be able to empathize with a robot that is frowning or grinning—they will be people who interpret the robot’s behavior as nothing more than an act, a performance. But as we come to recognize the various virtual emotions and experiences that lie behind a robot’s behavior, we will feel less and less that a robot’s emotions are artificial.
Our emotions are inextricably entwined with everything we say and do, and they are therefore at the very core of human behavior. For robots to interact with us in ways that we appreciate, they, too, must be endowed with emotions, or at the very least they must be made to behave as though they have emotions. Sherry Turkle has found that children deem simple toys, such as Furby, to be alive if they believe that the toy loves them and if they love the toy. On this basis the perception of life in a humanoid robot is likely to depend partly on the emotional attitude of the user. If users believe that their robot loves them, and that they in turn love their robot, the robot is more likely to be seen as alive. And if a robot is deemed to be alive, it is more likely that its owner will develop increased feelings of love for the robot, thereby creating an emotional snowball. But before robot designers can mimic emotional intelligence in their creations, they must first understand human emotions.
Human emotions are exhibited in various ways—in the changes in our voice, in the changes to our skin color when we blush, in the way we make or break eye contact—and robots therefore need similar cues to help express their emotions. Just as face and sound are used as a matter of course, instinctively and subconsciously, by humans communicating with other humans, so similar forms of communication are being exhibited by emotionally expressive robots to communicate their simulated emotions to their human users.
Many studies have shown that the activity of the facial muscles in humans is related to our emotional responses. The muscle that draws up the corners of the lips when we smile* is associated with positive experiences, while the muscle that knits and lowers the brows when we frown† is associated with negative ones. Much of today’s research into the use of facial expression in computer images and robots stems from a coding system developed during the 1970s by Paul Ekman, a psychologist at the University of California at San Francisco. Ekman classified dozens of movements of the facial muscles into forty-four “action units”—components of emotional expression—each combination of these action units corresponding to a different variation on a basic facial expression such as anger, fear, joy, or surprise. It has been shown as a result of Ekman’s work that the creation of emotive facial expressions is relatively easy to simulate in an animated character or a robot, while research at MIT has revealed that humans are capable of distinguishing even simple emotions in an animated character by observing the character’s facial expressions. The recognition, by a machine, of these various action units can therefore be converted to the recognition of a human emotional state. And the simulation of a combination of action units becomes the simulation, in a robot or on a computer screen, of a human emotion. Yes, this is an act on the part of the robot, but as time goes on, the act will become increasingly convincing, until it is so good that we cannot tell the difference.
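The mapping from action units to emotions described above can be sketched in code. This is a minimal illustration, not Ekman's actual coding tables: the action-unit numbers follow the Facial Action Coding System conventions (e.g., AU6 is the cheek raiser, AU12 the lip-corner puller), but the particular signature sets and the overlap-scoring rule are simplifications invented for this example.

```python
# Illustrative mapping of Ekman-style action units (AUs) to basic emotions.
# The AU numbers follow FACS conventions; the signature combinations shown
# here are simplified examples, not a complete or authoritative table.

EMOTION_SIGNATURES = {
    "joy":      {6, 12},        # cheek raiser + lip-corner puller (a smile)
    "surprise": {1, 2, 5, 26},  # raised brows, widened eyes, jaw drop
    "anger":    {4, 5, 7, 23},  # brow lowerer, lid tightener, lip tightener
    "sadness":  {1, 4, 15},     # inner-brow raiser + lip-corner depressor
}

def recognize_emotion(detected_aus):
    """Return the emotion whose AU signature best overlaps the detected AUs."""
    best, best_score = "neutral", 0.0
    for emotion, signature in EMOTION_SIGNATURES.items():
        overlap = len(signature & detected_aus) / len(signature)
        if overlap > best_score:
            best, best_score = emotion, overlap
    return best
```

Running the recognizer on the set `{6, 12}` (a smile) yields `"joy"`, while an empty set of detected action units falls through to `"neutral"`. A real system would first need computer vision to extract the action units from a face image, which is the harder half of the problem.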
The study of emotions and other psychological processes is a field that predates the electronic computer, providing researchers in robotics with a pool of research into which they can tap for ideas on how best to simulate these processes in robots. If we understand how a particular psychological process works in humans, we will be able to design robots that can exhibit that same process. And just as being human endows us with the potential to form companionable relationships, this same potential will be designed into robots to help make them sociable. Some would argue that robot emotions cannot be “real” because they have been designed and programmed into the robots. But is this very different from how emotions work in people? We have hormones, we have neurons, and we are “wired” in a way that creates our emotions. Robots will merely be wired differently, with electronics and software replacing hormones and neurons. But the results will be very similar, if not indistinguishable.
An example of a robot in which theories from human psychology have been synthesized is Feelix, a seventy-centimeter-tall humanoid robot designed at the University of Århus and built with Lego bricks. A user interacts with Feelix by touching its feet. One or two short presses make Feelix surprised if they immediately follow a period of inactivity, while shorter, more intense presses make Feelix afraid. A moderate level of stimulation, achieved by gentle, long presses on its feet, makes Feelix happy; but if the long presses become more intense and sustained, Feelix becomes angry, reverting to a happier state and a sense of relief only when the anger-making stimulation ceases.
Feelix was endowed with five of the six “basic emotions” identified by Paul Ekman: anger, fear, happiness, sadness, and surprise.* All five emotions have the advantage that they are associated with distinct corresponding facial expressions that are universally recognized, making it possible to exhibit the robot’s emotions partly by simulating those facial expressions. Anger, for example, is exhibited by having Feelix raise its eyebrows and moderately opening its mouth with its upper lip curved downward and its lower lip straight, while happiness is shown by straight eyebrows and a wide closed mouth with the lips bent upward. When it feels no emotion—that is, when none of its emotions are above their threshold level, Feelix displays a neutral face. But when it is stimulated in various ways, Feelix becomes emotional and displays the appropriate facial expression.
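Feelix's behavior, as described above, amounts to a small rule set plus a threshold test: press patterns raise emotion activation levels, and the robot displays the strongest emotion only if its level exceeds a threshold, otherwise showing a neutral face. The sketch below captures that structure; the numeric levels, thresholds, and cutoffs are invented for illustration, and sadness (which in Feelix arises from prolonged lack of stimulation) is omitted from the press rules.

```python
# A sketch of Feelix-style emotion selection: foot-press patterns map to
# emotion activation levels, and the strongest emotion is displayed only
# if it exceeds a threshold. All numbers here are invented illustrations.

THRESHOLD = 0.5

def classify_presses(count, duration_s, intensity, after_inactivity):
    """Map a press pattern onto emotion activation levels (0.0 to 1.0)."""
    levels = {"surprise": 0.0, "fear": 0.0, "happiness": 0.0,
              "anger": 0.0, "sadness": 0.0}
    if count <= 2 and duration_s < 0.5 and after_inactivity:
        levels["surprise"] = 0.8      # a few short presses after quiet
    elif duration_s < 0.5 and intensity > 0.7:
        levels["fear"] = 0.9          # short, intense presses
    elif duration_s >= 1.0 and intensity <= 0.7:
        levels["happiness"] = 0.7     # gentle, long presses
    elif duration_s >= 1.0 and intensity > 0.7:
        levels["anger"] = 0.9         # sustained, intense presses
    return levels

def displayed_emotion(levels):
    """Return the emotion to display, or neutral if none crosses threshold."""
    emotion, level = max(levels.items(), key=lambda kv: kv[1])
    return emotion if level > THRESHOLD else "neutral"
```

The threshold test is what produces the neutral face mentioned above: stimulation that raises no emotion past its threshold leaves the robot expressionless.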
In order to determine how well humans can recognize emotional expressions in a robot’s face, Feelix was tested on two groups of participants, one made up of children in the nine-to-ten age range and one with adults aged twenty-four to fifty-seven. The tests revealed that the adults correctly recognized Feelix’s emotion from its facial expression in 71 percent of the tests, with the children slightly less successful at 66-percent recognition. These results match quite well the recognition levels demonstrated in earlier tests, using photographs of facial expressions, that had been reported in the literature on emotion recognition, providing evidence that the simulation of expression of the basic emotions is not something from science fiction but can already be designed into robots. Those who regard an acted-out emotion as just that, an act, will find it difficult to believe that the emotion is actually being experienced by the robot. But again, as the “acting” improves, so any disbelief will evaporate.
Robot Recognition of Human Emotions
To interact meaningfully with humans, social robots must be able to perceive the world as humans do, sensing and interpreting the same phenomena that humans observe. This means that in addition to the perception required for physical functions such as knowing where they are and avoiding obstacles, social robots must also possess relationship-oriented perceptual abilities similar to those of humans, perception that is optimized specifically for interacting with humans and on a human level. These perceptual abilities include being able to recognize and track bodies, hands, and other human features; being capable of interpreting human speech; and having the capacity to recognize facial expressions, gestures, and other forms of human activity.
Even more important than its physical appearance and other physical attributes in engendering emotional satisfaction in humans will be a robot’s social skills. Possibly the most essential capability in robots for developing and sustaining a satisfactory relationship with a human is the recognition of human emotional cues and moods. This capability must therefore be programmed into any robot that is intended to be empathetic. People are able to communicate effectively about their emotions by putting on a variety of facial expressions to reflect their emotional reactions and by changing their voice characteristics to express surprise, anger, and love, so an empathetic robot must be able to recognize these emotional cues.
Robots who possess the capability of recognizing and understanding human emotion will be popular with their users. This is partly because, in addition to the natural human desire for happiness, a user might have other emotional needs: the need to feel capable and competent, to maintain control, to learn, to be entertained, to feel comfortable and supported. A robot should therefore be able to recognize and measure the strength of its user’s emotional state in order to understand a user’s needs and recognize when they are being satisfied and when they are not.
The communication of our emotions, a process psychologists call “affective communication” (the emotions themselves being “affect”), is a subject that has been well investigated. It is also a subject of great importance in the design of computer systems and robots that detect and even measure the strength of human emotions and in systems that can communicate their own virtual emotions to humans. The Media Lab at MIT has been investigating affective communication since the mid-1990s, in research led by Rosalind Picard, whose book Affective Computing has become a classic in this field. Affective computing involves giving robots the ability to recognize our emotional expressions (and the emotional expressions of other robots), to measure various physiological characteristics in the human body, and from these measurements to know how we are feeling.
Inexpensive and effective technologies that enable computers to measure the physiological indicators of emotion also allow them to make judgments about a user’s emotional state. Thanks largely to Picard, detecting and measuring human emotion has become a hot research topic in recent years. By measuring certain components of the human autonomic nervous system,* it is already possible for computers to distinguish a few basic emotions. A simple example of such measurements is galvanic skin response—the electrical conductivity of the skin. This has long been known as an indicator of stress and has therefore been employed in some lie detectors, but more recently it has also been used as a metric for helping to recognize certain emotional states other than stress. Heart rate is another easy-to-measure example—it is known to increase most during fear but less when a person is experiencing anger, sadness, happiness, surprise, and disgust, the last of these eliciting only the barest minimum of a heart-rate change. Yet another example is blood pressure, which increases during stress and decreases during relaxation, the biggest increase again being associated with anger.
It is a relatively simple matter to measure human blood pressure, respiration, temperature, heart rate, skin conductivity, and muscle tension using what are currently regarded as sophisticated items of electronic equipment. Research into “affective wearables,” usually items of clothing and other attachments that may be worn unobtrusively and come with electronic sensors for taking such measurements, will inevitably lead to the development of technologies that can monitor all these vital signs without our even noticing that we’re wearing them. By transmitting the measured data, affective wearables will thus enable robots to recognize and quantify at least some of our emotions, allowing them to judge our moods, based on our displays of emotion as they appear to the electronic monitors. For example, by combining the data from only four different measures—respiration, blood-volume pulse, skin conductance, and facial-muscle tension—Rosalind Picard, Elias Vyzas, and Jennifer Healey developed an emotion-recognition system capable of 81-percent accuracy when distinguishing among eight emotions: anger, hate, grief, platonic love, romantic love, joy, reverence, and the neutral state (no emotion).
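The spirit of such a four-feature emotion recognizer can be sketched as a nearest-prototype classifier: each emotion is represented by a typical feature vector, and an incoming measurement is assigned to whichever prototype it lies closest to. The prototype values below are invented for illustration; Picard's actual system learned its patterns from labeled physiological recordings and used more sophisticated feature extraction.

```python
# A nearest-prototype sketch of physiological emotion recognition.
# Feature order: (respiration rate, blood-volume pulse, skin conductance,
# facial-muscle tension). All prototype numbers are invented; a real
# system learns them from labeled sensor recordings.
import math

PROTOTYPES = {
    "anger":   (18.0, 0.9, 7.5, 0.8),
    "joy":     (15.0, 0.7, 5.0, 0.4),
    "grief":   (11.0, 0.5, 3.0, 0.6),
    "neutral": (13.0, 0.6, 2.0, 0.2),
}

def classify(sample):
    """Assign a 4-feature physiological sample to the nearest prototype."""
    return min(PROTOTYPES, key=lambda e: math.dist(sample, PROTOTYPES[e]))
```

In practice the features would be normalized to comparable scales before computing distances, since raw respiration rates and skin-conductance values differ by orders of magnitude.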
Additional help in detecting human emotion can come from auditory and visual cues. Facial-recognition technology is making dramatic advances, spurred on by fears of terrorism—the technology that today successfully identifies faces seen on a closed-circuit TV camera will tomorrow be identifying not only the person behind the face but also that person’s mood. Similarly with voices. Voice recognition has taken on increased importance as a means of identification for security purposes, turning the sound characteristics of the human voice into measurable quantities that can act as an additional aid to identification. Iain Murray and John Arnott have investigated the vocal effects associated with several basic emotions, establishing links between voice characteristics and emotion that make possible the design of a voice-based emotion recognizer. This particular slant on the technology comes from the measurement of the pitch of a voice, the speed with which words are uttered, the frequency range of the voice, and changes in volume. Someone who is sad or bored will typically exhibit slower, lower-pitched speech, while a person who is afraid, angry, or joyous will speak louder and faster, with more words spoken at higher frequencies.
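Those qualitative vocal cues translate directly into simple rules of thumb, sketched below. The cutoff numbers are invented; Murray and Arnott's findings give directions of change (louder, faster, higher-pitched), not these exact values, and a rule this coarse can only separate broad classes of emotion, not individual ones.

```python
# A rule-of-thumb sketch of voice-based emotion detection using the three
# cues described above: pitch, speech rate, and volume. The numeric
# cutoffs are invented illustrations, not measured thresholds.

def vocal_emotion(pitch_hz, words_per_min, volume_db):
    """Coarsely classify a speaker's state from prosodic features."""
    high_pitch = pitch_hz > 200
    fast = words_per_min > 160
    loud = volume_db > 65
    if high_pitch and fast and loud:
        return "aroused"   # fear, anger, or joy: further cues needed
    if not high_pitch and not fast:
        return "subdued"   # sadness or boredom
    return "uncertain"
```

Note that the loud-fast-high cluster cannot by itself distinguish fear from anger from joy; real recognizers combine prosody with other channels, such as facial expression, to break that tie.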
In summary, the creation of natural and efficient communication between human and robot requires that each display emotions in ways that the other is able to recognize and assess. But the emotionally intelligent robot must not only be able to recognize emotions in humans and to assess the strength of those emotions, it should also demonstrate that it recognizes the emotions displayed by its human. As the development of emotion-recognition and emotion-simulation technologies advances, so will the development of emotional intelligence in robots, and their relationships with humans will come to mirror a healthy human-human relationship.
Three Routes to Falling in Love with Robots
There are three distinct progressions that I believe will lead enormous numbers of humans to develop affection for and fall in love with robots. One route will develop in a humanlike loving way, as robots become more and more human in appearance and personality, encouraging us to like and to love them. This is a natural extension of normal human loving and is the easiest of the three routes to comprehend. And, just as with the Tamagotchi, the human tendency to nurture will help to engender in us feelings of love for robots.
Another route is via a love for machines and technology per se, sometimes called “technophilia.” People who “love” computers and machines do so in different ways. There are those who rush out and buy every new technological gizmo the moment it is put on sale—theirs is a love for all new technology. There are those for whom the technology converts into some other form of emotional or even erotic stimulation, such as pornography on the Internet or on a DVD. There are the technophiles, usually programmers but also those who love pressing buttons to make their gizmos do weird and wonderful things; theirs is a love of control, whether it is control by writing the programs that instruct their computers what to do or the much simpler form of control achieved by pressing the buttons on devices that have already been programmed. And the act of programming has itself been compared to sex, in that programming is a form of control, of bending the computer or the gadget to the will of the programmer, forcing the computer to behave as one wishes—domination.
A love of technology and its benefits was at first largely the province of the technically more adept, the economically upward mobile, and, predominantly, of adolescents and those in their twenties and thirties. As the cost of electronics has come down, enabling consumer-electronics manufacturers to create electronic toys and other products especially for children, so the age range of technophiles has widened considerably. Nowadays, with primary-school children and even preschoolers finding themselves the owners of a plethora of electronic products, we are creating future generations of adults for most of whom the latest gizmos will seem perfectly normal rather than amazing. And so it will be with robotics. Those who are born surrounded by electronics will grow up eager for and receptive to whatever new electronic inventions become available during their lifetimes. The love that yesterday’s children and young adults demonstrated for their Furbies and Tamagotchis will be the basis for the adults of the future to find it perfectly normal first to love their interactions with robots and then to love the robots themselves.
The evolution of loving relationships between humans and robots will be yet another example of how technology changes the way we live in dramatic, even mind-boggling ways. One of the most glaring examples from the twentieth century is television. Who at the time of the First World War would have imagined that one day they would be able to look at a box that showed something happening, at that very moment, on the other side of the world, or even on the moon? Who at the time of the Second World War would have believed that by the end of the century telephone booths in the street would fast become redundant, because just about everyone would be walking around with their very own wireless telephone in their pocket? Who at the time of the Vietnam War would have expected handwritten letters to gradually go out of fashion in the United States and many other countries as more and more people took to the computer as their primary or sole means of writing letters and even sending them, at virtually no cost, to their friends and relatives, in no more time than it takes to click a computer mouse? And which did you, dear reader, use more recently as a source of information, a reference book or an Internet search engine such as Google?
The entertainment industry has been reshaped more than most by the tools of technology. Animation, made so popular for generations of children (and adults) by Walt Disney and originally hand-drawn, painstakingly, by teams of artists, is nowadays created automatically by superfast computers, costing animators their jobs by the thousands. Music that in my youth came into our homes on gramophone records that rotated at 78 revolutions per minute, and later at 45 and then 33 rpm, the slower speeds allowing more music to be stored on a single disc, now comes to our handheld boxes by “download” via the Internet, making available to us a colossal collection of pop, rock, jazz, classical, and all other types of music without our having any need to go to a store. And then there are video games, probably the biggest-ever product success in the entertainment industry—games that today offer the user the most amazing sights, sounds, and action, all in an easily portable package. Other video-based products such as DVDs and their precursors—videocassettes—have also created huge changes in the way we entertain ourselves, enabling us to have the movies of our choice in our homes, to watch and watch again as often as we wish. (And the genre that has achieved the biggest financial success in that particular technological field is pornographic movies, because sex seems always to find a way to reach the marketplace. Sex sells.)
But back to robots. A third route in the evolution of love for robots will arise out of emotions similar to those that have made Internet relationships so hugely popular. Let us recall Deb Levine’s words, quoted in chapter 1:
For some people, online attraction and relationships will become a valid substitute for more traditional relationships. Those who are housebound or rurally isolated and those who are ostracized from society for any number of different reasons may turn to online relationships as their sole source of companionship.6
The same could equally be said of human-robot relationships, and some will find this worrying. Most people who develop emotional attachments to robots, and to whom robots exhibit their own demonstrations of love, will have in their mind the knowledge that the robot is just that, a robot, and not a human being. This “you are only a robot” syndrome will be some kind of boundary across which a human must pass to feel love to its fullest extent for a robot, though in the case of certain groups within our society, crossing that boundary will seem perfectly natural. Those who prefer to relate to computers rather than to humans will doubtless find it no problem at all. Nor will many nerds, many social outcasts, and those who will be only too happy to find someone, almost anyone, who exhibits affection for them. But what about the more normal members of the population? What will it take for them to cross this boundary? One could argue that the first requirement will be incredibly good engineering, so that robots are as convincing in their appearance and actions as Stepford wives—almost indistinguishable from humans. But as we saw in chapter 3, the Tamagotchi experience and the reactions of the owners of AIBO pet dogs indicate that very strong emotional attachments can develop in humans even when the object of such affection is not humanlike in appearance.
This aside into the world of Internet romances carries another important point for my argument on the subject of love with robots. One conclusion that can safely be drawn from the phenomenon of falling in love via the Internet, as with a pen pal, is that it is not a prerequisite for falling in love ever to be in the presence of the object of one’s love. The falling-in-love process can be conducted entirely in the physical absence of the loved one. This is consistent with, and a much stronger form of, the phenomenon noted by Robert Zajonc.* Of course there are photographs and video images of the loved one that can be received via the Internet, and the loved one’s voice can be heard via the Internet or the telephone, but their physical presence is simply not necessary.
Now consider the following situation: At the other end of an Internet chat line, complete with a webcam to transmit its image, a microphone to carry the sound of its voice, and a smell-detection-and-transmission system to convey its artificial bodily scent to you, there is a humanlike robot endowed with all of the artificially intelligent characteristics that will be known to researchers by the middle of this century. You sit at home looking at this robot, talking to it, and savoring its fragrance. Its looks, its voice, and its personality appeal to you, and you find its conversation stimulating, entertaining, and loving. Might you fall in love with this robot? Of course you might. Why shouldn’t you? We have already established that people can fall in love without being able to see or hear the object of their love, so, clearly, being able to see it, finding its looks to your liking, being able to hear its sexy voice, and being physically attracted by its simulated body fragrance can only strengthen the love you might develop in the absence of sight, sound, and smell.
And if you do fall in love with a robot, what will be the nature of this love and how will it differ from the way you feel about the love of your life in the world as it is today?
As noted earlier, one important difference will be that robots are going to be replicable, even to the point of their personality, their memories, and their emotions. Those readers who are frequent computer users will know that it is good practice to back up your work on the computer just in case of a disaster that causes the loss of some or all of your data. Similarly, it will become common practice for the knowledge, personality, and emotion parameters—and all the other software aspects of a robot’s “brain”—to be backed up on a frequent basis. By midcentury this process will almost certainly be fully automatic, so that neither the robot nor its owner needs to do anything. At regular intervals the contents of the robot’s brain, its consciousness, its emotions, will all be transmitted to a secure memory bank. If, heaven forbid, a robot is damaged or destroyed and its owner wishes an exact copy, the physical characteristics can be replicated in the robot factory, and then the contents of the brain, predamage, can be downloaded into the new copy of the original robot. This capability creates one enormous difference between the love one feels for another human being and the love that will be felt for robots. If you love someone enough, you will willingly undertake any risk, or knowingly sacrifice your own life, in order to save theirs. This is only partly because of the strength of your love for them. It is also partly because they are irreplaceable. But in the case of love for a robot, it will be as though death simply does not exist as a concept that can be applied to the object of your love. And if it can never truly die, because it can always be brought back to life in an exact replica of its original body, there will never be any need for a human to sacrifice their own life for their robot or to take a major risk on its behalf.
Another important difference is that robots will be programmable never to fall out of love with their human, and they will be able to reduce the likelihood of their human falling out of love with them. Just as with the central heating thermostat that constantly monitors the temperature of your home, making it warmer or cooler as required, so your robot’s emotion-detection system will constantly monitor the level of your affection for it, and as that level drops, your robot will experiment with changes in behavior aimed at restoring its appeal to you to normal.
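The thermostat analogy above describes a simple feedback loop, which can be sketched as follows. Everything here is hypothetical: the set point, the behavior repertoire, and the idea that affection can be read off as a single number are all illustrative simplifications of what such a monitoring system might do.

```python
# A thermostat-style sketch of the affection-monitoring loop described
# above: the robot periodically samples its estimate of the user's
# affection and, when it falls below a set point, experiments with a
# change of behavior. All names and numbers are hypothetical.
import random

SET_POINT = 0.7   # target affection level, on an invented 0-1 scale
BEHAVIORS = ["more attentive", "more humorous",
             "more affectionate", "give space"]

def adjust(affection, current_behavior, rng=random):
    """Return the behavior to adopt, given the latest affection reading."""
    if affection >= SET_POINT:
        return current_behavior       # relationship healthy: no change
    return rng.choice(BEHAVIORS)      # below set point: try something new
```

Like a thermostat, the loop does nothing while the reading stays above the set point; unlike a thermostat, it has no single corrective action and so must experiment, keeping whichever behavior change restores the reading.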
Robot Personalities and Their Influence on Relationships
Personality is one of the most important factors that drive the processes of falling in love and falling in lust, so before we examine the specific causes of falling in love with robots, we shall first consider some of the significant research on robot personality that has been conducted during the past decade or so.
Robot personality is a subject that some readers might regard with skepticism—how can a robot have a personality? In the mid-1990s, Clifford Nass and some of his colleagues in the Department of Communication at Stanford University showed it to be relatively straightforward to create humanlike characteristics in computers—computer personalities—using a set of cues drawn from the extensive literature on the subject of human personality. In psychological terms, personality is the set of distinctive qualities that distinguish individuals. Nass and his group have conducted more than thirty-five experiments to investigate some of these qualities, to determine how they can be simulated in computer programs and how such simulations compare with the corresponding trait in humans.
One of the experiments carried out by Nass’s group is related to the team element of a partnership relationship. Couples act as a team in myriad ways: She might wash the dishes while he dries, she might do the laundry while he does the gardening, he might be the principal breadwinner while she devotes more time to taking care of the children—or vice versa. It is not only the drudge tasks that are shared in a partnership relationship, it is also the more pleasant ones, and in both cases the sharing of responsibilities will often act as a bonding factor, helping to sustain the relationship. A study of computers as teammates is therefore of considerable interest in estimating how a computer-human dyad might also function as a team.
Nass and Byron Reeves based their study into computers as teammates on social-psychology experiments showing that there are two key factors in a team relationship—group identity and group interdependence. Group identity simply means that a team must have something to identify it by, often just a name such as “Mr. and Mrs. Bloggs,” or “the Smith family,” or “Christine and David.” The importance of group interdependence lies in the fact that the behavior of each member of a team can affect all the other members.*
The teams created for this study each consisted of a human and a computer, with the team identified by a color and the members of the team sporting a ribbon of that color and a notice saying “blue team,” for instance, on that team’s computer. Half of the people in the experiment were told they were on the blue team. They were also told that their performance would be graded and that its final evaluation would depend not only on their own efforts but also on those of the blue team’s computer. The other half of the people in the experiment were treated as though they were not on the same team as the computer with which they were collaborating. These subjects also wore a blue ribbon, but their computer was dressed in green and carried a notice affirming that it was a “green computer.” The experimenters made no mention to the humans in the second group of any collaboration between them and the computer, in order to avoid creating an association of teamwork in their minds. These subjects were told that their performance would be graded solely on the basis of their own work with the computer—that the computer was simply there to help.
The participants were set to work on a problem-solving task commonly employed in experimental psychology, a task known as the Desert Survival Problem.* When the participants first attacked the problem, they would try to solve it by themselves, creating their own ranking for the survival items. They then went into another room, one at a time, where they worked on the task in collaboration with their assigned computer. They all exchanged information with their computer about each of the twelve survival items, and, if they wished, the participants could then change their initial rankings. Once the human participants had interacted with their computer, they would be sent into a third room, where they wrote out their final rankings and responded to questions about their interaction with their computer, questions such as “How similar was the computer’s approach to your own approach in evaluating the twelve items?” and “How helpful were the computer’s suggestions?”
The results of this experiment revealed a lot about how people perceive team relationships. When the humans believed that they were on the same team as the computer, they rated the computer as being more like themselves than did the participants who believed they were not teamed with the computer they worked with. These “teamed” participants also thought that their “teammate” computer had adopted a problem-solving style more similar to their own and that their computer agreed more completely with their own ranking of the items. The teamed participants also tended to believe that the information given to them by the computer was more relevant and helpful and that it was presented in a friendlier manner than did the participants who did not believe they were members of a human-computer team—all this despite the fact that the information was identical and was presented in an identical manner in both cases. Other indications of relationship building between the human participants and their computers were that the teamed participants tried harder to reach an agreement with their computer on the rankings and were more receptive to their teammate’s suggestions and influences.
One of the most important conclusions of this study was to confirm the work of earlier psychologists who “have long been excited by how little it takes to make people feel part of a team, and by how much is gained when they do.” Reeves and Nass had extended this earlier research by showing that feelings of being part of a team are powerful enough to affect people’s interactions with computers, once they believe that their own success depends also on the success of the computer.
This research was groundbreaking work at that time, but even more remarkable than the ease with which their goal was accomplished was what the experimenters learned when they tested two simple computer personalities, each designed into a program that collaborated with a human user on the Desert Survival Problem. One of these computer personalities was “dominant,” using strong language in its assertions and commands, displaying a high level of confidence when communicating with the human test subjects, and leading off the dialogues with its human collaborators. The other computer personality was “submissive,” using weaker language, in which assertions were replaced by suggestions and commands by questions, and inviting or allowing the human collaborator to start each dialogue. It was found that those humans who themselves had more dominant personalities* enjoyed interacting with the dominant computer more than they did with the submissive one, while those with a more submissive personality preferred interacting with the submissive computer. Furthermore, not only did the human subjects prefer to interact with a computer similar in personality to their own, but they also experienced greater satisfaction in their own performance on the problem-solving task when collaborating with the similar computer. These results led to the conclusion that not only do humans prefer to interact with other humans of similar personality, but they also prefer to interact with computers that have similar (virtual) personalities to their own.
Other experiments conducted by Nass and his group confirmed that humanlike behavior by a computer enhances the user’s experience of the interaction and makes the computer more likable. One example of this phenomenon is the ability of computers to increase users’ liking of them by means of flattery, by matching the users in personality, and through the use of humor, which has been found to lead to assessments of them as being more likable, competent, and cooperative than computers that do not exhibit any humor. Another example came from highly expressive teaching programs that were found to increase students’ feelings of trust in the programs because the students perceived them as helpful, believable, and concerned.
Designing Robot Personalities
Designing a robot with an appealing personality is an obvious goal, one that would allow you to go into the robot shop and choose from a range of personalities, just as you will be able to choose from a range of heights, looks, and other physical characteristics. One interesting question is whether it will be necessary to program robots to exhibit some sort of personality friction for us to feel satisfied by our relationships with them and to feel that those relationships are genuine. Certainly it would be a very boring relationship indeed in which the robot always performed in exactly the manner expected of it by its relationship partner, forever agreeing with everything that was said to it, always carrying out its human’s wishes to the letter and in precisely the desired manner. A Stepford wife. Perfection. No, that would not be perfection, because, paradoxically, a “perfect” relationship requires some imperfections of each partner to create occasional surprises. Surprises add a spark to a relationship, and it might therefore prove necessary to program robots with a varying level of imperfection in order to maximize their owner’s relationship satisfaction. Many people have relatively stable personalities and would therefore probably appreciate robots whose own personality and behavior exhibited some, but not a huge amount of, perturbation. This variable factor in the stability of a robot’s personality and emotional makeup is yet another of the characteristics that can be specified when ordering a robot and that can be modified by its owner after purchase. So whether it is mild friction that you prefer or blazing arguments on a regular basis, your robot’s “friction” parameter can be adjusted according to your wishes. Your robot will be programmed to recognize and measure friction when it is there, by the nature of your conversation with it and the tone of your voice, and to increase or decrease the level of friction according to your preferences.
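The adjustable “friction” parameter imagined above can be thought of as a simple feedback loop. The sketch below is purely illustrative—no real robot exposes such an interface, and every class and parameter name here is invented for the purpose of the example.

```python
# A minimal sketch of the hypothetical "friction" parameter described in the
# text. All names are invented for illustration; this is not a real robot API.

class FrictionController:
    """Nudges a robot's level of disagreement toward its owner's preference."""

    def __init__(self, preferred_friction=0.2, adjustment_rate=0.1):
        self.preferred = preferred_friction    # 0.0 = always agreeable, 1.0 = blazing arguments
        self.adjustment_rate = adjustment_rate
        self.current = preferred_friction      # friction level the robot will exhibit next

    def observe(self, measured_friction):
        """Update after a conversation.

        measured_friction is a 0-to-1 estimate taken from the nature of the
        conversation and the tone of the owner's voice, as the text suggests.
        """
        error = self.preferred - measured_friction
        self.current = max(0.0, min(1.0, self.current + self.adjustment_rate * error))
        return self.current
```

On this sketch, a robot whose owner has just shouted at it (high measured friction) would dial its own argumentativeness down toward the preferred setting, and would dial it back up after a long stretch of bland agreement.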
One important consideration for robot programmers when planning a robot’s personality and behavior will be how best to cope with different cultures. Just think of the courting rituals and the chaperone phenomena in some Latin countries, the Chinese tendency not to be too physically demonstrative in public and the contrasting lack of inhibitions displayed in some other countries, and the tradition of arranged marriages in certain cultures—a tradition that ought to present no problem for robots, because the parents of the human bride or groom will simply make all the choices in the robot shop as to its physical appearance and other characteristics, rather than leave these decisions to their offspring. Whatever the social norms of the prospective owners and their culture, a robot will be able to satisfy them. Similarly with religion, the details and intensity of which can be chosen and changed at will—whether you’re looking for an atheist, an occasional churchgoer, or a devout member of any religion, you have only to specify your wishes when placing your order at the robot shop. The key here will be ensuring that the robot has a flexible personality. It will most likely leave the factory with a set of personality traits, some standard and others chosen by the customer, but a robot will be able to set any or all of these traits aside as required, to allow the robot itself to adapt to the personality needs of its owner.
The example of the dominant and submissive problem-solving programs devised by Nass and his team suggests that creating artificial personalities will probably not be an immensely difficult task for robot scientists. Likewise, the creation of blue eyes, a sexy voice, or whatever other physical characteristics turn you on is well within the bounds of today’s technology. And if what turned you on when you purchased your robot ten years ago no longer turns you on today, the adaptability of your robot and the capability of changing any of its essential characteristics will ensure that it retains your interest and devotion. When robots are able to exhibit the whole gamut of human personality and physical characteristics, their emotional appeal to humans will have reached a critical level in terms of attracting us, inducing us to fall in love with them, seducing us in the widest sense of the word. We will recognize in these robots the same personality characteristics we notice when we are in the process of falling in love with a human. If someone finds a sexy voice in their partner a real turn-on, they are likely to find the same of a similar voice programmed into a robot. If it’s blue eyes that one is after, simply select a blue-eyed robot when you make your choice. If it’s a particular personality trait, your robot will come with that trait ready-made, or it will learn the trait as it discovers its importance to you.
While much of the development work on the hardware for new robot technologies is being carried out in Japan, the West is not lagging behind in the research effort into software for the robots’ emotions and personality.* One reason for the Japanese bias toward hardware is that the Japanese government is determined to employ robots in the future to assist with the massive task of taking care of their aging population, a task for which the hardware must be totally reliable and robust. Another motivation for the Japanese investment in robot hardware research is that it will be the Japanese consumer-electronics conglomerates that will reap the greatest commercial benefits when robots are on sale to the public in high-volume quantities.
These world leaders in robotics, Japan and the United States, have somewhat different approaches and goals. The United States produces and uses far fewer robots than does Japan, because the United States is more reliant on less expensive immigrant labor. According to the latest industry figures in 2006, the United States had only 68 robots in manufacturing industries for every 10,000 human manufacturing workers, whereas Japan had 329 per 10,000. But an even greater distinction lies in the cultural differences between Japan and the United States and in how those differences shape each country’s perceptions of the prospect of our future with robots.
In an article in USA Today,† Kevin Maney summarizes these differences:
U.S. labs and companies generally approach robots as tools. The Japanese approach them as beings. That explains a lot about robot projects coming out of Japan.
A more detailed explanation of these cultural differences was given by the Economist magazine,‡ in an article entitled “Better Than People,” which explained “why the Japanese want their robots to act more like humans.” The article focuses on how these cultural differences affect robotics development in Japan. The reasons are partly economic—the huge growth predicted for the sale of service robots (to $10 billion) by the year 2015—but also cultural.
It seems that plenty of Japanese really like dealing with robots. Few Japanese have the fear of robots that seems to haunt Westerners. In Western books and movies, robots are often a threat, either because they are manipulated by sinister forces or because something goes horribly wrong with them. By contrast, most Japanese view robots as friendly and benign. Robots like people and can do good. The Japanese are well aware of this cultural divide, and commentators devote lots of attention to explaining it. The two most favored theories, which are assumed to reinforce each other, involve religion and popular culture.
Religion plays a role because Shintoism “is infused with animism: it does not make clear distinctions between inanimate things and organic beings.” For this reason the attitude in Japan is to question not why the Japanese like robots but why many Westerners view robots as some kind of threat. And this somewhat benevolent attitude toward robots has been enhanced by their popularity, both in newspaper and magazine cartoons and in films, ever since the launch of Japan’s robot cartoon character Tetsuwan Atomu in 1951.
Robot Chromosomes
A huge step forward on the path to creating robots with humanlike personalities and emotions has recently been taken by Jong-Hwan Kim* and his team at the Robot Intelligence Technology Laboratory in Daejeon, South Korea, who have been working on the development of successive versions of a robot called HanSaRam. In a 2005 conference paper, “The Origin of Artificial Species,” Kim and his colleagues describe the artificial chromosomes they have developed for robots.
The basis of Kim’s idea is that the entire collection of a robot’s artificial chromosomes will contain all the information about the robot that corresponds to the information stored in our DNA. Thus Kim’s programmed genetic makeup is modeled on human DNA, although instead of being a complex double-helix shape as in a human chromosome, each artificial chromosome is equivalent to a single strand of genetic makeup. In humans the principal functions of genetic makeup are reproduction and evolution, but in robots the makeup can also be used for representing the personality of the robot and can be electronically transferred to other robots.
Kim’s approach to robot personality was inspired by the evolutionary biologist Richard Dawkins, whose book The Selfish Gene asserts that “We and other animals are machines created by our genes.” Kim draws a parallel between humans and humanoids by proposing that the essence of the origins of an artificial species such as humanoids must be the genetic code for that species. His paper presents the novel concept of the artificial chromosome, which Kim describes as the essence for defining the personality of a robot and the enabler for a robot to pass on its traits to its next generation, just as in human genetic inheritance. Thus the artificial chromosome creates a simulation of evolution for its artificial species.
If we think in terms of the essence of the creatures, we must consider this the origin of artificial species. That essence is a computer code, which determines a robot’s propensity to “feel” happy, sad, angry, sleepy, hungry, or afraid.7
Continuing the parallel between humans and humanoids still further, Kim suggests that the main functions of a robot’s genetic code are reproduction and evolution and that the code should be designed to represent all the traits and personality components of these artificial creatures. Thus his artificial chromosomes, being a set of computerized representations of a DNA-like code, will enable robots to think, feel, reason, and express desire or intention, and could ultimately empower them to reproduce,* to pass on their traits to their offspring, and to evolve as a distinct species.
Kim’s team has designed fourteen robot chromosomes in all, six of which are related to the robots’ motivation, three to their homeostasis,† and five to their emotions. These chromosomes dictate how robots should respond to various stimuli: avoiding unpleasantness, achieving intimacy and control, satisfying curiosity and greed, preventing boredom, as well as engendering feelings of happiness, sadness, anger, and fear and creating states of fatigue, hunger, drowsiness, and so on, all of which will combine to imbue the robot with “life.” Kim’s robots will be able to react emotionally to their environment, to learn and make reasoned decisions based on their individual personalities.
For ease of development and testing, Kim’s simulated chromosomes have been programmed into a simulated creature—a software robot called Rity, living in a virtual world—that can perceive forty-seven different types of stimuli and is able to respond with seventy-seven different behaviors. As determined by their genetic codes, no two Rity robots react in the same way to their surroundings. Some become bored with their human handlers while others, because they have a different personality, pant and express their “happiness” at the sight of their humans. It’s all in their genes! One of the next steps by Kim and his team will be to create the equivalent of the human X and Y chromosomes, conferring on robots their own version of sexual characteristics, including lust. Thus if male and female robots like each other, “they could have their own children.”
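In grossly simplified form, the artificial-chromosome idea can be imagined as a table of numeric genes whose values differ from robot to robot. The sketch below is mine, not Kim’s: the real Rity genome encodes far more (forty-seven stimuli, seventy-seven behaviors), and the gene names, numbers, and behavior rule here are invented for illustration.

```python
# A toy illustration of the artificial-chromosome idea. The gene groups follow
# the traits mentioned in the text; everything else is invented for this
# sketch and does not come from Kim's papers.

import random

class ArtificialGenome:
    def __init__(self, seed):
        rng = random.Random(seed)  # different seed -> different "personality"
        self.motivation = {g: rng.random() for g in
                           ("curiosity", "intimacy", "control", "greed",
                            "boredom_avoidance", "unpleasantness_avoidance")}
        self.homeostasis = {g: rng.random() for g in
                            ("fatigue", "hunger", "drowsiness")}
        self.emotion = {g: rng.random() for g in
                        ("happiness", "sadness", "anger", "fear", "neutral")}

def react(genome, stimulus):
    """Map a stimulus to a behavior in a genome-dependent way."""
    if stimulus == "owner_appears":
        # A "happy" genome pants at the sight of its human; a bored one ignores it.
        if genome.emotion["happiness"] > genome.motivation["boredom_avoidance"]:
            return "pant_happily"
        return "ignore_owner"
    return "idle"
```

Two Rity-like creatures built from different seeds will generally carry different gene values and so can respond differently to the same stimulus—it’s all in their genes.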
Kim readily acknowledges one of the principal messages of the movie I, Robot—namely, that giving robots their own personalities and emotions might make them a danger to humanity. To counter this he suggests employing artificial chromosomes “to design brilliant but mild-tempered and submissive robots,” which is one way to ensure that we do not become enslaved by our creations as they evolve. Given this elementary precaution, by the time “malebots” and “fembots” are available for general consumption the market will be ready for them.
The Ten Factors as Applied to Human-Robot Relationships
We saw in chapter 2 how common it is for people to develop strong feelings of affection, including love, for their pet animals. And in chapter 3 we examined the same phenomenon as it relates to virtual pets such as the Tamagotchi. Now we come to examine the ten principal factors that cause humans to fall in love with humans, as discussed in chapter 1. Let us consider which of these factors might also be important in causing humans to fall in love with robots.
At the outset we should recall the importance of proximity and hence repeated exposure as major factors that contribute to placing people in a situation in which falling in love becomes more likely. In the case of a robot, both proximity and repeated exposure are easy to achieve, subject to the robot’s cost. Simply buy a robot and take it home and both of these criteria are instantly satisfied.
In chapter 1 we also discussed Byrne’s law, which shows that we are more inclined to like someone when we feel good. The empathetic robot, able to determine what makes a particular human feel good, will therefore have a head start in its attempts to seduce. The robot will do its best to create “feel-good” situations, perhaps by playing one of its human’s favorite songs or by switching on the TV when its human’s favorite baseball team is playing, and then it will exhibit virtual feelings that mirror those of the human, whether they be feelings of enjoyment when hearing a particular song or cheering on a baseball team.
Another lesson from chapter 1 on the subject of getting someone to fall in love with you was that self-disclosure of intimate details can be a powerful influence in this direction. Robots designed to form friendships and stronger relationships with their users will therefore be programmed to disclose virtual personal and intimate facts about their virtual selves and to elicit similar self-disclosure from humans.
Now to the ten reasons for falling in love. Which of them might have parallels in human-robot relationships, parallels strong enough to lead humans to develop feelings of love for robots?
Similarity
Of the most important similarities referred to in chapter 1, only one of them—coming from a similar family background—is not easy for a robot to imitate convincingly, given that its human will know that the robot was made on an assembly line. But as to the other key similarities, I foresee no problem in replicating them, including the most important of all, similarity of personality. It will be recalled from one of Clifford Nass’s experiments, described earlier,* that not only do humans prefer to interact with other humans of similar personality, but they also prefer to interact with computers that have similar personalities to their own. That finding is of great significance when considering the importance of similarity of personality in the process of falling in love. Attitudes, religious beliefs, personality traits, and social habits—information on all of these can be the subject of a questionnaire to be filled out when a human orders a robot, or it could be acquired by the robot during the course of conversation. Once the robot’s memory has acquired all necessary information about its human, the robot will be able to emulate enough of the human’s stated personality characteristics to create a meaningful level of similarity. And as the robot gets to know its human better, the human’s characteristics will be observable by the robot, who can then adjust its own characteristics, molding them to conform to the “design” of its human.
One example of a similarity that will be particularly easy to replicate in robots is a similarity of education, since just about all of the world’s knowledge will be available for incorporation into any robot’s encyclopedic memory. If a robot discovers through conversation that its human possesses knowledge on a given subject at a given level, its own knowledge of that subject can be adjusted accordingly—it can download more knowledge if necessary, or it can deliberately “forget” certain areas or levels of knowledge in order that its human will not feel intimidated by talking to a veritable brain box. This self-modifying capability will also allow a robot to develop an instant interest in whatever its human’s own interests may be. If the human is an avid train buff, then the robot can instantly become a mine of information about trains; if its human loves Beethoven, the robot can instantly learn to hum some of the composer’s melodies; and if the human is a mathematician, the robot will have the reasoning powers necessary to prove the popular mathematical theorems of that time. Not only will robots have extensive knowledge, they will also have the power of reasoning with that knowledge.
Desirable Characteristics of the Other
The key “desirable” characteristics revealed by the research literature are personality and appearance. Just as a robot’s personality can be set to bear a measure of similarity to that of its human, so it can be adjusted to conform to whatever personality types its human finds appealing. For a robot, as for a human, having a winning (albeit programmed) personality will be arousing in many respects, including sexually arousing. Again, the choice of a robot’s personality could be determined partly prior to purchase by asking appropriate questions in the customer questionnaire, and then, after purchase, the robot’s learning skills will soon pick up vibes from its human, vibes that indicate which of its own personality traits are appreciated and which need to be reformed. And when its human, in a fit of pique, shouts at the robot, “I wish you weren’t always so goddamn calm,” the robot would reprogram itself to be slightly less emotionally stable.
A desirable appearance is even easier to achieve in a robot. The purchase form will ask questions about dimensions and basic physical features, such as height, weight, color of eyes and hair, whether muscular or not, whether circumcised (if appropriate), size of feet, length of legs (and length of penis, in the case of malebots)…. Then the customer will be led effortlessly through an electronic photo album of faces, with intelligent software being employed to home in quickly on what type of face the purchaser is looking for. The refinement of this process can continue for as long as the purchaser wishes, until the malebot or fembot of his or her desire is shown on the order screen. If it’s a pert nose that turns you on, your robot can come with a pert nose. If it’s green eyes, they’re yours for the asking. By being able to choose all these physical design characteristics, you will be assuring yourself of not only an attractive robot partner but also the anticipation of great sex to come.
Personality and appearance are far from being the most difficult characteristics to design into robots. Synthesizing emotion and personality is an active research topic at several universities in the United States and elsewhere,* as well as in some of the robotics laboratories in Japanese consumer-electronics corporations. Creating a physical entity in a humanlike form that is pleasing to the eye is relatively straightforward, and the Repliee Q1 robot demonstrated in Japan in 2006 is perhaps the first example. By 2010, I would expect attractive-looking female robots and handsome-looking males to be the norm rather than the exception, all with interesting and pleasant (though somewhat unsophisticated) personalities.
Reciprocal Liking
Reciprocity of love is an important factor in engendering love—it is more likely for Peter to fall in love with Mary if Peter already knows that Mary loves him. So the robot who simulates demonstrations of love for its human will further encourage the human to develop feelings of love for the robot.
Reciprocal liking is another attribute that will be easy to replicate in robots. The robot will exhibit enthusiasm for being in its owner’s presence and for its owner’s appearance and personality. After an appropriate getting-to-know-you period, it will whisper, “I love you, my darling.” It will caress its human and act in other ways consistent with human loving. These behavior patterns will convince its human that the robot loves them.
Any discussion of reciprocal liking with respect to robots will inevitably suggest questions such as “Does my robot really like me?” This is an important question, but a difficult one to answer from a philosophical perspective. What does “really” mean in general, and particularly in this context? I believe that Alan Turing answered all such questions with his attitude toward intelligence in machines—if it appears to be intelligent, then we should assume that it is intelligent. So it is with emotional feelings. If a robot appears to like you, if it behaves in every way as though it does like you, then you can safely assume that it does indeed like you, partly because there is no evidence to the contrary! The idea that a robot could like you might at first seem a little creepy, but if that robot’s behavior is completely consistent with it liking you, then why should you doubt it?
Social Influences
With time, social influences undergo huge change. What was considered a social aberration fifty years ago or less might now be very much the norm. One important example of this is the tendency in certain cultures for young people to be strongly encouraged to marry within their own culture. Not only are there fewer influences on marital choice nowadays, from parents, peers, and society in general, but there is more resistance from young people to being molded into marital relationships dictated by their cultural and social backgrounds. Attitudes to robots will also change with time—now they are our toys and items of some curiosity; before long the curiosity will start to diminish and robots will make the transition from being our playthings to being our companions, and then our friends, and then our loved ones. The more accepted robots become as our partners, the less prejudice there will be from society against the notion of human-robot relationships, leading more people to find it acceptable to take robots as their friends, lovers, and partners.
Filling Needs
If a robot appreciates the needs of its human, it will be able to adapt its behavior accordingly, satisfying those needs. This includes those relationships in which the human’s needs relate to intimacy, even to sex, as explained in part two of this book. One can reasonably argue that a robot will be better equipped than a human partner to satisfy the needs of its human, simply because a robot will be better at recognizing those needs, more knowledgeable about how to deal with them, and lacking any selfishness or inhibitions that might, in another human being, militate against a caring, loving approach to whatever gives rise to those needs.
Arousal/Unusualness
This factor depends for its existence on the situation in which a human and the potential love object initially find themselves together, and not on the love object itself. The arousal stimulus is external to the couple. As a result there would appear to be no difference between the effect of a particular arousal stimulus on someone in the presence of another human and the effect of that same arousal stimulus on that same someone in the presence of a robot. In both cases the stimulated human will find the situation arousing, possibly even to the extent that it might make the human feel more attracted to the robot than to another human under the same circumstances. After all, in a situation that appears dangerous, would not a robot be more likely than a human to be able to eliminate or mitigate the danger?
Specific Cues
Absolutely no problem! After a trial-and-error session at the robot shop, you will be able to identify exactly what type of voice you would like in your robot, which bodily fragrances turn you on, and all the other physical characteristics that could act as cues to engender love for your robot at first sight.
Readiness for Entering a Relationship
As in the case of arousal, with this feature it is one’s situation that gives rise to the affectionate feelings. If you’ve just been dumped by your partner and are looking for a flirtation or a fling to redeem your self-esteem, your robot can be right there ready for all eventualities, with no need for speed-dating sessions or for placing an ad in the lonely hearts columns.
Isolation from Others
This is yet another factor where the circumstance dictates what happens. If you have a robot at home, you will be likely to spend considerable time in isolation with it—as much time as you wish.
Mystery
Robots are already something of a mystery to most people. Imagine how much more of a mystery they will become as their mental faculties and emotional capacities are expanded as a result of artificial-intelligence research. This is not to say that robots should be “perfect.” By having different levels of performance that can be set or can self-adapt to suit those with whom a robot interacts, the behavior and performance of the robot can be endowed with humanlike imperfections, giving the user a sense of superiority when that is needed to benefit the relationship. The element of mystery, like variety, will be the spice of life in human-robot relationships.
What Does This Comparison Prove?
I submit that each and every one of the main factors that psychologists have found to cause humans to fall in love with humans can apply almost equally to humans falling in love with robots. The logical conclusion, therefore, is that unless one has a prejudice against robots, and unless one fears social embarrassment as a result of choosing a robot partner, the concept that humans will fall in love with robots is a perfectly reasonable one to entertain. It is possible that at first it might only be the twenty-first-century equivalents of Sherry Turkle’s 1980s computer hackers* who fall in love with robots, the latter-day versions of the young man who’d “tried out” having girlfriends but preferred to relate to computers. Yet robots in a human guise will be far more tempting as companions and as someone to love than were computers to Turkle’s generation of hackers. And even if the computer geeks are the first to explore love with robots, I believe that curiosity, if nothing else, will prompt just about every sector of society to explore these new relationship possibilities as soon as they are available. What we cannot really imagine at the present time is what loving a robot will mean to us or how it might feel. Some humans might feel that a certain fragility is missing in their robot relationship, relative to a human-human relationship, but that fragility, that transient aspect of human-human relationships, as with so much else in robotics, will be capable of simulation. I do not expect this to be one of the easier tasks facing AI researchers during the next few decades, but I am convinced that they will solve it.
Robot Fidelity, Passion, and the Intensity of Robot Love
For the benefit of most cultures, robots should be faithful to their owner/partner—what we might call robot fidelity.* Robots will be able to fall in love with other robots and with other humans apart from their owner, possibly giving rise to jealousy unless the owner is actually turned on by having an unfaithful partner. Problems of this type can, of course, be obviated simply by programming your robot with a “completely faithful” persona or an “often unfaithful” one, according to your wishes. How different life would be for many couples if the possibility of infidelity simply did not exist. But, in contrast, while the infidelity of one’s robot might be something to be avoided by careful programming, the possibility equally exists for humans to have multiple robot partners, with different physical characteristics and even different personalities. The robots will simply have their “jealousy” parameters set to zero.
Being able to set one’s robot to any required level of fidelity will be but one feature of robot design. It will also be appealing to be able to set the love-intensity level and the passion level of your robot to suit your desires. Your robot will arrive from the factory with these parameters set as you specified, but it will always be possible to ask for more ardor, more passion, or less, according to your mood and energy level. And at some point it will not even be necessary to ask, because your robot will, through its relationship with you, have learned to read your moods and desires and to act accordingly.
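The user-adjustable persona described above can be pictured, in the simplest possible terms, as a set of named parameters that the owner dials up or down. The sketch below is purely illustrative—the class name, parameter names, and numeric scale are all invented for this example, not drawn from any real robot design.

```python
class RobotPersona:
    """Illustrative model of user-adjustable relationship parameters.

    All attribute names are hypothetical; each value ranges from 0.0 to 1.0.
    """

    def __init__(self, fidelity=1.0, jealousy=0.0, passion=0.5):
        self.fidelity = fidelity    # 1.0 = "completely faithful" persona
        self.jealousy = jealousy    # 0.0 = jealousy parameter set to zero
        self.passion = passion      # baseline ardor, adjustable on request

    def adjust(self, **settings):
        # Clamp each requested setting into the valid 0.0-1.0 range,
        # ignoring parameters the persona does not have.
        for name, value in settings.items():
            if hasattr(self, name):
                setattr(self, name, max(0.0, min(1.0, value)))
        return self


# A robot that arrives "from the factory" as specified,
# then is asked for more passion according to the owner's mood:
persona = RobotPersona(fidelity=1.0, jealousy=0.0, passion=0.4)
persona.adjust(passion=0.9)
```

A learning robot would, as the text suggests, eventually call something like `adjust` itself, having inferred the desired settings from its owner's moods rather than waiting to be asked.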
Marrying a Robot
For many of the readers of this book, any discussion on the history or current status of the institution of marriage will take place within the somewhat conservative confines of traditional Judeo-Christian thinking and attitudes and those of some of the other major world religions. Within these confines, marriage can only be the union of one man with one woman, a union intended to last for life, a union that usually has as one of its principal goals the creation of children. Yet this view of marriage is not the only view, because there are and long have been cultures within which marriage is viewed very differently. One of the most obvious examples of such differences is that between monogamy, one of the fundamental tenets of marriage in Western society, and polygamy, which is and has been the norm in many other cultures, including tribes in Africa, North and South America, and Asia, and a bedrock of religions such as Mormonism and Islam.* Surely if we are to enter a balanced debate on the history, the current state, or the future of marriage, our discussions should take into account all cultures, their customs, and how they regard marriage. Why should any of us assume that our own attitudes are inevitably the only correct ones and that cultures other than our own are in some way wrong?
America is perhaps the best example in the world of a mixture of races, religions, and cultures that is, precisely because of its mix, fast becoming a society in which the tolerance and acceptance of nontraditional customs and ideas create the very basis of society as it evolves. In such a society, if it is to evolve and thrive harmoniously, such acceptance is an essential moral prerequisite. Sometimes we must accept that it is our own views that might be inappropriate, possibly because they are outmoded, and that the more radical, more modern views of others are more suitable for the times in which we live and for the future. This phenomenon, whereby changes in opinion lead to massive social change, has been seen in recent decades with attitudes to homosexual relationships.†
The trend toward the toleration and acceptance of same-sex marriages is but one aspect of the changing face and meaning of marriage. The November-December 2004 issue of Harvard Magazine published a highly charged essay, “The Future of Marriage,” by Harbour Fraser Hodder,8 which, although primarily intending to examine how changes in demographics, economics, and laws have altered the meaning of marriage in America, actually makes a number of points that can also be used to support the prediction that marriage to robots will by midcentury raise no more eyebrows than same-sex marriages and civil unions do today. One such point is based on the observation by Nancy Cott, a Harvard professor of American history, that “marriage itself has therefore come in for a broad reassessment.”9
The reassessment to which Cott refers is that due to the polarizing views of the advocates of same-sex marriage and their “family values”–oriented opponents. Cott explains that “as same-sex couples line up for marriage licenses at courthouses across Massachusetts, opponents predict the death of marriage itself. One side sees tragedy in the making, the other wants to rewrite the script entirely.”
It is my belief that marriage to robots will be one of the by-products of the rewriting of the script, a belief rooted in the type of argument employed by those judges who have ruled in support of same-sex marriage. In 1998, for example, in a superior court ruling in Alaska, Judge Peter Michalski called the right to choose one’s life partner constitutionally “fundamental,”10 a privacy right that ought to receive protection whatever its outcome, even a partner of the same sex. “Government intrusion into the choice of a life partner encroaches on the intimate personal decisions of the individual…. The relevant question is not whether same-sex marriage is so rooted in our traditions that it is a fundamental right, but whether the freedom to choose one’s own life partner is so rooted in our traditions.” Michalski’s 1998 ruling and many since then have pointed the way not only to a liberalizing of the legislature’s attitude to same-sex marriage but also to a strengthening of the attitude toward the right to choose.
The controversy over same-sex marriage is not the only reason why attitudes to marriage in America have undergone dramatic change. Cott mentions how women’s legal identities and their property used to be subsumed into those of their husbands, and we should not forget that in the past, wives were sometimes themselves regarded as the property of their husbands. These issues of unequal ownership have been erased with time, but the subject of ownership seems likely to reappear, though in a completely equal guise, when humans of either sex acquire and thereby own robots that act as their lovers and their spouses.
Cott also touches on another important and relevant change in the history of marriage in the United States, “the dissolution of marital prohibitions based on race.” Even though such unions were previously far from unknown, it was not until 1967 that interracial marriages were ruled to be legal in the United States, when the U.S. Supreme Court overruled the sixteen states that still at that time considered marriage across the color line to be void or criminal. The statistics for interracial marriage have since borne out the need for that change: The number of marriages in the United States between African-Americans and Caucasians rose from 51,000 in 1960 to more than 440,000 in 2001.
Same-sex marriage, ownership of a wife and her property, and interracial marriage are but a few of the most significant changes that are apparent from a study of the history of marriage in the United States. Other major changes include an acceptance of the fact that marriage is not necessarily for life, as evidenced by the 50-percent-plus divorce rate in the United States, and the increasing proportion of couples who opt not to have children. All these and other changes of attitude to marriage lead us to the conclusion, succinctly enunciated by Nancy Cott, that “change is characteristic of marriage. It’s not a static institution…. People can cohabit without great social disapproval; they can live in multigenerational families; there are scenes of group living; there are gay unions or civil unions. There is a greater variety of household forms that are approved and accepted, or at least tolerated….”
Social change is happening faster now than it did two hundred, one hundred, or even fifty years ago, with the result that change in the meaning and purpose of marriage is also happening faster than ever before, and the rate of such change seems certain to accelerate. Chapter 8 provides a relevant example—it is an analysis of how our sexual mores and attitudes have changed over time. In the case of marriage, it seems eminently reasonable to assume that changes in the approval, acceptability, and tolerance of different ideas and new forms of marital relationship will take place over periods no longer than the few decades that were needed to make interracial marriage and same-sex marriage socially acceptable to many and legally acceptable to the state. Cott points out that in the late twentieth century, marriage moved “towards the spouses themselves defining what the appropriate marital role or preference is.” This newfound freedom for couples to define their respective roles within their marriages now extends into the realm of legal agreement. Elisabeth Bartholet, holder of the Wasserstein Public Interest Chair in Law at Harvard, observes that the legal context of marriage has shifted from one in which the state has “enormous control over marriage” to one where people write “the terms of their own marriage” and are “allowed to have pre-marital contracts.”11 Furthermore, Bartholet comments that the trend of recognizing de facto relationships means that “if you look like a family, feel, smell like a family—you cook meals together, share bank accounts—then you are a family for the purpose of the law.”
In summary, marriage is changing at such a rate that there appear to be ever-increasing levels of acceptance and tolerance of how any given couple wishes to conduct their lives together. And as part of the right to choose will come the right to choose one’s spouse, even a robot spouse. By the time that today’s infants are entering matrimony, many of them will be deciding for themselves almost all the rules and laws that are to govern their unions.* By the time their children are ready for marriage, around the middle of this century, I believe that such a freedom of decision will be almost universally exercised.
How, then, will today’s children and their children make use of their own generations’ newfound freedom of marital choice? In attempting to answer this question, we first consider the main criteria employed in the choice of marriage partner. Elaine Hatfield and Susan Sprecher have examined preferences in marital partners in three different cultures—the United States, Russia, and Japan—in preparation for which they selected twelve criteria after studying several other lists of reasons for mate selection from the psychology literature. A total of 1,519 college students took part in their survey (634 men and 885 women), in which they were asked to rate each of the twelve criteria on a scale from 1 (unimportant) to 5 (essential). The results given in Table 1 indicate that of the twelve criteria, only the seventh-ranked—“being ambitious”—and the three lowest-ranked characteristics could reasonably be argued to be inappropriate descriptors for the robots of the next few decades. All six of the top-ranked characteristics will be demonstrable by robots within that time frame, and as for being physically attractive and skilled as a lover, these characteristics will in my opinion be among the first to be demonstrated with some measure of success.
TABLE 1: Ratings of the Mate Selection Traits12

TRAIT                           MEAN RATING (OUT OF 5)
Kind and understanding          4.38
Has sense of humor              3.91
Expressive and open             3.81
Intelligent                     3.73
Good conversationalist          3.72
Outgoing and sociable           3.47
Ambitious                       3.36
Physically attractive           3.27
Skill as a lover                3.17
Shows potential for success     2.95
Money, status, and position     2.50
Athletic                        2.50
With the freedom for couples to define the parameters of their own marriages will also come the freedom for the individual to define what he or she intends his or her own marriage to mean. Seeking a suitable human spouse might then become not only an exercise in matching interests, personalities, and the various other factors that we know to influence the falling-in-love process but also a search for someone who has used this same freedom of choice as to the meaning, rules, and purpose of marriage to create a model that matches one’s own. This relaxation of the constraints that used to provide a stable basis for the rules and expectations of marriage might therefore make it more difficult to find a spouse, since different potential spouses will be looking to play according to different sets of rules. For this reason one of the factors that I believe will contribute to the popularity of the idea of marrying a robot is the avoidance of the difficulty of finding a human partner with matching views on marriage—your robot will be programmed with views that complement your own.
Even more relevant to the practice of marriage to robots will be the question “To what extent will the new freedoms of choice regarding marriage extend to a choice of who (or what) people will legally be allowed to marry?” The United States has already seen some major changes in this respect, as interracial marriage has shifted from illegal to legal and many people’s minds and hearts are now open to the possibility of same-sex marriage. And in 2005 the Netherlands hosted a ceremony of a civil union involving three partners—a man and his two “wives”—when Victor de Bruijn, aged forty-six, from Roosendaal, “married” both Bianca (thirty-one) and Mirjam (thirty-five) in a ceremony performed before a notary who duly registered their civil union.
What novel form of civil union will be next? In future decades the sciences of creating prosthetic limbs and artificial hearts and other organs will continue to develop with accelerating pace, perhaps even adding artificial brains to the ever-growing list of body parts that surgeons can replace. The Norwegian philosopher Morten Søby discusses this trend in terms of the manner and extent to which it more and more reduces the distinction between man and machine and “becomes an element in the great story of evolution and development of civilization.”13 Writing about what prosthesis offers for the future, Søby explains that:
More and more artificial parts are added to the body—the result being a more artificial body. Research is being carried out with neural interfaces to develop auditory and visual prostheses, functional neuromuscular stimulants and prosthesis control through implanted neural systems, etc. Biosociological research into complex self-generating and self-referral systems is another example. Information technology and virtualization not only occupy man, nature and culture but are also about to outdate the genre of science fiction.
And to emphasize the point, Søby quotes other prominent philosophers: Paul Virilio in The Art of the Motor, who argues that “the basic distinction between Man and machine no longer applies. Both biological research and computer technology question the absolute difference between living machine and dead matter”;14 and Donna Haraway’s 1985 essay “A Manifesto for Cyborgs,” in which she asserts that “late-twentieth-century machines have made thoroughly ambiguous the difference between natural and artificial, mind and body, self-developing and externally-designed, and many other distinctions that used to apply to organisms and machines.”15
Thus with artificial limbs, organs and just about everything else body-related blurring the boundaries between real life and virtual life, it is appropriate to ask what impediments need to be lifted to make marriage between human and robot legally and socially acceptable. Right now there is no legal impediment to keep someone with an artificial leg from marrying, nor against someone with two artificial legs, or all four artificial limbs, or an artificial heart…. Where and why should society draw the line? Can we reasonably argue that it should be legally acceptable to marry someone 20 percent of whose body is made up of artificial limbs and organs, but that if the proportion were to rise to 21 percent, then such a union should be illegal? What logic dictates that a partner who is half natural and half artificial should be an acceptable marriage candidate but that a three-quarters, or 90-percent, or 100-percent artificial partner should not? Here lies a difficulty for the lawmakers of the future, those who are given the responsibility of drafting changes designed to bring the law up to date. As robots become increasingly sophisticated, as people have them in their homes as companions, when people have sex with them and fall in love with them, so it will become appropriate for those lawmakers to paraphrase Elisabeth Bartholet’s argument thus: “If your robot looks like a partner, feels, smells like a partner—you cook meals together, share bank accounts—then you are partners for the purposes of the law.” And as to the question of a robot’s being legally able to consent to its marriage, if it says that it consents and behaves in every way consistent with being a consenting adult, then it does consent.
Finally, there are those who would ask, “Why marry?” when discussing human-robot relationships, by which they would mean, “Why would anyone want to marry any robot?”—as opposed to why marry a particular robot. Two of the most commonly given reasons as to why people marry are love and companionship. Part one of this book has, I hope, convinced the reader that loving a robot will come to be viewed as a perfectly normal emotional experience and that before very long, robots will be regarded by many as interesting, entertaining, and stimulating companions. If these two reasons for getting married, love and companionship, are the foundation for so many millions of marriages between human couples, why should the same reasons not provide a valid basis for the decision to marry a robot?
Some Aspects of the Physical Design of Robots
The eventual acceptance of robots as sentient beings, worthy of our friendship, our love, and our respect will be greatly facilitated by the physical design and construction of robots whose appearance matches our notions of friendliness. Masahiro Mori, head of the robotics department at the Tokyo Institute of Technology, was one of the first roboticists to suggest that a robot with a humanlike appearance will be apt to engender feelings of familiarity and affection from humans. This view is borne out by a study based on one of the first controlled experiments to examine the effect of a humanoid robot’s appearance on people’s responses, with a machinelike robot used as a comparison. The study suggests that people may be more willing to share responsibility with a humanoid as compared with robots that are less humanlike and more machinelike. And if the physical design of a robot creates an appearance in the human image, the robot’s physical actions and movements will provide immediate and easily comprehensible social cues, thereby enhancing a human’s perception of any interaction with the robot and making it easier for the human to engage with it socially. If, for example, the human swears at the robot, it could stick out its tongue as a gesture of complaint. But if the robot did not have a tongue to stick out, it would not be able to convey its feelings in this humanlike way, while if the robot’s tongue were not designed into its mouth but instead were located on the lower part of one of its legs, perhaps the action of sticking out its tongue might not have the same effect on the human.
Even though a robot’s appearance has no bearing on its intellectual capabilities, it has been shown by psychologists that in general we prefer to interact with robots with whom we find it easy to identify, as compared to robots whose appearance is strikingly nonhuman.* But there is still a way to go before humanoids are as physically appealing as Stepford wives and their malebot counterparts. Although they are technically remarkable for their time and great fun to watch, the robots of today are not exactly Mr. Handsome or Ms. Beautiful, nor are they as cuddly as pet cats, dogs, rabbits, or Furbies. The Carnegie Mellon University robot, Grace, who attended an academic conference in Canada in 2002, managed to find its way around the conference building well enough to register for the conference, reach a lecture room by itself (asking for directions only when necessary), and deliver a talk on how it worked. But Grace did not look at all humanlike or even animal-like. Its “face” was an image displayed on a computer screen that formed the top part of its construction, while the remainder of its body was a mass of metal parts, electronics, wheels, and much of the other paraphernalia one would expect to find in an engineering laboratory. So although Grace performed admirably and with a certain measure of physical dexterity (she could navigate her way into an elevator and exit at the correct floor), she was not exactly anyone’s idea of a great-looking date.
One might argue that only the capabilities of a robot should matter to us and not its looks, but I believe that looks will matter a lot, a belief that stems partly from an experience I had around the age of ten. The first time I visited Madame Tussauds museum in London, I asked a gentleman dressed in a uniform the way to some particular part of the exhibition, only to realize after a second or two that he was not on the museum staff—he was one of the waxworks. So convincing was the wax janitor’s appearance that I’d been fooled into thinking “he” would respond to my question and would know the answer. After all, he looked just as I expected a museum janitor to look. This experience has doubtless been shared by many thousands of the museum’s other visitors, and it is a valuable lesson in understanding an important aspect of human-robot relationships. The appearance of a robot will affect how people perceive it, particularly their first impressions, as well as how they interact with it and the development of their relationships with it. If a robot has all the appearances of being human, then we will increasingly adopt an anthropomorphic attitude toward it and find it much easier to accept the robot as being sentient, of being worthy of our affections, leading us to accept it as having character and being alive. Thus the appearance of a robot’s head and face are clearly extremely important factors in our initial reactions when meeting it. First impressions do count. This is why it is not sufficient for the Graces of the future to look like electronics laboratories on wheels, or even on awkwardly moving legs. They must walk in a humanlike fashion, and above all they must be appealing in their appearance. Only then will huge numbers of people want them as their friends and lovers.
One year after Grace made her debut as a conference attendee, David Hanson, a graduate student at the University of Texas at Dallas, demonstrated a lifelike talking head. Its face had soft, flesh-colored, artificial skin made of an elastic, flexible polymer developed by Hanson especially for this purpose. The face on Hanson’s artificial head had finely sculpted cheekbones and big blue eyes. When connected to a computer the head could smile, it could frown, it could sneer, and its brow could develop furrows to give a worried look. Equally, the robot could turn its head, and particularly its eyes, toward a human, taking in through its vision system whatever emotional cues the human might be exhibiting and using this information to help it react with appropriate facial expressions. This kind of expressive power will enable robots to interact more easily with humans, using their electronic minds to control their facial expressions and head movements in accordance with whatever emotions the robot wishes to display. It is part of the human mechanism for developing two-way emotional relationships, a mechanism that will be enhanced with the affective technologies described earlier in this chapter.*
The design of the head that Hanson demonstrated was based on that of his blue-eyed girlfriend, Kristen Nelson. In April 2002, he had gone to a bar in the trendy Exposition Park area of Dallas, complete with a pair of calipers, in search of someone whose head would be suitable as a model for what Hanson had in mind. There he saw Kristen, whom he knew casually, and asked her, “Can I make you into a robot?” He did. The movements of Hanson’s artificial head are made possible by a collection of twenty-four motors, invisible to the observer, that simulate the actions of most of the muscles in the human face. The motors are driven by two microprocessors, and they employ nylon fishing line to tug the artificial skin when it needs to move. The eyes contain digital cameras to enable the head to see the people who are looking at it and, if required, to imitate their facial expressions, courtesy of its “muscle” motors.
Following its first convincing demonstration and the aura of publicity that surrounded it, the head attracted interest from companies in fields ranging from artificial limbs to sex dolls. And that was in 2003. In the time line for the development of sentient, lovable robots, Hanson’s work puts head design ahead of schedule. Add Hanson’s artificial head to Grace’s body and already the physical appearance of robots will have reached new heights of acceptability. And just as a robot’s emotional and intellectual makeup and its face and voice can be selected on an individual basis, so it can be designed with any wished-for physical characteristics, including skin, eye, and hair color; size of genitalia; and sexual orientation.
Feel and Touch Technologies
In designing artificial skin for robots, the most important properties will probably not be its appearance and expressiveness but rather its sensing capabilities—feel and touch. From a purely practical perspective, having a well-developed sense of feel will enable a robot to detect changes in its surroundings and move accordingly. But it is the more romantic aspects of feel that concern us here—how a robot can detect a physical expression of love, a caress or a kiss. Though perhaps with different research goals in mind, scientists in Japan, Italy, and the United States are working on high-tech skin development. The sensuous robot will be one of the spin-offs of their research.
At the University of Tokyo, a group led by Takao Someya is developing a synthetic skin, based on the technology for printing enormous numbers of flexible, low-cost pressure sensors on a large area of the skin material. Meanwhile in Italy, at the University of Pisa, Danilo de Rossi and his team are making skin using artificial silicone, which has the properties of elasticity (human skin stretches if pulled) and sensitivity to pressure. And in the United States, scientists at NASA are employing infrared sensors embedded in a flexible plastic covering—the sensors detect an object as the robot touches it and then send a signal to the robot’s computer, its “brain,” corresponding to the size, shape, and feel of the object.
The different types of sensor and the different skin materials being investigated by these groups reflect that the study of artificial-skin technology is still in its infancy and there is not yet a consensus as to what materials and technologies make for the best artificial skin. Future artificial-skin materials are likely to be more tactile and to provide even more sensors to afford greater sensitivity, but from the perspective of skin as an important component of a robot love object or sex object, it is hardly important what types of sensors are being used, or how many. What is important is that robots will be able to feel and recognize the touch and caress of an affectionate human, to know when their human is making the first physical overtures of passionate, romantic love. Similarly, a delicate sense of touch will be needed by a gentle robot lover, able to return its human’s tender caresses and initiate its own. Scientists at the Polytechnic University of Cartagena in Spain have created a sensitive robotic finger that can feel the weight of pressure it is exerting and adjust the energy it uses accordingly, allowing a robot to caress its human partner with the sensitivity of a virtuoso lover.
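A robotic finger that feels the pressure it is exerting and adjusts its energy accordingly is, at bottom, a feedback loop. The following is a minimal proportional-control sketch of that idea; the function name, the target pressure, and the gain value are invented for illustration and do not describe the actual Cartagena design.

```python
def regulate_touch(sensed_pressures, target=0.2, gain=0.5):
    """Illustrative proportional controller for a tactile fingertip.

    `sensed_pressures` is a sequence of pressure readings (arbitrary units).
    Returns the cumulative motor effort after each reading, nudging the
    exerted force toward the gentle `target` level: press too hard and
    the error goes negative, so the finger eases off.
    """
    effort = 0.0
    history = []
    for sensed in sensed_pressures:
        error = target - sensed      # too much pressure -> negative error
        effort += gain * error       # adjust the energy used accordingly
        history.append(round(effort, 3))
    return history


# A touch that starts too light, then too firm, settles toward gentleness:
trace = regulate_touch([0.0, 0.4, 0.25, 0.2])
```

Real tactile controllers are far more elaborate (they must cope with slip, shear, and sensor noise), but the principle—continuously comparing sensed pressure against a desired level—is the same.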
Smell and Taste Technologies
One novel technology that will contribute to a robot’s physical appeal is smell synthesis. The right kind of bodily fragrance can act as a powerful attraction and aphrodisiac, and not necessarily the kind of scent that comes in small bottles with big price tags. Instead the idea is to create electronically any smell to order. Just as your stereo speakers play out digitally stored music, so its smell equivalent will spray out the digitally stored smells generated by this technology. Your robot can exude a favorite perfume or a realistic counterfeit of your (human) loved one’s body fragrance, or even a body fragrance of its own that has been designed to appeal to you and to cater to your hormones and your personal desires.
The early attempts at bringing smell technology to the market were not exactly a great success. Despite serious investment, reportedly $20 million in one company alone,* the sweet smell of success eluded the pioneers in this field. By 2005, however, a new generation of digital-smells companies were racing to be the first to launch viable smell-creation technology,† and technologies very similar to those employed in the generation of smells to order can also be employed in the creation of artificial flavors that taste just like the real thing.
The fascinating aspect of this technology, from the perspectives of love and sex, lies in the creation of scents that can set a partner’s hormones running. These sense technologies will provide some of the foundation for the amorous and sexual attraction that humans will feel for robots. Sex usually involves several senses simultaneously: We enjoy the sight of our loved one, we enjoy the sound of their voice, the feeling of their skin when we caress it and the feeling on ours when we are touched, we enjoy their smell and their taste. All of these senses heighten our erotic arousal, and all of their corresponding technologies can be designed into robots to make them both alluring and responsive.
Robot Behaviors
An important facet of designing robots that promote satisfactory relationships with humans (satisfactory from the human point of view) is an analysis of the extent to which the robot needs to behave in a sociable way with humans in different types of situation. If, in a particular situation, a robot exhibits none of the normal human characteristics of emotion, it will probably appear to be insensitive, indifferent, even cold or downright rude. Solving this problem is not that simple. There might be some people—some nationalities, some age groups, or one of the sexes—who do not perceive a robot to be any of these things in the given situation, simply because of their cultural, educational, or social background. What is cold, rude, or uncouth to one group in society might appear to be completely normal, acceptable, even friendly to another group. A sociable robot that has emotional intelligence will therefore need to be able to make this distinction, to decide how to behave with different people in the same situation in order to be perceived as sociable by all of them. (Robots will be programmed to want to be liked by everyone, just as you and I do.)
Other factors that might affect the appropriate way for a robot to behave include where the human-robot interaction is taking place. Is it in the home, where a more overtly friendly behavior by the robot would be appropriate? Or is it at work, where the human might be the robot’s boss (or vice versa), and therefore a more overtly respectful attitude would be required of the robot (or the human)? Robots will need to be endowed with many “rules” of sociability for all sorts of situations and contexts, and this rule set can be expanded through the use of learning technologies. If a robot acts in a manner that appears rude to a human, the robot can simply be told, “That is rude,” whereupon, like a well-brought-up child, the robot can learn to improve its manners and behavior.
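The “rules of sociability” described above, together with the ability to learn from a correction such as “That is rude,” can be pictured as a lookup table of behaviors keyed by situation and social group, with a method for recording the human’s feedback. Everything in this sketch—the class, the situation labels, the behavior strings—is invented for illustration.

```python
class SociabilityRules:
    """Illustrative rule table mapping (situation, group) to a behavior.

    All situation, group, and behavior labels here are hypothetical.
    """

    def __init__(self):
        # Default rules; "any" is the fallback group for a situation.
        self.rules = {
            ("home", "any"): "warm greeting",
            ("work", "any"): "respectful greeting",
        }

    def behave(self, situation, group="any"):
        # Prefer a group-specific rule, fall back to the situation's
        # general rule, and default to neutrality when nothing matches.
        return self.rules.get(
            (situation, group),
            self.rules.get((situation, "any"), "neutral greeting"),
        )

    def correct(self, situation, group, better_behavior):
        # The human has said "that is rude"; like a well-brought-up
        # child, the robot records a better behavior for this context.
        self.rules[(situation, group)] = better_behavior


robot = SociabilityRules()
# The same workplace situation, refined for one particular group:
robot.correct("work", "informal office", "friendly banter")
```

The point of the sketch is only that context-sensitive manners reduce to conditional behavior plus a channel for correction; a real sociable robot would need vastly richer representations of situation and culture.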
An interesting question here is whether robots should merely be designed to imitate human sociability traits or whether they should be taught to go further and create sociability traits of their own, traits that are atypical of humans but can nevertheless be appreciated by humans. To do so would be a form of creativity, possibly no more difficult to program than the task of composing “Mozart’s” Forty-second Symphony or painting a canvas that can sell in an art gallery for thousands of dollars—tasks that have already been accomplished by AI researchers.*
At the ATR Intelligent Robotics and Communication Laboratories in Kyoto, a robot called Robovie has been developed as a test bed for ideas in robot-human communication. Robovie has a humanlike body that is only four feet tall, so as not to be overly intimidating to the humans outside the laboratory with whom it comes into contact from time to time. Robovie has two arms, two eyes, and a system of three wheels to enable it to move around. (Legs are not yet considered a necessity for Robovie’s principal sphere of activity, which is communication with humans rather than tasks involving movement.) Robovie has an artificial skin to which sixteen sensors, made from pressure-sensitive rubber, have been attached. It can speak, it can hear and recognize human speech, and it can charge its own batteries when necessary.
Robovie’s developers believe that there is a strong correlation between the number of appropriate behaviors a robot can generate and how intelligent it appears to be. The more often a robot can behave in what is perceived to be an appropriate manner, the more highly will its intelligence be regarded. The scientists developing Robovie plan to continue to develop new behavior patterns until Robovie has advanced to the point where it is much more lifelike than a simple automaton. Part of this progress will come from the robot’s tendency to initiate interaction with a human user, rather than merely being reactive. You and I don’t always wait until we are spoken to before we say something, so why should a robot? You and I don’t always wait until someone stretches out their hand to us and says, “Hi. Nice to meet you.” Nor should a robot. Robovie will in appropriate circumstances shake hands with you; hug you; greet, kiss, and converse with you; play simple games such as rock-scissors-paper; and sing to you. And these are just some of the behavior patterns it had been taught up to mid-2004.
Robovie’s arms, eyes, and head also contribute to the robot’s ability to interact with humans and to how they perceive it, partly because of the importance of eye contact in the development of human relationships and therefore in the creation of empathetic robots. We humans greatly increase our understanding of what others are saying to us, the subtext as well as the words themselves, when we establish eye contact and observe a speaker’s body gestures. Research has repeatedly shown that during a conversation humans become immediately aware of the relative position of their own body and that of the person to whom they are speaking—the body language improves the communication. This explains the tendency for Japanese roboticists to build human-shaped robots, endowing them with effective communication skills and employing the results of research from cognitive science to create more natural communication between robot and human.
Experiments with a group of twenty-six university students showed that Robovie exhibits a high level of performance when interacting with humans, while the students generally behaved as though they were interacting with a human child, many of them maintaining eye contact with the robot for more than half the duration of the experiment. Some of the students even joined in with the robot in its exercise routines, moving their own arms in time with the robot’s movements. The natural appearance of the students’ interactions in the experiment was attributed to the humanlike appearance and behavior of the robot.
Humanoid Robots—from the Laboratory to the Home
The development of humanoid robots has thus far been a long and slow process. The first serious development of humanoids began at the School of Science and Engineering at Waseda University in Japan, with the commencement of the WABOT project in 1970. The first full-scale humanlike robot, WABOT-1, was completed in 1973. It could talk (in Japanese), it could measure distances, it could walk, and it was able to grip and carry objects with hands that incorporated tactile sensors to allow the robot to feel what it was carrying. It also had an artificial mouth, ears, and eyes.
In 1984 came the musician robot WABOT-2, designed to play a keyboard instrument. This task was chosen by the Waseda engineers as one that requires humanlike intelligence and dexterity. WABOT-2 could read a musical score, play tunes of average difficulty on an electronic organ, and accompany someone who was singing a song.
The most dramatic development thus far in the Waseda project started in 1986: creating a robot that can walk like a human. Well, almost. Its feet edge slowly and deliberately forward, and even after twenty years’ research it is not yet able to qualify for the walking championship in the Olympic Games. But it has long been able to climb up and down stairs and inclines, it can set its own gait so as to be able to move on rough terrain and avoid obstacles, and it can walk on uneven surfaces.
The March of the Humanoids
Once upon a time, before the advent of the PC, computers were so expensive that they were rarely found outside the confines of government, big business, and academia. Reasons for this expense included the high cost of powerful processing units—the “electronic brains” that enabled the computers to compute—and of the computer memories that had to be employed to store the programs and their data. All this changed in the late 1970s, when inexpensive microprocessors became available, devices that cost a few dollars but could perform calculations and the electronic manipulations of data that only a few years earlier would have required a “mainframe” computer.* Suddenly there were computers in the home, such as the Commodore PET and the Sinclair Spectrum, inveigling themselves into people’s daily lives. Androids have not yet reached that level of integration into our society, but their day is fast approaching.
Robots are not yet just like us, obviously. They behave in most respects in what we currently refer to as “robotlike” or “robotic” ways. One physical manifestation of this is how biped robots walk, slowly and deliberately moving their feet, making it obvious to the observer that they’re thinking about every step. Even the most advanced android robots today move in this extremely slow and deliberate manner.* Similarly, the best of today’s conversational software can be recognized as artificial by just about all the judges at the annual computer-conversation competitions. So as yet we cannot fairly describe our robots as being sociable, because to be considered sociable they would first need to be more humanlike. But that will come. When robots are perceived as making their own decisions, people’s perceptions of them—as solely tools for mowing the lawn and other domestic tasks—will change. And just as the day will arrive when, all of a sudden, robots are sufficiently humanlike to be considered for the epithet “sociable,” so the day will also come when robots are sufficiently sociable, in human terms, to be considered as candidates for our deepest affections.
Why do I believe that the necessary change in thinking will take place among a wide body of the population, a change sufficiently dramatic to alter people’s perception of robots from that of servants to their being our friends, companions, and more? It is because we have already seen other instances of the process necessary to bring about similar changes in our ideas about the roles of robots. This process requires two components—a change in our social and/or cultural thinking and a significant leap in technological capability.
There are several examples from the twentieth century of major social and cultural changes—particularly those relating to women: their enfranchisement as voters; their role in the home and in parenting, developing from that of dutiful housewives to members of a more equal partnership; their role in the workplace, from filling only the more menial jobs to taking on management and executive positions; the advances in female contraception that have given women more choices regarding their lifestyles and careers. Society is also undergoing a change in ideas regarding senior citizens, moving away from the expectation that one works with retirement in mind—and the sooner the better—to what is becoming regarded as a more economically sound model—namely, that later retirement means more earning potential and a lesser financial burden on the state, on one’s children, and on inadequate pension schemes. Another change that has become apparent in recent years is in society’s view of human appearance, as our concerns over obesity can be seen to lead to cultural expectations regarding the “correct” body size and shape, the result being that many women develop eating disorders while they try to stay (or become) thin. Also more apparent nowadays are cultural changes in individuals, as those who encounter people of other cultures sometimes question the ideas and conventions of their own culture, and change as a result.
Leaps in technology occur frequently. In the case of humanoid robots with the capabilities described in this book, most of the more difficult advances will be in the realm of the robot’s software—the computer programs that give it emotions and personality, that enable it to think, to understand what is said to it, to conduct a conversation, to make intelligent deductions and assumptions. These advances will come partly through new techniques in artificial intelligence—in other words, through new programming ideas—and partly because of developments in computer hardware, in the chips or whatever it is that will do the thinking, and in the computer memories that store the massive amounts of information robots will need. We have seen for many years that computing speeds and computer memory sizes increase steadily, year upon year, but the increases we have witnessed during the past two or three decades will pale into insignificance when completely new technologies become mainstream, technologies that go under names such as “optical computing,” “quantum computing,” “DNA computing,” and “molecular computing.” So rest assured, the advances in technology needed to create the robots that I describe in this book will indeed come. It is only a matter of time, and technological advances are happening ever faster as time goes on. The more we know about a science, the faster we are able to discover even more about that science and to develop technologies based on this new knowledge.
When we combine significant change in our social and cultural thinking with massive advances in technology, one result is the creation of entire new product categories, products that take advantage of new technologies to implement the ideas that make social change possible. When we have the technology, when we are receptive to the social change, society will move forward in that new direction. Robots as dance partners, for example—in 2005, Tohoku University in Japan demonstrated a dancing robot that can predict the movements of its dance partner, enabling it to follow its partner’s lead and to avoid treading on any toes. Another example is robots as university lecturers and public speakers—Hiroshi Ishiguro, from Osaka University’s Intelligent Robotics laboratory, has made casts of himself that form the basis for clones that he sends to deliver lectures in his stead. Then there is the robot sales assistant, developed by Fujitsu, that works in a Japanese department store, guiding customers around the store and carrying their shopping. And a receptionist, only twenty inches tall, manufactured by the Business Design laboratory in Nagoya, Japan, that asks visitors their name, can recognize as many as ten different faces, and tells visitors when the person they have come to see is ready to meet them. The examples go on and on, with every year bringing its own crop of new applications for robots.* Robot jockeys that ride camels in races, robot butlers…And most of them, as you will have realized by now, are developed in Japan.
One non-Japanese product that has been a big commercial hit is the Robosapien android robot, a Chinese-American coproduction. Robosapien was the first affordable humanoid to come on the market. It was a toy designed by Mark Tilden, a former NASA scientist, manufactured in China and incorporating simple forms of some of the technologies described in this book. It could exhibit several movement-related capabilities, including using its articulated arms to pick up objects such as cups, socks, pencils, and other small, light objects; throwing, dancing, and effecting a few karate moves. The toy reacted to touch and sound signals and had sensors in its feet to enable it to detect and avoid obstacles. It could also walk at two different speeds. Robosapien had personality as well—if it wasn’t given any commands for a while, it would go to sleep and start to snore! At the price, around eighty-nine dollars in the United States, Robosapien was a sensation. The first of its kind.
The commercial success of the Robosapien during the second half of 2004, when in Britain alone some 160,000 were sold, was perhaps the first stage of the assimilation process for robots. Robosapien was remarkable mainly for its ability to perambulate, albeit in a typically deliberate and robotic manner. When vision technology is added to enable this toy and others to recognize people and objects, when natural-language-processing and speech-synthesis technologies enable them to understand what people say to them and to reply sensibly, when cognitive technologies enable them to learn and to plan how to solve problems, then robot toys will become part of the family, rather like a new breed of family pet. But instead of requiring feeding, vet bills, and expensive places to stay when you take your vacation, these electronic pets will carry a once-in-a-lifetime cost of a hundred bucks or thereabouts, rechargeable batteries included. In the meantime humanoid robots are somewhat more expensive. Mitsubishi’s Wakamaru will look after your house while the family is absent, monitor the health of a sick relative, connect itself to the Internet and sort your e-mails, recognize up to ten faces, understand some ten thousand spoken words (in Japanese), encourage you to visit the gym, and be “convenient for the life of family members.”16 A real deal, at around $14,300.
In concluding the first part of this book, I very much hope that any readers whom I have failed to convince as to the viability of emotional relationships between humans and robots will not close their minds to the possibility but at least be willing to observe without prejudice as advances in robotics and AI arrive thick and fast during the coming years. Deb Levine’s stimulating turn-of-the-millennium article “Virtual Attraction: What Rocks Your Boat” makes an excellent case for at least remaining open-minded:
As time goes on, it will be important for society to recognize the various ways people are interacting intimately as valid and equal. Right now, some relationships, specifically marriage between heterosexual couples, are valued more than others are. As technology enters more people’s lives, and we are exposed to a variety of different attractions and relationships, it will be important to recognize and equalize virtual forms of attraction and communication with more traditional face-to-face interactions.17