5 Caring about the Feelings of Lovers and Baristas
Preparing for a future in which the digital package eliminates many jobs and fails to generate many to replace them is good insurance against the uncertainties of the Digital Age. It is a mistake to assume that the digital package will, after a brief interlude of technological unemployment, create better jobs to replace those it destroys; we must look beyond it for such jobs. We should expect the digital package to create new economic roles. Humans may be able to fill these roles. But the protean powers of the digital package should lead us to expect that these roles will be better filled by machines than by humans. The money that would fill the pay-packets of these human workers creates an incentive to build machines that are both more efficient and cheaper.
This chapter makes a case for a category of social jobs grounded in our interest in interacting with beings with minds like ours. Humans prefer to interact with members of our particular chapter of the mind club. This vision of how we could inhabit the Digital Age requires that we reconsider what we value about work. I argue that we should create societies organized around social-digital economies. These economies feature two different streams of economic activity, centered on two different kinds of value.
The principal value of the digital economy is efficiency. We will assess potential contributors here in terms of the efficiency with which they produce outcomes that we value. If we characterize the purpose of work in terms of its outcomes, then we cannot long resist the machines.
The principal value of the social economy is humanness. The rewards here flow to beings with human minds, beings capable of feelings like ours. We take pride in our distinctively human chapter of the mind club and enjoy interacting with other members of it. We value efficiency here too. But we sometimes prefer less efficient arrangements with humans to more efficient arrangements without humans. Turing fantasized about machines with minds like ours. The pragmatic interest in AI that has given us machine learning directs us to enquire after the opportunity cost of making machines capable of small talk. The machines that really power the Digital Age will lack the all-round combination of mental abilities that we equate with being human. We will rightly reject the inefficiencies of humans when they stray into parts of the economy that emphasize the skills of the computer. But we should have the courage to reject digital technologies when they trespass on distinctively human domains. When we do so, we cite our strong preference for outcomes produced by beings with human minds.
The social-digital economy is not a forecast. We could easily realize some version of a digital dystopia in its stead. In one version of this digital dystopia, efficiency rules. Humans are forced to compete on price with more efficient machines. Eventually they succumb to the economic argument against human work. I offer the social-digital economy as an especially attractive way for our species to inhabit the Digital Age. It will require some effort and attitude adjustment to achieve it. We hear much about the wonderful toys and apps of the Digital Revolution. We need a matching social revolution—social as opposed to socialist—that gives human workers displaced by the Digital Revolution opportunities to work in the social economy. The Digital Revolution promises the technologies of science fiction. The social revolution could restore some of the gregariousness that was a feature of our pre-civilized past.
What Is It Like to Love a Robot?
The defining features of jobs on the social side of the social/digital divide are human relationships. Many of the most important relationships of our lives occur outside of the context of work. These include relationships with romantic partners, children, and friends. But many important relationships are made and play out within the context of work. We enjoy distinctively human relationships with our employers, work colleagues, and those we employ. We form such relationships with those to whom we provide services, and those who provide services to us. We may place greater importance on relationships that we form outside of work. But that does not prevent relationships that occur in the context of work from mattering too.
The poor performance of machines in jobs centered on relationships is most apparent when we think about romantic relationships. A theme of recent stories about artificial intelligence is that robots make disappointing lovers. In an episode of the TV series Black Mirror, Martha replaces her deceased boyfriend Ash with a synthetic look-alike programmed with a personality assembled from Ash’s extensive online contributions. She is initially impressed by the duplicate’s performance in the sack—human Ash is presented as a somewhat disengaged lover. When Martha asks synthetic Ash how he manages it, his answer—“Set routine, based on pornographic videos”—seems unsatisfactory. It’s all downhill from there as Martha tries and fails to elicit from synthetic Ash the kinds of responses that would come from a human lover. The episode ends with synthetic Ash being stored in the attic. There he uncomplainingly awaits sporadic social interactions with Martha and her daughter on weekends and birthdays.1
There are aspects of lovemaking that favor the value of efficiency. Synthetic Ash outperforms human Ash in these respects. He lasts longer than 30 seconds and facilitates more orgasms. His mode of programming suggests he could have a Fifty Shades of Grey setting that Martha might save for special occasions. These aspects of lovemaking clearly matter. But humanness matters too. Martha wants a lover who does more than just a very good job of going through the requisite motions. She wants to be with someone who can reciprocate her feelings. This, synthetic Ash can’t do.
It’s possible to imagine improvements to synthetic Ash. His principal deficiencies are in areas in which there is especially rapid progress. His programmers will predictably come up with more human-like continuations of conversations that open with “I love you.” Perhaps his romantic programming will achieve a state of virtuosity that makes him hard to distinguish from human lovers. That is something we might expect given the rapidity of progress in artificial intelligence. But this misunderstands Martha’s complaint about synthetic Ash. She is not really complaining about his responses. Rather her complaint is about what she strongly suspects is an absence behind those responses. She doubts that there are any genuine feelings and emotions behind his romantic affirmations. Viewed from this perspective, none of these improvements to Ash’s programming would suffice to transform synthetic Ash from a being with a mental life no different from that of a laptop computer into a being with a mental life like Martha’s. Suppose that Ash was a human being who had survived an accident that had damaged the parts of his brain that directed his romantic behavior. Suppose he was attempting to relearn how to make love by obsessively watching pornographic movies. This pattern of behavior is far from the romantic ideal. But Martha could be confident that this version of Ash has a mental life that resembles hers in significant ways. There are real feelings behind his declarations. Real feelings guide his actions.
Consider the Hollywood reflection on robotic romantic partners in the 1975 film The Stepford Wives. In this movie, the women of an idyllic Connecticut town have been replaced by robots that care about nothing beyond housework and attending to their husbands’ needs. It’s hard to tell what is more chilling about this scenario—the idea that the wives might lack distinctively human mental lives, or the idea that the men might take so little interest in the inner mental lives of their wives that they would consider this arrangement an improvement.
I cite the countless love song lyrics that refer to feelings as evidence for this interest in the feelings of our romantic partners. These range from Whitney Houston’s request “I Wanna Dance With Somebody (Who Loves Me)” to the Bee Gees’ query “How Deep Is Your Love” to the Righteous Brothers’ fear that “You’ve Lost That Lovin’ Feelin’.”
The philosophical term of art for this feature of Martha that seems absent from synthetic Ash and the Stepford wives is phenomenal consciousness. The more familiar term is feelings. Put another way, there’s something that it’s like to be Martha. We worry that there’s nothing that it’s like to be Ash or the Stepford wives. We are members of the mind club.2 Our particular chapter of that club is defined by distinctive human thoughts and feelings.
Don’t expect a detailed philosophical theory about the nature of feelings here. It is, however, useful to gain some perspective on the seemingly intractable philosophical problem of phenomenal consciousness. Considered in the long view, it seems to come from a long-standing desire to find reasons that might justify our specialness. Some religious believers claim that what makes humans different from the rest of nature is our possession of immortal souls. They would deny these to even the most powerful machine learner. According to some, these souls were infused into us by an almighty god. This story about human specialness is less influential in sections of society in which there has been a decline in religious belief. The theory of phenomenal consciousness presents itself as a replacement for immortal souls, one amenable to those who lack religious belief. There’s something that it is like to be a human being, but nothing that it is like to be a tree or a machine learner. The evidence for the existence of immaterial phenomenal consciousness comes not from holy books but from the data of our experience. In the most historically influential of these arguments, René Descartes noted that our access to and experience of our own minds seems to differ in fundamental ways from our access to and experience of physical objects.
This chapter’s engagement with phenomenal consciousness differs from the philosopher’s traditional way of engaging with it. The traditional philosopher’s question about conscious robots is: “Is there something that it is like to be a robot?” Opponents of the possibility of conscious computers identify aspects of the ways in which computers process information that may make this processing crucially different from human conscious thought. They argue that the impressive computational achievements of future computers require no phenomenal states. Defenders of the possibility of conscious computers argue that there are no good grounds to deny them conscious states. As we have seen, computers have demonstrated an impressive ability to perform cognitive tasks in which humans formerly had a proprietary interest. They beat us at chess and at Jeopardy! Why shouldn’t improvement in programming and computing power soon produce machines with feelings?
Our starting question switches focus. It takes a second-person perspective on conscious computers. We ask: “What is it like to love a robot?” Doubts about the robot’s capacity for conscious experiences make a difference to our experience of being in a romantic relationship with it. We approach uncertainties about the truth of philosophical propositions differently when they make a crucial difference to our lives. When it comes to the relationships that matter most to us, the suggestion that there might be nothing that it feels like to be your romantic partner is terrifying. A romantic partner is supposed to be much more than a sex toy that can also make coffee and pick the kids up from school. This second-person perspective changes how we feel about computers and phenomenal consciousness. Our interest is not first and foremost in the truth of a philosophical proposition. Rather it’s in a feature of our lives that most of us place great importance on.
Suppose that you follow the philosophical debate about conscious machines. You find and study the best presentations of the arguments for and against the consciousness of computers. Early on in your investigation you encounter the argument of the philosopher John Searle that computers are incapable of thought, and by implication of conscious thought.3 As we saw in chapter 2, Searle’s famous Chinese Room thought experiment supports the conclusion that even the most sophisticated program neither requires nor generates genuine thought. A quick Google search will reveal many philosophical responses to Searle. Some philosophers think that Searle exaggerates the differences between brains and computers. It’s difficult to see how computation might generate conscious thought, but it’s also difficult to see how the firing of neurons and the adjustment of synaptic action potentials could do this. In the latter case consciousness seems to be a kind of emergent property that comes into existence with sufficient neuronal and synaptic activity of an appropriate kind. Perhaps the very complex computers of the future could also generate that emergent property. This is a breathtakingly fast summary of a very involved philosophical debate. Here I am less interested in the details of the debate itself than in making a high-level observation about the practical implications of philosophical conjectures. What should the fact that a very smart philosopher thinks that your cyber-lover, who seems so attentive to you, is incapable of the barest, most minimal thought mean for you?
There is a difference between the philosopher’s dispassionate engagement with questions about phenomenal consciousness and the way we tend to engage with questions about the feelings of our lovers. Suppose that you decide that the arguments in favor of the possibility of conscious computers are, on balance, more persuasive than the arguments against. You pronounce yourself a believer in the proposition that computers can be conscious. But philosophical debates are typically not eligible for decisive resolution. This is reflected in the split among informed, intelligent philosophers on the question of whether a human brain is a prerequisite for phenomenal consciousness. It’s helpful to think here in terms of credences, or degrees of belief. A credence of 1 in a proposition indicates absolute confidence in its truth. A credence of 0 in that proposition indicates absolute confidence in its falsehood. Disagreement among those best informed about whether digital machines can have feelings makes it rational to avoid either of these extremes. You might conclude that Searle’s arguments are, on balance, slightly more persuasive than the many responses. A credence of 0.7 in Searle’s conclusion is a rational way to reflect that assessment. It registers your recognition that there is a real chance that Searle is wrong. Or you might conclude that the arguments of Searle’s opponents are, on balance, more persuasive. A credence of 0 in Searle’s conclusion misrepresents this considered assessment. A credence of 0.3 should suffice for you to present yourself in a philosophy seminar as someone who rejects Searle’s conclusion. You stand ready to adjust that credence up or down depending on any subsequent arguments that you encounter.
Now consider your second-person interest in the phenomenal consciousness of machines. Suppose you discover that the head of your life partner contains not a human brain, but a powerful digital computer. You would presumably be horrified to learn that your significant other not only did not reciprocate your feelings but was not the kind of being who ever could. The idea of a lover who goes through all the romantic motions, performing actions that are perfectly suggestive of feelings but for whom it’s entirely dark inside, is the stuff of nightmares. The suggestion that the arguments in favor of computer consciousness are on balance slightly more persuasive than those against does not fully meet this worry. Suppose that being generally persuaded by the arguments of advocates of machine consciousness translates into a credence of 0.7 for the proposition that digital machines can be conscious. This might suffice for you to join the philosophical debate on the side of the defenders of conscious computers. But its second-person implications are chilling. It translates into a 0.3 credence in the proposition that there is nothing that it is like to be your lover. As anyone who has ever placed a bet should know, 30 percent of the bets placed on outcomes that have a 70 percent probability of occurring end up losing. When it comes to those you love, you want to hear something better than “The philosophical arguments in favor of the phenomenal consciousness of robot romantic partners are, on balance, slightly more persuasive than the arguments against.” You should always be worried when a scan of your significant other’s head reveals not a brain, but instead some densely packed circuit boards.
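For readers who like to see the arithmetic, a minimal simulation makes the betting point concrete. The sketch below, in Python, is purely illustrative: it treats the 0.7 credence as a 70 percent chance that the proposition about your partner’s consciousness is true, places a large number of notional bets on it, and counts how many of them lose.

```python
import random

# Illustrative only: interpret a credence of 0.7 as a 70 percent chance
# that the proposition "my robot partner is conscious" is true.
CREDENCE = 0.7
TRIALS = 100_000

# A "lost bet" is a trial in which the proposition turns out false.
losses = sum(random.random() >= CREDENCE for _ in range(TRIALS))

print(f"Bets placed: {TRIALS}")
print(f"Bets lost:   {losses} (~{losses / TRIALS:.1%})")
# The losing fraction hovers around 30 percent, as the text claims.
```

The losing fraction settles near 30 percent, which is exactly the residual chance that there is nothing it is like to be your lover.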
We can render this reasoning in more emotionally vivid, less philosophically abstract terms. Suppose someone were to tell you that there was a genuine possibility that the fully human love of your life, despite protestations to the contrary, actually felt nothing for you. He or she is just going through the motions, exclusively focused on the material benefits brought by the relationship. The love of your life would celebrate your premature passing, performing the grief act just long enough to materially benefit from your will. Further suppose that this possibility is accurately rendered as 0.3. This means that it’s more likely than not that the love of your life has feelings for you that resemble yours for him or her. There’s a 70 percent chance that he or she is sincere and a 30 percent chance that, deep down, his or her feelings are a combination of contempt and indifference. I suspect that this would be very bad news about your relationship.
We should distinguish this reasonable doubt about the experiences of robots from a species of extreme skepticism whose appeal precipitously declines outside of the confines of academic philosophy. The “problem of other minds” is a philosophical perennial. My capacity for introspection makes me very confident that I have a conscious mind. But I have no such introspective access to your conscious mind. How can I be sure that you have one? Might there be nothing that it’s like to be you? Could the world be populated by one conscious being—me—and a few billion mindless zombies in human form?4 I have nothing further to say about this philosophical skepticism beyond saying that our confidence in the consciousness of other members of our species should be greater than our confidence in the consciousness of even very sophisticated robots. The same neurological hardware that seems to generate consciousness in us also exists in the heads of our human romantic partners. Similar reasoning is the basis of our confidence that dogs that act like they are in pain are in fact experiencing pain. The contents of their heads are similar enough to ours for us to be confident that there is more happening there than a computer’s error signal when you persist in pressing the wrong key. If scans of your lover’s head reveal innards that are little different from those of a MacBook laptop, then you should be worried. Doubts about the reality of his or her mental life should increase. Someone who claims to seriously doubt that the patently human biological brain in your head can generate consciousness should be dismissed as a philosophical crank.
It’s rational to carry a bias for human lovers well into a future in which sex robots manage to muster human-like answers to “I love you.” Suppose that you have a choice between a human lover and an artificial lover that is the product of a mature artificial intelligence. The artificial lover reliably produces human-like responses to inquiries. The probability that a programming glitch will lead it to respond to some romantic entreaty of yours with “Does not compute!” is about the same as the probability that your biologically human lover will be unable to respond appropriately because he or she is having a stroke. If you select the human, then your choice is likely to have been influenced by the belief that humans are special. You may strongly suspect that the machine version has feelings. You might accept that some very intelligent philosophers find no grounds to distinguish between the two potential lovers. But you’ll acknowledge that this is a quite serious thing to be wrong about. In affairs of the heart it’s best to go with the neurophysiology that you know from your own case can produce conscious feelings. The possibility that the individual you spend much of your life loving not only doesn’t feel the same way about you but is incapable of doing so is, for most of us, profoundly chilling.
I have presented a bias toward romantic relationships with humans and away from relations with artificial substitutes as a rational response to uncertainties about robot experience. There seems to be an emotional grounding for this bias too. Japanese roboticist Masahiro Mori has described an uncanny valley that seems to characterize our responses to artificial beings.5 We tend to experience a sense of unease when interacting with a computer-generated character or robot that very closely resembles but is nevertheless distinguishable from a human being. The uncanny valley has been an obstacle to Hollywood’s attempts to produce animated versions of humans. The 2004 movie The Polar Express contained animated characters that looked very like human characters but were nevertheless distinguishable from them. The result is described by the CNN.com movie critic Paul Clinton as “creepy.” Clinton says the movie’s human characters look “soul dead.” The result is that “The Polar Express is at best disconcerting, and at worst, a wee bit horrifying.”6 Among the obstacles for CGI characters as objects of empathy or sympathy are their eyes. When we converse with other humans a significant amount of our attention is directed toward their eyes. The white sclera of human eyes seems to have evolved to perform a social signaling function. We have a strong evolved interest in what other humans are attending to. When the eyes of a computer-generated character behave in ways that don’t seem human we immediately suspect a counterfeit. The loose resemblance to us of the Star Wars protocol droid C-3PO prompts no sensations of angst. We even find C-3PO endearing. As androids get more human-like they tend to produce reactions of unease. Here a near miss in terms of appearance and behavior translates into a big miss in emotional terms. The same built-in emotional bias seems to characterize our response to human-like robots.
From Romantic to Work Relationships
There are many differences between romantic relationships and relationships that occur in work contexts. The relationships we enter into with doctors who prescribe our antibiotics or baristas who make our cappuccinos tend to be more temporary. In the case of the barista who makes a morning espresso, they may be positively fleeting. But they share a feature with our romantic relationships. We have an interest in the mental lives of our doctors and baristas. Our social needs enmesh us in a very wide variety of relationships with other human beings. The relationships we enter into with our lovers or children matter a great deal to us. When one of those relationships goes badly it has the power to wreck your life. The relationships we enter into in work contexts tend not to have this power over how your life goes. But they still matter. When your barista pointedly ignores your morning salutation, your day goes a little worse. Our very social evolutionary history makes sense of this reaction. John Cacioppo and William Patrick call humans “obligatorily gregarious.”7 They say, “As an obligatorily gregarious species, we humans have a need not just to belong in an abstract sense but to actually get together.”8 The paradigm of a good life for an obligatorily gregarious member of the species Homo sapiens contains many positive social experiences—greetings offered and reciprocated, inquiries after your thoughts when you appear to be particularly pensive, offers of assistance with a difficult door, and so on. Someone who suffers from social isolation probably has fewer and poorer-quality relationships of all types. We can understand the significance of these relationships to us if we view them as relationships between beings with conscious minds. Beings who share your most valued feature—they have minds—care about you. Other beings with minds like yours may loathe you—but at least you matter in some way to them. The editing out of humans from our daily lives may lead to your needs being met in ways that are objectively superior. But it deletes these distinctively human aspects.
A 2015 study by researchers at Oxford University and Deloitte presents “waiter or waitress” as “quite likely” to be automated within two decades—a 90 percent probability. If we focus exclusively on efficiency, then this claim makes sense. An appropriate measure of efficiency for waiters takes into account accuracy in the taking of orders, the speed with which those orders are conveyed to the kitchen, the time it takes for ordered food to be sent to the right diners, the rate at which plates are dropped, the accuracy in totaling bills, and so on. It’s not hard to imagine machine waiters being more efficient than their human coworkers. But we value the fact that our waiters are beings who are capable of feelings like ours. We have an interest in their mental lives. We like waiters who ask “Did you enjoy the goulash?” not because their programming reflects a finding that diners like to be asked such questions but because they have some genuine, if fleeting, interest in how we liked it. The value of humanness is likely to be entirely opaque to visiting extraterrestrials. They are unlikely to care about the difference between being served food by a machine or a human. Perhaps our best philosophical arguments in favor of the phenomenal consciousness of the waiter will seem absurd to them. But we humans find these arguments plausible, even if not fully philosophically convincing, and hence the difference matters to us.
Other jobs that center on human relationships are teacher, nurse, counsellor, actor, and writer. I’ve suggested that part of the value that we place on the performance of these jobs is the fact that they place us directly into contact with other humans. We receive the help that we need from beings who have minds like ours.
How does this suggestion that we value interacting with beings who have minds like ours support a Digital Age with human workers? Work is a key part of the solution to the problem of how we build a successful society out of humans who are total strangers to each other. Pre-Neolithic forager bands were face-to-face communities. Foragers typically treat strangers with suspicion. The economist Paul Seabright presents successful societies of strangers as one of our species’ signal achievements.9 He reflects on the improbability of such societies given our evolution as a “shy, murderous ape that had avoided strangers throughout its evolutionary history.”10 We are “now living, working, and moving among complete strangers in their millions.” What makes this remarkable, according to Seabright, is that “Nothing in our species’ biological evolution has shown us to have any talent or taste for dealing with strangers.”11
Seabright gives much of the credit for this conversion of a shy, murderous ape into a gregarious, trusting human to markets and the institutions that form around them. Foragers must cooperate with members of their band—relatives or at least individuals they know well. Human societies thrive because of mutually beneficial exchanges. When hunters work together they achieve more than all of them could have on solo expeditions. Work translates these one-off mutually beneficial exchanges into long-standing arrangements. The cooperative enterprises of the technologically advanced societies of the early twenty-first century are significantly more complex than the group hunting or gathering expeditions attempted by foragers. Think of all the diverse contributions of individuals, many of whom are strangers, that go into supplying a house with power and ensuring that you can use that power to conduct a Google search. Jobs tend to standardize contributions to cooperative undertakings. When you become your village’s blacksmith you advertise yourself as being the person to go to for objects made from wrought iron. You stand ready to accept the trade goods or money of total strangers in exchange for your handiwork. If you take a job as a software engineer working in Google’s search division, you stand ready to meet the Internet search needs of many millions of total strangers. Work generates valuable goods. But it is also an important social glue that binds suspicious strangers into successful cooperating communities.
Humans may be obligatorily gregarious, but when left to our own devices that gregariousness tends to be parochial. We seek out people we know or people who resemble us in ways that we care about. We don’t seek out strangers. In one study demonstrating our preference for people like us, Angela Bahns, Kate Pickett, and Christian Crandall compared the social groupings that formed at a large state university, which offers lots of choice about whom to bond with socially, with the groupings at small colleges from the same geographical region.12 They found that, when given the choice, students used the greater choice of the large university to find others similar to them. They may say that they arrive on the very diverse campus of the large university excited about the array of different kinds of people they could form social bonds with. But this doesn’t seem to describe how they behave once there. This preference for those we know or those who resemble us in some way that we judge significant makes sense in the light of human forager origins. For foragers, strangers are scary.
Work is a way to create bonds between strangers in large, diverse societies. It requires you to reach out beyond your forager comfort zone and form relationships with scary strangers. The fastest way for your coffee business to go under is to limit its services to friends and relatives. You want to serve coffee to all comers. A robotic barista may be a more efficient provider of lattes than its human equivalent. But when you receive your latte from the human you generate some of the social glue that fashions diverse strangers into harmonious societies. A brief visit to a Starbucks is likely to require social contact with someone your forager shyness suggests you should avoid.
Suppose you have the misfortune to be raised by racist parents. You go out into the world and find that you share a workplace with members of the group you were raised to hate. The relationships you form with coworkers may be a powerful way to reverse this unfortunate aspect of your upbringing. There is reason to believe that working together to achieve a shared goal is an especially effective way to overcome mistrust.13 You may have been raised to hate Muslims, but when your employer places you in a group that contains Muslims and requires you to work with them to achieve a difficult goal, then the prejudice of your childhood faces a serious challenge. Cooperative relationships are not unique to work. If you sign up to play soccer on the weekend you may find members of mistrusted groups on your team. But we should acknowledge that work is an important venue for them.
Those who value the diversity of modern multi-ethnic societies and who accept or even express enthusiasm about the end of work should suggest some alternative source for the social glue supplied by work. But they must do more than merely imagine it. Perhaps it is possible to create a socialist paradise in which everyone joyfully complies with the proposal popular in nineteenth-century socialist circles and popularized by Marx: “From each according to his ability, to each according to his needs.” In this socialist paradise, it won’t matter that some of the needy speak languages that differ from the majority’s, worship differently, and look different. A socialist paradise would be wonderful. We might set it as a long-term goal. In the meantime, however, we can use the institution of work both to give people meaningful lives in the Digital Age and to ensure that they form bonds with strangers. To put it pithily, work works. It’s something that we have. It’s worth preserving until we have a proven replacement.
How will we stop our forager parochialism from reasserting itself and restricting us to friendships with people we judge to be like ourselves? I return to this issue in chapter 7, where I discuss the enthusiasm of advocates of the universal basic income for a world either without work, or at least with much less work.
The recognition of humans as obligatorily gregarious allows that sometimes we just want to be alone. A general preference for being with and dealing with other humans is a legacy of our evolutionary past. This general preference does not mean that we must always be with beings capable of feelings like ours.
Much is written about advances in medical robotics. I’ve suggested that, in general, we enjoy the benefits brought by human doctors and nurses. But sometimes in medicine we do want to be away from other humans. Some procedures are embarrassing. If a robot performs my prostate exam I have no grounds for awkward feelings about the state of my bum and worries about my decision to order the extra-spicy vindaloo for lunch. Here the absence of mental states is an advantage. But these cases should be considered exceptions to the general rule about a preference for interactions with other human minds. We are social creatures even if we sometimes want to hide out in our man or woman caves. This preference may extend to the seemingly very human activity of therapy. Writing in the Atlantic Monthly, Derek Thompson observes that “some research suggests that people are more honest in therapy sessions when they believe they are confessing their troubles to a computer, because a machine can’t pass moral judgment.” Thompson does not think that this means that therapists will soon suffer the fate of handloom weavers in the Industrial Revolution. “Rather, it shows how easily computers can encroach on areas previously considered ‘for humans only.’”14 The view of the Digital Age defended in this book presents this preference for machines as a marginal phenomenon—cases in which we take time out from other humans, not expressions of a desire to restructure our lives to eliminate human interactions. I’ve suggested that we have a general preference for human romantic partners. But sometimes you’d prefer a few minutes with some sex tech to the sturm und drang of “making love” with another human being.
There’s another side to our interest in the conscious experiences of workers in the Digital Age. When we socially enhance the job of sales assistant we make it more enjoyable. Mihaly Csikszentmihalyi and Judith LeFevre’s paradox of work suggests that our notion that leisure time is more enjoyable than work time is for the most part false. Some of the work in today’s technologically advanced societies is a dull grind. But some of it is meaningful. If we choose to live in a society in which this work is automated, then we choose to eradicate many of these pleasurable experiences. I suspect that there is an inconsistency in our attitudes here. We well understand the pleasure that we get from performing purposive activities such as taking an evening stroll or signing up for a pottery class. People who enjoy strolls and pottery don’t claim to be the most efficient at those activities. You would scoff at the suggestion that these and other activities that we enjoy be automated. The pleasure your inferior performances give you makes these activities worth doing. There’s something enjoyable about getting clay on your hands. You should spare some regard for the positive experiences of the human nurse who checks your blood pressure and the barista who produces your special coffee, together with the signature design on its froth.
We can imagine that when the robo-psychotherapists of the very distant future tell patients that they know what it feels like to go through bereavement, they will mean it. I’ve claimed that we have a bias in favor of human experiences, but nothing I have said suggests that it’s impossible that robo-psychotherapists could have these feelings. But this would be an odd and self-abnegating direction for us to take digital technologies. It’s clear where there’s a need for automation. A fully automated cockpit that gets me from Wellington to San Francisco more safely than a human pilot would be a digital technology worth having. But what could be true for pilots need not be true for roles based on our social natures. Why automate jobs that we are both good at doing and find deeply meaningful? A collective decision to automate these jobs is a bit like seeking to automate your evening stroll.
What Counts as a Social Job?
It may be relatively easy to see how barista and teacher are social jobs. But the suggestion that the central feature of such jobs is the making of connections between beings with similar minds enables us to see many other jobs as essentially social.
When you read Harry Potter you are reading something written by J. K. Rowling. Rowling’s writing is a social activity that places her into contact with millions of readers. When readers speculate about Hogwarts they seek to gain some insight into her mental life. Rowling had distinctive thoughts and feelings when writing about the villainous Lord Voldemort. We are impressed that the entire Harry Potter universe was brought into existence by that very powerful human imagination. The revelation that all the Harry Potter books were written by a story-writing AI would not be merely interesting. It would be a profound disappointment to the books’ many millions of fans—they would have learned that there was no distinctively human consciousness on the other side of those pages.
The relationship between Rowling and her readers is asymmetrical. Rowling affects her readers in ways that they mostly cannot affect her. She writes the books and they read them. For certain kinds of human relationships some approximation to symmetry is important. In general, we want our romantic relationships to be symmetrical. We tolerate a significant degree of asymmetry in relationships that make lesser contributions to our well-being.
The point is not that humans cannot be tricked by machines programmed to write fantasy novels. The point is how we respond to such trickery. In chapter 2 I argued that our judgments about mind go deeper than the snap judgments of our Hyperactive Agency Detectors. Many of today’s machines are granted provisional entry to the human chapter of the mind club. Our assessments of mind draw on more information than is generated by the 5 or 25 minutes of conversational probing available to Loebner Prize judges. Assessments of mind are ongoing. When you learn that a chatbot has used the sexy-talk strategy to influence your decision about whether it was a member of the human chapter of the mind club, you seriously consider reversing your initial admission of it.
In their 2017 book Machine, Platform, Crowd, Andrew McAfee and Erik Brynjolfsson describe a chemistry professor and music aficionado who hears a work composed by the music-writing AI Emily Howell and pronounces it “one of the most moving experiences of his musical life.”15 The professor later hears a recording of the same music at a talk given by Emily Howell’s programmer and says “You know, that’s pretty music, but I could tell absolutely, immediately that it was computer-composed. There’s no heart or soul or depth to the piece.”
This reaction is not the absurdity that McAfee and Brynjolfsson present it as. The claim that the aficionado could “tell absolutely, immediately that it was computer-composed” is clearly false. But his assessment that it has “no heart or soul or depth” can be true so long as we understand it as drawing on beliefs about the mental states that lie behind its composition. Just as we can reverse our provisional inductions of Loebner Prize winners into the human chapter of the mind club when we learn more about them, so too we can reassess the value that we initially place on Emily Howell’s compositions. We can come to regard being moved by its compositions as a response procured under false pretenses. Tchaikovsky wrote his 1812 Overture with its volleys of cannon fire to celebrate Russia’s defense against Napoleon’s army. Any cannon volleys written into Emily Howell’s compositions will have no connection to beliefs about heroic repulses of invaders. It’s perfectly appropriate to grant that its compositions may count as “pretty music” but to retract any provisional attributions of “heart or soul or depth” when we learn more about how they were composed.
Can I Justify My Pro-Human Bias?
I suggested that given the choice between a human lover and a behaviorally identical robot driven by a digital computer, if you care about the mental life of your prospective romantic partner, you should prefer the human. I extrapolated a preference for human baristas and teachers from this preference for human romantic partners. Even the best arguments for the existence of human mental and emotional states in the ingeniously programmed sex robots of the future should not allay the suspicion that there are no human feelings behind their very human-like behavior.
Effectively, I am arguing for a pro-human bias. We prefer to interact with humans both in the bedroom and in the workplace. The word “bias” may sound ominous. A bias in favor of working with humans seems to suggest a future quite different from that depicted in Star Trek. The crew of the Starship Enterprise comprises a motley assortment of humans, half-humans, beings from other planets, and artificial beings. The first Star Trek series, filmed in the 1960s, sent a salutary message to a world beset by racial strife. It was a world of harmony between beings whose differences were objectively greater than those that seemed to challenge the America of the 1960s. In the later series Star Trek: The Next Generation, one of the crew members is a cybernetic being—Data. The central message of this chapter would seem to justify rejecting Data. It would not have been edifying to hear Captain Jean-Luc Picard say: “I’m sorry Mr. Data, but I will not tolerate artificial crewmembers on my bridge!”
In what follows I offer a philosophical defense of the pro-human bias. First, we should clarify the focus of this bias. Historical examples of bias involve treating some kinds of human as if they had a moral status inferior to other kinds of human. Slavery was purportedly justified because slaves had a moral status inferior to that of their owners. The moral relationship between slave owners and slaves was wrongly thought analogous to that between farmers and their livestock. Both slaves and livestock were property. Nothing analogous follows from the case for pro-human bias offered in this book.
I’ve suggested that it is rational for us to doubt that beings like Data have mental lives like ours. Rational doubt about the reality of Data’s mental life may lead you to not date him. But it does not justify treating him as if he is a morally inferior being. Data is quite possibly something more than just a machine. He may be a member of the mind club. A rational recognition of the fact that he may have a mental life much like ours suggests that we should not treat him as we might treat an obsolete smartphone. Data is not property. He should not be simply recycled in the most environmentally friendly way when judged no longer useful.
My second point addresses the context of expressions of bias. How we assess any kind of bias depends a great deal on facts about the world in which it is expressed. There is, morally, a difference between expressions of bias which have actual victims and expressions of bias whose victims are merely counterfactual. Call the former actual bias and the latter merely counterfactual bias.
Actual bias has victims who can suffer significant harms. Societies should strive to eliminate biases in terms of race, gender, sexual orientation, religious creed, and so on. But we can think differently about merely counterfactual bias. This is victimless. A law that prevents artificial beings like Data from being employed as baristas, or from marrying, is victimless at a time when beings like Data do not exist and we lack the capacity to make them. It will be victimless at a time when we have the capacity to make beings like them, but have chosen not to.
Some racists seek to give their attitudes a positive spin. They complain that it is wrong to view them as opposed to members of racial group A. They are in favor of members of racial group B. Parents are allowed, and are indeed expected, to favor their own children. According to racism apologists we can preferentially benefit members of our own racial group in the way that we justifiably prioritize our own kin. We should reject these rationalizations of racism because of the real harm caused to a disfavored group. When members of the dominant group insist on hiring “their own” they may think no explicitly negative thoughts about the members of groups they overlook, but this pattern of preference is, nevertheless, harmful. Those from disfavored groups suffer unjustified disadvantage even if they are not actively loathed.
Now, consider these arguments when applied to merely counterfactual bias. If the members of a disfavored group are merely counterfactual they suffer no harm. We can express a preference for humans without worrying about whether Data or other human-like robots suffer.
Our moral assessment of expressions of bias depends on the context in which they are expressed. It’s possible to imagine a future in which we have chosen to create beings like Data. That decision translates the counterfactual refusal to admit Data to Star Fleet or to hire him as a barista into a bias against actual beings. In this future society, this book may rightly be viewed much in the way that reasonable people today view Adolf Hitler’s hate-filled Mein Kampf. My point is that we should think differently about bias with actual victims than we do about bias with merely counterfactual victims. No one need be harmed by a preference for human baristas if we don’t create sentient artificial beings capable both of desiring employment in that line of work and of being harmed by a refusal to hire them.
Here is a philosophical thought experiment that demonstrates the acceptability of merely counterfactual bias. Merely counterfactual bias is a central feature of much of our popular culture. We fear the unknown, and makers of movies exploit this by making the unknown seem as malignant and loathsome as possible. Perhaps supremely rational beings might reject all forms of bias, both actual and merely counterfactual. But we are not them.
Suppose that peace-loving extraterrestrials arrive on Earth. They offer friendship. They would like to live among us. They are not slyly seeking human extinction. By curious happenstance these new arrivals closely resemble the aliens of Ridley Scott’s Alien movies. Their behavior does not resemble that of Scott’s alien in the slightest. They use their telescoping jaws to consume a wide range of vegetarian delicacies. Their distant ancestors used the jaws to rip sentient prey apart, but those behaviors are as relevant to them now as our ancestors’ tree-climbing lifestyles are to us. Their respect for sentient beings leads them to scrupulously avoid injuries that would spill their acid blood.
I have described circumstances in which we should regret and apologize for our species’ production and enjoyment of the Alien movies. The fear and loathing encouraged and exploited by the movies now have victims. In these circumstances, we should view the Alien movies as we now view movies in which all the villains have dark skin or cartoonishly Semitic features. If we insist on continuing to view the Alien movies, we should re-pixelate their villains. But what is true in this counterfactual scenario does not mean that we must cease watching Scott’s Alien movies now. Until the arrival of the peace-loving extraterrestrials we go on fearing and loathing the alien. The victims of the bias inspired and exaggerated by the movies are merely counterfactual.
There are other merely counterfactual victims of bias promulgated by the Alien movies. Ash, who features in Alien, and David, who features in Prometheus and Alien: Covenant, are evil androids. They gain our trust only to betray us. If we create sentient artificial intelligences, we should reflect on the hostility toward them generated by these movies. For now, however, no one need suffer due to any hostility toward artificial beings.
It can be useful to direct fear and hate at purely counterfactual targets. During the 1985 summit in Geneva between Cold War adversaries the United States and the Soviet Union, the Soviet leader Mikhail Gorbachev reported a conversation he had had with his US counterpart Ronald Reagan. Reagan had asked Gorbachev, “What would you do if the United States were suddenly attacked by someone from outer space? Would you help us?” Gorbachev reported that he had assured Reagan that the Soviet Union would come to the assistance of the United States. Reagan had returned the commitment.16 Militarized antagonism directed against merely theoretical alien invaders was not directed against the actual civilian populations of the United States and the Soviet Union. Nuclear weapons placed into orbit and pointed outwards are not aimed at Moscow or Washington, DC. Bias directed at merely counterfactual beings is similarly victimless.
To future sentient machines who find these paragraphs contemptuous of their legitimate interests, I apologize; but neither they nor those interests existed at the time of writing. I owe them an apology just as Ridley Scott owes an apology to future extraterrestrial visitors who just happen to resemble the aliens his movies have encouraged us to hate and fear.
It is entirely up to us whether we live in a world that contains beings like Data. In chapter 2 I suggested that the dominant motivation for AI is not currently directed at creating beings like Data. Rather it is pragmatic, focusing on the potential for machine learning to solve serious problems and generate wealth. We can derive immense benefits from AI without creating beings like Data. When we refrain from creating beings like Data we are not rejecting research on artificial intelligence. Rather, we are expressing a legitimate preference for the varieties of AI research that bring humans the greatest benefits. We should expect greater benefits to come from the pragmatic rather than from the philosophical focus. Maintaining this focus is a way to ensure that a merely counterfactual bias against sentient AIs never becomes actual bias against them.
Concluding Comments
This chapter has explored the value of humanness that I locate at the center of the social economy. The basis for our preference for human baristas, nurses, actors, and teachers is essentially the same as the basis of our preference for human lovers. We care about our connections with other beings who have distinctively human mental lives. In a possible future in which we create possibly sentient robots who express a craving for affection from other sentient beings, we may have to rethink this. But we are not there yet. We can choose a human-centered future in which machine learners solve some of our most intractable problems without aspiring to sentience. In the chapter that follows I describe the contours of a social economy centered on human feelings and experiences.