The Digital Revolution is transforming human lives. Here I define the Digital Revolution as the widespread and rapid replacement of mechanical and analog electronic technologies by digital technologies. Ground zero for these upheavals and transformations is the digital computer. But the revolution’s effects reach far beyond the stereotypical desktop word processor. Digital technologies are radically changing the ways that we share information, travel, treat disease, and party. An observable acceleration in the power of computers suggests that the transformations and dislocations we are currently experiencing are just the beginning.
Technological revolutions are, in the account presented in this book, more than just interesting events in human history. They are history’s motors. The Renaissance jurist, statesman, and advocate of science, Francis Bacon, made an emphatic statement about the significance of technological advances to human affairs. Writing about the significance of his own age’s big technological innovations—“printing, gunpowder, and the magnet”—he said “no empire, no sect, no star seems to have exerted greater power and influence in human affairs than these mechanical discoveries.”1 The Digital Revolution is influencing human affairs and seems set to have even greater influence. Thrilling gadgets and apps are merely its most visible manifestations.
We have a choice about how to view the Digital Revolution. This choice arises from the fact that it is a complex, multi-strand event that comprises many individuals and groups of individuals, and many technologies and categories of technology. The Digital Revolution’s origin is difficult to determine with any precision. We know that its early stages featured the theoretical insights of Alan Turing and the tinkering of various geniuses at Bell Labs and Xerox PARC in the middle years of the twentieth century.2 Perhaps its beginning can be traced back as far as Charles Babbage and Ada Lovelace in nineteenth-century England. Babbage was the first person to attempt (unsuccessfully) to build something that we today would acknowledge as a computer, and Lovelace may have been the inventor of computer programming. A great deal has happened since these modest beginnings.
This book takes the long view of the Digital Revolution. The long view permits us to address questions about where digital technologies are taking us. It is not overly concerned with the specifics of today’s buzz-worthy digital technologies. In the long view, the Macintosh computer, the Twitter social networking platform, and the Oculus Rift virtual reality headset feature only as undifferentiated parts of the onrushing blur of progress in digital technologies. An enhanced sense of the broad meaning of technological change compensates for the long view’s lack of detail. It offers an awareness of the grand sweep of technological change and its implications for human affairs. No fact about how integrated circuits process information can capture their lasting effects on human experience and on the arrangements of the societies that host them. In the long view the Digital Revolution is the latest event in a historical sequence that begins with the Neolithic Revolution—which brought about farming, permanent settlements, and social hierarchies—and progresses to the Industrial Revolution, which brought on mechanization, mass production, and globalization. When we take the long view, we hope to see what is truly lasting about the Digital Revolution. Once we have gotten used to, or recovered from, the serial shocks of all these digital novelties, what effects will the Digital Revolution have on humanity?
The focus of the long view on trends means that it has little to say about some of our more immediate concerns. Suppose that you hear someone shouting “Help, I’m being assaulted!” It’s not particularly useful to respond “Don’t worry! Crime in this area is down 80 percent.” Information about crime trends does nothing to address the immediate concerns of someone who is currently being mugged. Similarly, pointing to long-term trends does not offer much to someone who has just been put out of work by automation. The long view nevertheless offers something missing from accounts that focus on immediate effects. It directs attention away from the digital trees to gain a better understanding of the digital forest. A metaphorical squinting of the eyes brings more general truths into focus.
In pictures of Earth taken from space we can see the largest human-made objects—the Great Wall of China, the Pyramids of Giza, and Dubai’s Palm Island. Also apparent are some of the patterns of human activity—the intense night-time lights of the cities of North America and Europe and the eerie absence of lighting in North Korea’s night sky. At this scale, however, individual humans are invisible. Something similar happens in the long view of technological change. We lose sight of individuals. A history of the Industrial Revolution may tell us about the 204 miners who died in the Hartley Colliery disaster in 1862 when they were trapped below after an accident with the pit’s pump. We may have some abstract appreciation of the suffering caused to them and their families. But for people of the early twenty-first century interested in the long view of technological change, that disaster is mainly interesting for the lasting reforms it prompted. It led to legislation requiring coal mines to have at least two independent paths of escape. Viewed from the second decade of the twenty-first century, the miners of Hartley Colliery are dead and gone whether the disaster occurred or not.
If we are interested in what is truly lasting and unprecedented about the Digital Revolution, we must look beyond the consternation and excitement that are predictable consequences of technological churn. We recover from some technological shocks promptly and fully. The Harrods department store offered brandy and smelling salts to help people to recover from the shock of a ride on its first escalators, installed in 1898. Some of today’s escalator travelers might enjoy a brandy upon arrival, but a ride to a department store’s upper floor is no longer something anyone needs to recover from. My principal focus is on a challenge to human agency. I argue that the Digital Revolution poses a threat to humans as doers, as authors of our own destinies. We make significant choices about ourselves and the world, in large part because we reason in certain ways.
There are two ways to think about this challenge to agency. First, there is a threat to human agency writ small. Here the economic value of human agency is the principal target. Can humans keep their jobs if forced to compete against machines capable of performing every job-related task better and more cheaply? Fears about technological unemployment have led to forecasts of the abolition of a disparate collection of jobs.3 On one such list, waiters and waitresses face a 90 percent risk of finding their work automated by 2035.4 Technological unemployment resulting from the Digital Revolution is no respecter of social station. The threat for chartered and certified accountants over the same period is still greater at 95 percent. Twenty years may be long enough for many to reach retirement age, but they should think twice about urging daughters or sons to follow them into the family trade. Pointing to a favored few professions that seem to be immune to replacement by digital technologies does not respond to this threat. Perhaps humans will remain supreme in the production of abstract art or the performance of stand-up comedy. But many human workers seem both eminently and imminently replaceable by digital machines.
Much of the threat to human agency emerges from developments in artificial intelligence, and more specifically in machine learning, a field that aims to produce machines capable of learning without explicit instruction from human programmers. We see precursors of a future in which human agency has lost much of its value in the autopilots that take increasing responsibility for flying our passenger jets, in the driverless cars that seem likely soon to be traversing our highways more efficiently and safely than cars driven by humans, and in the computers we entrust with finding patterns in data about who does and who doesn’t get melanoma, patterns beyond the perception of the most dedicated and perceptive human medical researchers. Much of the disruption caused by the Industrial Revolution came from its automation of muscle power. A power loom run by a comparatively unskilled operator did the work of many skilled handloom weavers. The Digital Revolution is automating human mind work. Perhaps we will conclude that the computers that displace human mind workers are themselves mindless. But even so, they mindlessly do human mind work. The future promises digital machines that do this work both to a higher standard and more cheaply.
Mind work is not an all-or-nothing category. All work that humans do requires the use of our minds. A worker whose sole task is stacking bricks would be completely unable to do her job if she could not understand the instructions given her about where to stack them. But some jobs have higher intellectual content than others. Mental labor makes a smaller contribution to brick-stacking than it does to accounting or investigative journalism. The Digital Revolution poses a threat to jobs whose intellectual content is high, jobs that typically demand prolonged education and provide high rates of pay.
Perhaps there will always be some things that humans can do that computers cannot. Humans can not only work but also whistle while they work, whereas computers, famed for their multitasking, may never dual task in precisely this way. The real threat to the human job applicants of the Digital Age concerns the economic value of that whistling. We are talking about potential futures in which employers choose to forgo the whistling to get more work done faster and more cheaply.
Second, there is the threat to human agency writ large. What’s at issue here is our control over our collective destiny. The civilization that emerged from the Industrial Revolution was, like the one that preceded it, one in which humans made the key choices. Technological change redirected human agency but did not make it irrelevant. Advances in artificial intelligence seem to lead to a progressive erosion of human agency. We seem to face a future in which control over human societies and lives is incrementally and inexorably surrendered to digital technologies with manifestly superior powers of decision-making. The decisions we make about how to get to an unfamiliar destination will be restricted to the decision to speak its name into the navigation system of a driverless car. Perhaps the automated deciders of the Digital Age will respond to today’s violent and competing populisms by solving our big problems for us. Will we address the problem of climate change by complying with the commands of a machine intelligence with access to the totality of data about the global climate system?
Consider some thoughts of Apple co-founder Steve Wozniak. Wozniak was responding to a fear, expressed by the astrophysicist Stephen Hawking and the philosopher Nick Bostrom, that advances in artificial intelligence might turn the plot line of the Terminator movies into reality.5 We would create an AI with the capacity to improve itself. This AI would rapidly become more powerful than all humanity combined. It might decide that the world is a better place without us.6 Wozniak took a rosier attitude toward these imminent digital superintelligences. Speaking in 2015, he said of these expected future machine intelligences: “They’re going to be smarter than us and if they’re smarter than us then they’ll realize they need us.” In Wozniak’s vision of the future, humans are not incinerated by AI-triggered nuclear exchanges. Rather we become cherished and mollycoddled pets of superintelligent AIs. “We want to be the family pet and be taken care of all the time.”7 The artificial superintelligences that will predictably emerge from the Digital Revolution will cater to this need. Wozniak reflects on the “filet steak and chicken” he feeds his dog, clearly relishing the delights that a future AI might cook up for him.
Wozniak’s vision of our future relationship with artificial superintelligences is sunnier than the Terminator vision. Much better for the humans of the Digital Age to be like dogs dining on filet steak than those deemed more trouble than they are worth and administered lethal doses of pentobarbital. However, the Wozniak and Hawking visions are equal affronts to those who hope for a vision of the future in which humans retain authority over the machines and over our own destinies.8 One 2012 estimate placed the global number of dogs at 525 million.9 The countless choices made by these animals have zero consequences for the destiny of global civilization. It seems better to play the role of pampered poodle than to be incinerated in an AI-triggered nuclear exchange, but both are visions of the future in which we surrender authority over our collective destinies. In times of high stress, you may find yourself looking at your contentedly dozing pet and uttering the words “I wish I was a dog!” But in times of greater confidence we fight for our control over our destinies, both as individuals, and collectively.
These forecasts of a radically disempowered humanity may seem the stuff of science fiction. Yet they are predictable consequences of the development of digital technologies. The long view of the Digital Revolution directs attention away from today’s limited digital deciders. It focuses on what they will predictably become. It warns against a dehumanized future dominated by the value of efficiency in which we realize that we do better without other humans. It’s effectively an indolent path to extinction in which we incrementally cede our places to our robotic betters.
If we are to come to terms with the threat to human agency from computers, then we must avoid a bias in our assessment of future threats. Humans have a built-in tendency to suppose that things will continue as they are now. We tend to be overly influenced by the evidence presented directly to our senses. The experts tell us that we face a civilizational threat from climate change. But our beach houses are not yet under water and the shelves of our supermarkets are laden with fresh vegetables. The climate is supposed to be warming. But this morning was very chilly indeed. Even those who intellectually accept the danger of climate change fail to muster the kind of response the problem demands. We find it difficult to accept that the future could be so bad when everything now seems just fine.
This present bias leads us to understate the machines’ threat to human agency. Many of today’s artificial intelligences are quite stupid. These are no threat to our jobs. When we take the long view, we compare what humans may become with what machines may become. Humans can improve—our flexible brains permit us to learn new tricks. But the rate of improvement of the machines is especially steep. The failings of present machines should not blind us to the expected capacities of their imminent descendants. Our position in respect of the machines that may take our jobs may be analogous to that of chess grandmasters in the early 1990s. We should combine arrogance about the capacities of some of today’s computers with humility in the face of the capacities of their descendants.
As we seek to automate familiar human tasks, we are especially aware of the machines’ mishaps. But human drivers are also far from perfect; we are accustomed to their mishaps. The programmed oversights of self-driving cars make world news.10 Unless they kill a princess, the lethal errors of human drivers don’t attract worldwide attention. No digital technology is logically immune to error. But perfection is not a standard we should expect. Perfect safety is unachievable, but the standard of being much safer than human drivers is both achievable and expected. Perhaps some of Deep Blue’s programmers fantasized about a machine that could play perfect chess, a machine that could demand the resignation of its opponent before its first move by presenting a detailed description of her inevitable checkmate. This may not be possible. But, as Deep Blue and its successors have shown, the standard of playing better chess than the best human player certainly is. The machines can’t beat God, the hypothesized perfect chess player, but they can beat any human.
The present bias against the capacities of future machines combines with an exaggeration of our own abilities. I call this bias a belief in human exceptionalism. A sober comparison of today’s humans with today’s computers reveals things that we do with ease that computers are hopeless at. Believers in human exceptionalism accept that computers have already overtaken us in many areas and are fast gaining on us in others. But they insist on a core of human capacities that will remain forever out of the machines’ reach. These will keep us employed in an age of ubiquitous supercomputers.
A belief in human exceptionalism inclines us to prefer quasi mystical names for our most cherished mental capacities. Machine thinkers comply with algorithms. Human thinkers possess genius, they intuit answers, and they demonstrate wisdom. “Genius,” “intuit,” and “wisdom” may be acceptable as brand names for digital products. But we resist the suggestion that computers could manifest these traits for real. If we accept this reasoning, a computer may execute a billion computations per second, but it can never be wise. Popular culture panders to this belief in human exceptionalism. Captain Kirk outwits an ostensibly superior alien intellect by proposing to implement the “Corbomite Maneuver,” a piece of nonsense invented by his human brain that confuses the alien. Kirk tells the aggressor that the Enterprise contains corbomite—an unexplained substance that destroys any attacker. The alien’s logic-limited brain seems to prevent it from seeing through this ruse.
The belief in human exceptionalism purports to construct a barrier that would prevent machines from replicating or mimicking one of the celebrated achievements of the human scientific imagination. The German chemist August Kekulé was trying to figure out the structure of the benzene molecule. He knew that it was composed of six hydrogen atoms and six carbon atoms. He also knew that each hydrogen atom bonded with one other atom and that each carbon atom required three partners. He was initially stumped, but then a daydream about a snake eating its own tail delivered to him benzene’s circular arrangement of atoms. Machines may crunch through all the possible locations of hydrogen and carbon atoms to determine the structure of benzene, but they won’t have daydreams.
A sense of our own specialness leads us to believe that we will retain this advantage into the Digital Age. It’s a bit like the confidence of pre-Copernican astronomers that, whatever discoveries we made about creation, Earth would remain at its center. I argue that this bias in favor of humans is as unsustainable as pre-Copernican geocentrism. It won’t benefit the human workers of the Digital Age to mumble mysteriously about improving productivity by implementing the Corbomite Maneuver.
The expected advances that are topics of this book put pressure on our belief in human exceptionalism. Some of the achievements of our species of which we are proudest involve the discovery of patterns in complex phenomena. Albert Einstein detected patterns in the universe invisible to his contemporaries. Some believers in human exceptionalism will airily assert that no machine could ever have come up with the general theory of relativity. But pattern-detection is a forte of machine learners. They are built to find patterns in hugely complex sets of data. We may refuse to acclaim them as geniuses when they find an especially obscure pattern in a vast set of data about the genetics and lifestyles of people who acquire autoimmune diseases. But this will not prevent them from finding the patterns. Perhaps judges of the future will accept that genius is as genius does.
Today we look to highly trained humans to advance our understanding of disease. But there’s no rule of the universe that requires that treatments for humanity’s most feared diseases must be within humanity’s powers of inference. Moving into an age in which much of the mind work about cancer is done by machines could be very good news for our treatment of the disease. We may get therapies beyond the imaginative and logical powers of any human intellect. But it is bad news for our view of human thought and imagination as central to the treatment of our diseases. We will accept the conclusions of the machine learner tasked with treating disease much in the way that the faithful are supposed to accept divine commands. In both cases, ours is not to reason why. We must reflexively make the ceremonial offerings or swallow the pills.
My goal in this book is to describe what must be done to preserve human agency in the Digital Age. How are we to avoid writing ourselves out of our own story? I do not claim to possess a crystal ball that permits me to predict every detail of the Digital Age. But we can predict that the Digital Revolution will radically remake work and redirect human agency. That much we cannot change. We can nevertheless influence how it will remake work, how it will redirect human agency. This book presents a vision of future societies whose human citizens have rejected the path of cosmic irrelevance. The preservation of the human contribution will not require a rejection of the technological wonders brought by the Digital Revolution. Rather it will require careful consideration about the domains of human activity that we surrender to the machines.
Societies emerging from the Digital Revolution should be organized around what I call social-digital economies. These economies feature two different streams of economic activity, centered on two very different kinds of value. The principal value of the digital economy is efficiency. It focuses on outcomes and cares about means only insofar as they are reflected in outcomes. One process might be more efficient than another because it produces more of a valued product, or produces it more cheaply, or requires fewer raw materials. The principal value of the social economy is humanness. It is founded on a preference for beings with minds like ours, a preference to interact with beings with feelings like ours. We enjoy the company of other members of “the mind club,” a phrase I take from psychologists Daniel Wegner and Kurt Gray. Wegner and Gray define the mind club as “that special collection of entities who can think and feel.”11 When we hear that an octopus is conscious we take a special interest in it. We can wonder what it might be like to think octopus thoughts and to feel octopus feelings. Tablet computers are fascinating in many ways, but they lack this kind of appeal for us. We take an extra-special interest in members of the human chapter of the mind club, beings with minds like ours.
This preference for members of our chapter of the mind club operates in the personal domain—it guides our selection of lovers and friends. It also operates in the domain of work. If we think about it, we want our baristas and nurses to have minds like ours too. We will rightly reject the inefficiencies of humans when they stray into parts of the economy that emphasize the skills of the computer. But we should have the courage to reject digital technologies when they trespass on distinctively human activities. We should question the suggestion that advances in artificial intelligence will soon fill our societies with machines that have human feelings and thoughts. Our contract with the machines should be one in which we do the jobs for which feelings matter and they take on many data-intensive tasks for which feelings are irrelevant.
The social-digital economy is a view of humanity’s future informed by aspects of our pre-civilized past. It aims to reinstate some of the social aspects of the foraging lifestyle. Efficient digital technologies will shunt humans out of jobs that don’t require direct contact with other humans. We will be free to take up jobs in a radically expanded social economy.
This expanded social economy could respond to one of the defining ills of our time—social isolation. Many of the riches of our modern age have come from the denial of our social natures. John Cacioppo, a University of Chicago psychologist, and William Patrick call humans “obligatorily gregarious.”12 They explain that a zookeeper asked to create an enclosure for Homo sapiens would “not house a member of the human family in isolation, any more than you house a member of Aptenodytes forsteri (Emperor penguins) in hot desert sand.”13 Cacioppo and Patrick say “As an obligatorily gregarious species, we humans have a need not just to belong in an abstract sense but to actually get together.” This obligatory gregariousness is a consequence of evolution. Before the Neolithic Revolution foraging was a human universal. Foraging is an intensely social existence. Foragers live very much in each other’s lives. Their shelters tend to be temporary, lacking the permanent walls that separate the nuclear family from outsiders. They share food. Isolation is one of the worst things that can happen—an ostracized forager is almost certainly a dead forager.
We carry the emotional and psychological vestiges of this forager gregariousness into our high-tech times. But many of the resulting needs are unmet. Cacioppo and Patrick say “Western societies have demoted human gregariousness from a necessity to an incidental.”14 They suggest that we see the effects of this demotion in statistics on mental health. Today, isolation causes misery and shortened lifespans. Feelings of social exclusion can manifest as anger or violence.
Technological progress has abetted the demotion of human gregariousness. The less sophisticated technologies of foragers mean that their survival depends on how they get on with other human beings. In his influential book on Americans’ increasing retreat from political and social engagement, Bowling Alone, the political scientist Robert Putnam observes “In round numbers the evidence suggests that each additional ten minutes of commuting time cuts involvement in community affairs by 10 percent—fewer public meetings attended, fewer committees chaired, fewer petitions signed, fewer church services attended, less volunteering, and so on.”15 The commute is a consequence of the car, one of the central wonders of the Second Industrial Revolution. Workers with cars no longer had to live in cramped physical proximity to their places of work. They could spread out into the suburbs and commute to work. At the time of the suburb’s invention, it wasn’t obvious how much time people would soon be spending sitting alone, wedged into slow-moving rush-hour traffic.
The names we’ve given to some Digital Revolution technologies suggest that it might restore some of this gregariousness. Social networking technologies are called “social” for a reason. Facebook is all about connecting and sharing. Enhanced connectedness is one of the primary rationales for the Internet. But the varieties of connectedness offered by the Digital Revolution do not give us what foragers get out of their face-to-face, in-your-face, social interactions. Technological mediation makes those connections less direct. It purges many of the foragers’ trappings of sociality. A smiley face emoji is not the same as a grin indicating assent to a proposal. There is no opportunity to place a hand on the shoulder of someone whose facial expressions indicate doubt and concern. The accumulation of Facebook friends and Twitter followers brings few of the benefits offered by forager bandmates. When foragers want something difficult done, they must make direct face-to-face connections with bandmates. As they present a plan they interpret facial expressions to determine how likely verbal assurances are to be followed by actual help. Enlisting additional help involves much more than writing an email and wondering “Should I Cc Malika in?” Foragers would not tolerate the anonymized bullying and stalking behaviors that proliferate on the Internet.
We seem to be playing the roles of zoo animals depressed by their unstimulating environments. Among the most important evolved behaviors of animals are those whose purpose is finding things to eat while avoiding things that eat them. Zoos give them ample calories and the security of a cage that isolates them from any natural predator. There are zoo breeding programs for animals that keepers deem especially worthy. The bored pacing of tigers and collapsed dorsal fins of adult male orcas send a clear message of psychological malaise despite these pluses. The technological mediation of our relationships and digital substitutions for human interlocutors seem to short-change us in a similar way.
We shouldn’t overstate the implications of this “forward to the past” vision of the human future. There was much that was bad about the lives of pre-Neolithic foragers. They were typically a few failed hunts and a few unproductive days of gathering away from starvation. It would be absurd to say that we should dump our smartphones, cars, and dishwashers and attempt to reinsert ourselves into the ecological niche once occupied by some tens of thousands of pre-Neolithic foragers. For a start, the mathematics don’t work. There are, as of early 2018, 7.6 billion humans. Who among us would get to live to enjoy the splendors—and the horrors—of pre-Neolithic foraging? We should be exceedingly grateful for the gifts of technological progress that separate our lives from theirs. But this gratitude should not prevent us from acknowledging a valuable feature of our former foraging existences that we could seek to reinstate. The Digital Revolution offers an unprecedented opportunity to do this.
We won’t be making do with the forager’s spear and temporary shelter. The digital part of the economy places a premium on the efficiencies brought by powerful digital technologies. We should expect a progressive displacement of human workers from this side of our social-digital economy. We cannot hope to match the efficiencies of the machines in these domains. Human pilots will be unable to compete with the automated flight systems of the future. Machines will perform our keyhole surgeries to a higher standard than any human doctor. We will be free to take up roles in an expanding social economy that replicates some of the social aspects of ancestral human foraging communities. Fabulously efficient digital technologies will be humming away in the background of this recovered gregariousness.
As human workers are displaced from lines of work centered on efficiency, we should be free to take up new varieties of work that meet human social needs. In today’s technologically advanced societies, “social worker” is the name of a job that addresses the most extreme harms caused by social isolation and indifference. In a Digital Age centered on a social-digital economy there should be a great diversity of social work. Human social needs are varied and complex. It is unfortunate that jobs placing humans in direct contact with other humans are foremost among those that our current emphasis on efficiency leads us to eliminate. Automated checkouts and customer service AIs are taking the places of workers who deal directly with other human beings. My purpose in this book is to show that machines will always be poor substitutes for humans in roles that involve direct contact with other humans. Here we value connections, however fleeting, between human minds. We care about what’s going on in the minds of those who provide services in the social economy. Efficiency is a factor in such interactions, but it is not the only consideration. You want the person with whom you lodged an order for a café latte not to forget to make it. But you value the human interaction that occurs as it is handed over. When a drive for increased efficiency causes us to do without human workers, we leave ourselves ill-prepared for the kinds of digital future that we should be seeking. If we want a truly social Digital Age, then these are the roles that we should be preserving and promoting. We should aim for a future in which machines do much of the heavy lifting and hard calculating but humans find work meeting the many social needs of other humans.
Must this socializing be work? Some argue that we should respond to digital advances by offering humans a universal basic income. Martin Ford argues that a principal role for the humans of the Digital Age will be to go shopping.16 We will use our shares of the profits generated by the robots to maintain demand for the stuff they make. I am skeptical of this proposal. It overlooks one of the great benefits of the work norm, the idea that our children should grow up expecting to join the work force. Humans may be obligatorily gregarious, but when we are left to our own devices that gregariousness tends to be parochial. We seek out people we know or people who resemble us in ways that we care about. We fear strangers. Work requires us to get along with strangers. We must cooperate with them to achieve shared goals. Work is part of the success story of the diverse, multi-ethnic societies of the early twenty-first century. Without the social glue of work, some other way will need to be found to prevent our societies from fracturing into sub-communities defined by ethnicity, religion, and other socially salient traits.
The idea of a social economy brimming with new jobs may seem fanciful. It doesn’t seem realistic to suppose that the workers of the near future will be told “The bad news is that we’re going to let you go from your job at the supermarket checkout. The good news is that we’re creating a much more rewarding job for you in the emerging social economy.” The social-digital economy is not a prediction. Rather, it’s an ideal about how human societies could be in the Digital Age. When Martin Luther King Jr. intoned “I have a dream” it was not appropriate to respond “Yeah right, dream on.” We should reject overconfident forecasts of the societies of the Digital Age. We should nevertheless approach the Digital Age with an attitude of empowered uncertainty. The social-digital economy is an ideal sufficiently attractive to be worth fighting for. There are many ways in which we could fail to realize the ideal of the social-digital economy. We could resign ourselves to one of the many dystopian visions in which most of the humans of the Digital Age have existences that are both pointless and impoverished. In some of these visions, all the wealth generated by digital machines goes to the few who own them. Reflection on the increasing inequality of our age does present this possible future as the path of least resistance. Alternatively, we could seek to create a Digital Age in which we are surrounded by fabulous digital technologies but still manage to enjoy intensely social existences. The route toward the social-digital economy will not be easy. It will require tough choices. We must muster the collective will to reject some of the superficially appealing offers that digital technologies present to us. Were I to place a wager, I would bet against Digital Age societies centered on social-digital economies. I would also bet against our collectively finding an adequate response to human-caused climate change.
I find it hard to see, in the responses that we have collectively managed thus far, anything sufficient to prevent an ecological catastrophe. But when it comes to the future of the human species I’m not a betting man. In both the case of climate change and that of the Digital Revolution’s threat to human agency, the rewards of success and the penalties for failure are so high that they call for our greatest efforts. We must do all we can to create societies centered on social-digital economies.
I am a philosopher and I treat questions about human agency in the Digital Age as philosophical problems. However, my attitude toward philosophical problems is somewhat different from that taken in other philosophy books.
I view philosophy as at its best when it plays an integrative role. An understanding of the threat to human agency posed by the Digital Revolution should draw on many different sources of information. This book makes use of insights from experts on digital technologies and large-scale technological change, social psychologists, economists, evolutionary biologists, and philosophers of mind. Philosophers have the intellectual skills to integrate all these different kinds of information into a coherent approach to the Digital Revolution’s social transformations. The questions addressed by philosophers are noteworthy for their variety. Philosophers are archetypal academic generalists. We generate no data. When we address questions such as the nature of art, the existence or nonexistence of subatomic particles, and the possibility of a just state, we are frequently challenged to properly assess the significance of ideas from outside philosophy and to integrate them with ideas of different provenance. Philosophers are, in effect, informational and theoretical brokers, facilitating exchanges of ideas between academic disciplines that don’t normally find themselves in contact.
One way to go wrong in dealing with problems like the human consequences of the Digital Revolution is to fail to properly acknowledge an important source of information or to overstate the significance of a favored type of information. In chapter 1, I criticize the economist Robert Gordon because his forecasts rely too heavily on historical economic data and pay insufficient attention to the kinds of trends affecting digital technologies. Gordon displays excellent understanding of the economic data he has gathered but he offers unreliable advice about the future because he does not understand the significance of expected developments in artificial intelligence.
There is another important sense in which this is a philosophy book. My proposal of a social-digital economy makes essential use of the insights of philosophers of mind on the nature of phenomenal consciousness—the “what it’s like” aspect of human thought. I argue that we should not expect machines to meet our need to interact with others who have feelings like ours. My integrative approach leads me to treat philosophical expert evidence about phenomenal consciousness in the same kind of way that I treat the expert evidence of economists, evolutionary biologists, technologists, and social psychologists. We need an appreciation of the relevance of all these different sources of information, and of the relationships between them, if we are to achieve an understanding of the human consequences of the Digital Revolution. When I dispute the claims of economists about the future of work I do not pretend to advance our understanding of economic theory. This doesn’t prevent me from disputing the detail of their claims about work in the Digital Age. Nor does my use of ideas from the philosophy of mind aspire to solve deep philosophical problems about the nature of phenomenal consciousness. Rather, I make claims about rational ways for us, as humans, to respond to philosophical doubts about whether machines can have feelings like ours. Instead of solving deep problems in the philosophy of mind, I show how philosophical ideas should influence our approach to the Digital Revolution.
Chapter 1 introduces the long view of the Digital Revolution. I propose to locate it alongside other technological revolutions rightly counted as turning points in human history—the Neolithic and Industrial Revolutions. The second decade of the twenty-first century is not the ideal time to be making assessments of overall historical significance. We are at the height of excitement about all things digital and subject to a tendency to overstate the computer’s significance. The economist Robert Gordon argues that the Digital Revolution will not live up to the expectations of its enthusiasts. He compares the Digital Revolution with the Second Industrial Revolution, which was centered on electricity and the internal combustion engine. Gordon says that advances derived from the Second Industrial Revolution “covered virtually the entire span of human wants and needs, including food, clothing, housing, transportation, entertainment, communication, information, health, medicine, and working conditions.”17 On his account, the effects of the Digital Revolution are restricted chiefly to entertainment and information and communication technology; they lack the broad human and economic significance of the earlier technological revolution. I argue that Gordon sells the Digital Revolution short. The expected application of artificial intelligence to vast quantities of data points to important impacts beyond entertainment and information and communication technology.
Artificial intelligence is the focus of chapter 2. The goal of work in artificial intelligence seems easy to state—it involves the attempt to build a machine with a mind. I suggest that work in AI has developed a split personality. We can distinguish a philosophical motivation directed at creating machines with minds from a pragmatic motivation that aims to build machines that do mind work—the things that humans use their minds to do. The philosophical motivation was initially described by Alan Turing, AI’s founding genius. It is an excellent premise for a movie. But the principal focus of work on AI in the early decades of the twenty-first century is pragmatic. Pragmatists are now making machines that do mind work better than humans. Turing’s desire to create a machine capable of authentic thoughts becomes a hobby project when placed alongside the pragmatic goal of building machines capable of exploiting the wealth inherent in data and solving our most challenging problems.
Chapter 3 switches focus from artificial intelligences to the data that is the focus of their mind work. It defends the wisdom in the popular Internet saying “Data is the new oil.”18 Data is the Digital Revolution’s defining form of wealth. Corporations’ holdings of data are coming to matter more in assessments of their market value than do their holdings in varieties of wealth specific to earlier technological revolutions, such as land or oil. I argue that some of us are slower on the uptake in our understanding of this new form of wealth, and this leaves us at a disadvantage when dealing with those who have a better understanding of data’s value. We renounce control over our data to Google, Facebook, and 23andMe, much in the way early twentieth-century Texas farmers happily accepted paltry sums for the right to prospect for oil on land that wasn’t of much use for ranching. I consider the relevance in the long view of the aphorism “information wants to be free.” Stewart Brand, Kevin Kelly, and Jeremy Rifkin expect a future in which data resists attempts to control and limit access to it. I suggest that we assess such claims against the broader political and economic context in which some people do very well out of asserting exclusive control over data. I consider Jaron Lanier’s suggestion that we should respond by charging a fee—a micropayment—for the privilege of using our data. Streams of micropayments would flow back to the originators of Internet content. I challenge the practicality of this idea.
Chapter 4 presents the Digital Revolution’s threat to human agency. Considered at its most prosaic, the threat to human agency is a threat to our jobs. The superlative mind workers of the Digital Revolution reduce the economic value of human agency. Why pay a human to do a job that can be done better and more cheaply by a machine? We have many precedents for the devaluation of human agency by technological progress. I consider the inductive case for optimism about the Digital Revolution advanced by many economists and commentators on technology. It’s difficult to imagine the jobs that the Digital Revolution will create. But, if the past is a guide, we can assume that new jobs will be created. The grandchildren of accountants and waiters will be relieved that they don’t have to make their livings inserting numbers into spreadsheets or waiting on tables. I reject this optimism. The Digital Revolution brings new economic roles that could be filled by humans. But the protean powers of digital technologies should promptly eliminate any new jobs. The money paid to human workers creates a powerful motivation to build cheaper and more efficient digital substitutes. The higher the economic value of a new role, the stronger the incentive to automate it.
We have a debate between optimists and pessimists about the Digital Age. I argue that we should approach the challenge of automation to work as pessimists. Optimism is a therapeutic way for us as individuals to confront life’s challenges, but it is a bad way for us to collectively confront the challenges of the Digital Revolution. Paying more attention to the forecasts of the pessimists offers better insurance against an uncertain future than does the feel-good message of the economists’ inductive optimism.
Chapters 5 and 6 explore a possible haven for the human workers of the Digital Age. Thus far, we have compared humans and machines in terms of efficiency. An interest in efficiency focuses on outcomes. We focus on means only insofar as they are reflected in outcomes. An interest in increasing efficiency should lead to the progressive elimination of humans from the economy. An alternative value is humanness. The value of humanness directs us to prefer human means. Humanness matters a great deal in our most important relationships. There are many science fiction explorations of scenarios in which romantic partners are replaced by machines that efficiently perform all the actions related to romance but whose mental lives we doubt. Even if we do not fully acknowledge it, we carry this interest in human experiences into the domain of work. We assume that the doctors who tend to our wounds, baristas who make our espressos, and politicians who make decisions about our societies’ minimum pay rates have mental lives like our own. They have feelings like our own. They are members of the human chapter of the mind club. We care about efficiency in these domains too. It’s bad when your barista forgets your order or when your nurse injects the wrong medication into your arm. But we tend to respond to inefficiencies in ways that preserve human contributions. When we learn that human nurses sometimes give patients the wrong medication we don’t seek to replace them with machines. Rather we supplement them with machines that correct these errors. We seek to eliminate the error—thereby improving efficiency—while preserving the distinctively human contribution.
We should be heading toward a social-digital economy. This bifurcated economy should see the progressive replacement of human workers in nonsocial domains. Machines will fly our airplanes and perform our keyhole surgeries. Humans will retain an ongoing superiority in roles for which social contact with other human beings is central. In many cases, they will be assisted by powerful digital technologies. But we will justifiably view the contributions of these digital technologies as having a significance secondary to human contributions. Our gratitude for a service we have received will go to the human part of the team that performed it rather than to the machine.
It’s one thing to assert that the social side of the social-digital economy contains jobs that are best performed by humans. Can we be confident that there will be enough such jobs to absorb the many workers excluded from jobs that do not exploit our social abilities? I suggest that there should be. I don’t predict that there will be. If we don’t create these roles they won’t exist. One of the great problems for the technologically advanced societies of the early twenty-first century is social isolation. Humans are intensely social creatures. The conditions of our evolution involved constant contact with other humans. But the environments we have recently created tend to isolate us from each other. Lonely humans are more miserable and die younger. A social-digital economy would create new jobs defined around our social natures. Humans will find employment in roles designed to meet the many social needs of other humans.
Is a social-digital economy more than just wishful thinking? Chapter 7 offers advice about how to understand this call for the creation of a social-digital economy fit to carry humans into the Digital Age. I do not offer the social-digital economy as a prediction. The path of least resistance directs our species toward a dystopia in which the members of a small elite own all the machines and hence almost all the wealth. The rest of us are consigned to existences that are both impoverished and devoid of meaning. I offer the social-digital economy as an ideal that we can strive to realize even as we recognize that the odds are stacked against it.
One competing ideal comes in the form of a universal basic income (UBI). Perhaps the ideal of an intensely social Digital Age economy is appealing. But must that socializing be work? Some commentators look forward to a future without work. They call for a universal basic income that would redistribute to the workless some of the wealth generated by increasingly efficient machines. This book embraces a future with work. Good work provides social benefits and is therapeutic. We should complain about work that is dirty, degrading, and dull but not about work itself. The new jobs on the social side of the social-digital economy aren’t required to have the unpleasantness of many of today’s jobs. They should engage our social natures and they should also lack the objectionable features of many of the jobs most imminently threatened by increasingly efficient machines.
Chapter 8 offers some practical pointers about how to make the Digital Age more human. What can we do now to stick up for our humanity in an age of superlative digital technologies?
Chapter 9 brings together some intellectual threads. I express a hope that humanity’s next age will be named not for its dominant technological package but instead to affirm the social nature of our collective existences. Humanity could exit the late Industrial Age, pass through the Digital Age, and enter a Social Age.