4 Can Work Be a Norm for Humans in the Digital Age?
The primary goal of artificial intelligence is to make machines capable of mind work. We have explored the powerful combination of AI and data. Data is the Digital Age’s defining variety of wealth made specifically for machine mind workers. In the future, applying AI to data will predictably make greater contributions to our species’ war against cancer than will the reflections of the most brilliant and imaginative human medical researchers.
This chapter shifts focus to the threat from AI to human agency. Writ large, AI poses a threat to humans as authors of our own destinies. Writ small, this is a challenge to human mind work. If machines are getting so much better at doing mind work then how will humans, mind work’s traditional providers, get paid?
Two biases make us complacent about what humans can offer in a future made by AI. First there is a present bias about the capacities of future digital technologies. We tend to be overly influenced by the failings of current machines and insufficiently impressed by their capacity for improvement. We should avoid the error of the chess grandmasters of the early 1990s, who supposed that chess computers of the future would be handicapped by the deficiencies they observed in the machines they handily defeated. Our survey of anticipated advances in machine learning gives some indication of the problem-solving capacities of future digital machines. Second, there is a bias in favor of our own human abilities. A belief in human exceptionalism leads us to describe our mental abilities in terms that make them inimitable by machines. We grant that a machine may out-calculate us, but we insist that it will never be wiser than us. In this view, wisdom simply cannot be rendered as sequences of instructions implemented by a computer. I argue that once these biases are addressed we should be less confident about places for humans in the workplaces of the future. Faith in human wisdom and our capacity to, every now and then, have serendipitous dreams that point the way out of seemingly terminal situations can take us only so far in an age of very efficient data-crunching machines.
The mere fact that humans are paid to do work creates a powerful economic incentive to build machines that do that work better and more cheaply. Human workers come with costs that machines avoid. Many human workers have children whom they wish to feed, clothe, and educate. They like to go on holiday occasionally. Sometimes they get sick. They expect that their wages will cover these expenses. Machines do require maintenance, but they have no such expectations.
This economic argument against human work is not specific to the Digital Revolution. It was a feature of the Industrial Revolution too. James Nasmyth, the inventor of the steam hammer—a large industrial hammer powered by steam—hoped “self-acting machine tools” would help avoid “the untrustworthy efforts of hand labor.” According to Nasmyth, “The machines never got drunk; their hands never shook from excess; they were never absent from work; they did not strike for wages; they were unfailing in their accuracy and regularity, while producing the most delicate or ponderous portions of mechanical structures.”1
The threat to human work from digital technologies is worse than that posed by Nasmyth’s self-acting tools. When human workers faced the challenge of the Industrial Revolution they responded by re-releasing themselves as human workers 2.0. The protean nature of digital machines means that the strategy of turning ourselves into human workers 3.0 will, at best, offer temporary refuges in the economies of the Digital Age. AI is the digital superpower that thwarts traditional human responses to technological unemployment. When the secret sauce of machine learning is applied to large quantities of data, the machines of the Digital Age will possess a flexibility and adaptability lacked by the machines brought by earlier technological revolutions.
Searching for Work that Is Both Productive and Therapeutic in the Digital Age
It’s important to clarify what’s at stake here. I am not defending particular jobs or even particular categories of job, but rather the survival of the work norm into the Digital Age. Work is a norm for humans in the first decades of the twenty-first century. The work norm justifies an expectation that people leave school and find jobs. They will make their livings contributing to their societies. The work norm survives into times of high unemployment. Suppose that 30 percent of available workers in your society are unemployed. Your society’s governing politicians are likely to be facing intense and justified criticism. But yours is nevertheless a society in which work is a norm for humans. Parents who raise children in such a society will experience legitimate fears about their children’s job prospects. But the fact that 70 percent of eligible workers do have jobs suggests that it is reasonable for parents to raise their children with some expectation of finding work. Your society’s schools should provide children with the skills required to enter and thrive in the workforce. The work norm won’t survive into the Digital Age if you have to be Larry David, Oprah, Steven Spielberg, or Meryl Streep to get a job.
If the work norm is to be preserved into the Digital Age, we should expect sufficiently many jobs that are both productive and therapeutic. When I say that these jobs should be productive, I mean that they should not be part of some socially prescribed make-work program for humans in the Digital Age. Human workers must do more than carry clipboards and certify the outputs of machines that are extremely accurate by design. Business owners should be motivated by the economic case for employing humans. They must often consider human-free means of achieving their ends and assess human workers as worth the money. When I say that jobs should be therapeutic, I mean that the jobs of the Digital Age should be conducive to high levels of well-being. They should not be the jobs depicted in digital dystopias in which human workers endure horrible conditions and miserable pay in a despairing attempt to outcompete the machines on price.
The idea that work could be therapeutic may seem opposed to the economists’ presentation of work as possessing disutility that they view as justified by the positive utility brought by a pay packet.2 Mihaly Csikszentmihalyi and Judith LeFevre offer one very appealing account of the value workers derive from doing their jobs. They challenge the simple dichotomy between pleasant leisure and unpleasant work.3 In a study of our attitudes toward leisure and work, Csikszentmihalyi and LeFevre found that boredom was a more prominent feature of leisure than is generally supposed and that pleasure is a more prominent feature of work than suggested by the economists’ characterization. Csikszentmihalyi and LeFevre call this “the paradox of work.” They propose a psychological mechanism that explains the enjoyment of work. Csikszentmihalyi has written extensively about “flow.”4 Flow theory states that experience is “most positive when a person perceives that the environment contains high enough opportunities for action (or challenges), which are matched with the person’s own capacities to act (or skills). When both challenges and skills are high, the person is not only enjoying the moment, but is also stretching his or her capabilities with the likelihood of learning new skills and increasing self-esteem and personal complexity.”5 When one experiences flow, one often has the feeling of losing oneself in the activity being performed. There’s something about the lack of self-awareness that comes from exercising skills under these circumstances that makes flow states especially enjoyable. Obviously, it would be wrong to suppose that all work enables flow. Jobs that involve simple, repetitive tasks are unlikely to promote flow. If what Csikszentmihalyi and LeFevre say about our enjoyment of work is correct, then therapeutic jobs of the Digital Age should not treat human employees as low-cost sorters of paper clips. 
Jobs for the humans of the Digital Age must not only be economically justifiable, they should promote flow.
Defense of the work norm requires support for sufficiently many Digital Age jobs that are both productive and therapeutic. The parents of the Digital Age may have little idea of what vocations their children will choose. But they should have a realistic expectation that their children will be able to find productive and therapeutic work in societies that possess even more powerful digital technologies than those that exist at the parents’ time. We must avoid the trap that caught out some of the best human chess players of the early 1990s.
The Inductive Optimism of the Economists
The inductive case for optimism draws on many historical examples of technological progress creating more and better jobs than those it destroys. The fact that technological progress destroys jobs that we know, replacing these with jobs that we can barely guess at, causes anxiety. But we should nevertheless expect that new jobs will materialize. Kevin Kelly notes, “Today, the vast majority of us are doing jobs that no farmer from the 1800s could have imagined.”6 There is a reliable pattern of an incoming technological package creating new, unimagined, and perhaps unimaginable, jobs. Foragers in the Mesopotamia of ten thousand years ago would have despaired at the destruction of their ways of life brought by the spread of farming settlements. We can imagine them peering into Neolithic settlements horrified at their inhabitants’ undignified scraping around in the dirt. But many of the foragers’ grandchildren became contented farmers.
There’s a painful lag between job destruction and job creation. Technological progress brings technological unemployment. In an article published in 1930, John Maynard Keynes defines technological unemployment as “unemployment due to our discovery of means of economizing the use of labor outrunning the pace at which we can find new uses for labor.”7 Keynes goes on to note that this was “only a temporary phase of maladjustment.” Economic growth resulting from the introduction of the new technologies would create new jobs.
To say that new jobs will predictably come is not to gainsay the suffering caused by this “temporary phase of maladjustment.” Someone who has spent a lifetime progressing through the ranks of apprentice weaver, journeyman weaver, and finally to master weaver cannot simply delete the mental files relevant to handloom weaving and download new factory shift supervisor files. But it does place limits on the hardship. The grandchildren of handloom weavers come into the world with fresh minds ready to internalize the knowledge required by the industrial economy. Many transitional states are painful. Immigrants to a new land must leave behind familiar sights, sounds, and tastes. They must learn new ways. They are treated with suspicion by some of the inhabitants of their new home. But they and their descendants often come to view the transition as worth it. The long view places greater emphasis on the long-term benefits than on the short-term suffering. Adolescence is a transitional state in human development replete with humiliations and awkwardness. But most people are happy to have gone through it. We feel pity for the weavers of the Industrial Revolution but most of us nevertheless feel relief that we aren’t seeking work as handloom weavers.
We should be alert to the distortions of ill-informed nostalgia. Most Civil War re-enactors are happy about the lack of authentic grapeshot. Most of today’s hobby handloom weavers are similarly happy that food for their children doesn’t depend on sales of their handiwork. Perhaps societies of the Digital Age will contain people who treat filling in accounting spreadsheets as a hobby, oblivious to the harsh realities of relying on the care and maintenance of spreadsheets as a means of paying rent and schooling children. People worried about their vocational prospects in this age of increasingly capable machines may empathize with Luddite machine breakers. But, if they put serious thought into it, few of us would want government initiatives to recreate a textile industry centered on the handloom.
The inductive argument looks beyond our imaginative failures. We have recent examples of the propensity for technological progress to create jobs out of nowhere. People who lose their jobs to computers may fail to properly imagine these future jobs. Inductive reasoning suggests that they will nevertheless arrive.8 People in the 1990s knew about computers and about the Internet. But they are likely to have been befuddled by the suggestion that companies might offer full-time jobs managing their social media—getting people to “like” things on Facebook and composing tweets.
The optimists expect us to repeat our successful response to the machines of the Industrial Revolution. Theirs is a belief in an eternal frontier for human agency. As the capacities of machines expand, humans will occupy the leading edge of this expansion. Once forced out of old jobs we will find new, more challenging roles. Within a generation, we will look pityingly at those forced to make their livings doing things that machines now do so efficiently. Today we fear the anticipated passing of jobs in customer service and long-distance truck driving, but our grandchildren, doing work that we can barely imagine, will feel intense gratitude that they aren’t forced to deal with tedious complaints about faulty products or spend many hours behind the wheel of an 18-wheeler truck.
Kevin Kelly extracts this advice from the inductive argument:
We need to let robots take over.… Robots will do jobs we have been doing, and do them much better than we can. They will do jobs we can’t do at all. They will do jobs we never imagined even needed to be done. And they will help us discover new jobs for ourselves, new tasks that expand who we are.… It is inevitable. Let the robots take our jobs, and let them help us dream up new work that matters.9
Before we get busy envying our children their fantastic digital economy jobs we should consider whether there is any reason they might not materialize.
The economist David Autor offers the “O-ring production function” to support the inductive case for roles for humans in Digital Age economies.10 Even when we cannot imagine these roles we should be confident of their existence. The O-ring production function, originally described by Michael Kremer, takes its name from components of the space shuttle Challenger. The failure of an O-ring caused the 1986 explosion of Challenger soon after launch. The O-ring production model describes a collaborative production process in which failure of any one step in the chain of production leads the entire production process to fail. If all the other links in the chain are becoming more reliable then there is an increasing value in improvements to the remaining, less reliable link. Autor explains that “when automation or computerization makes some steps in a work process more reliable, cheaper, or faster, this increases the value of the remaining human links in the production chain.”11 The remaining human workers command higher wages as the machines around them get better. We should, according to Autor, expect a process of better wages and conditions for humans with the skills to insert themselves into the production chains of the digital economy. We see evidence for Autor’s pattern in the skills required to operate and maintain the very complex machines of the digital economy. The software engineers of the Digital Revolution command higher wages than the mechanics of the Industrial Revolution. Patching a hacked computer network takes more skill than replacing the broken part of a spinning jenny.
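Kremer’s O-ring model can be sketched numerically. Output is proportional to the product of the quality of every task in the chain, so raising the reliability of the automated links multiplies the payoff from improving the one remaining human link. The quality figures below are illustrative assumptions, not numbers from Kremer or Autor.

```python
# Illustrative sketch of Kremer's O-ring production function: output is
# proportional to the product of the quality q of every task in the
# chain, so a single weak link drags down the whole product.

def output(qualities, scale=100.0):
    """Value produced by a chain of tasks with given qualities (0..1)."""
    value = scale
    for q in qualities:
        value *= q
    return value

# A chain with four automated links and one human link (qualities assumed).
human_q = 0.9

# Early automation: machine links are only moderately reliable.
early = output([0.7, 0.7, 0.7, 0.7, human_q])

# Later: the machine links approach near-perfect reliability.
late = output([0.99, 0.99, 0.99, 0.99, human_q])

# Marginal value of improving the human link from 0.9 to 0.95
# in each regime:
gain_early = output([0.7] * 4 + [0.95]) - early
gain_late = output([0.99] * 4 + [0.95]) - late

print(f"gain from better human link, early machines: {gain_early:.2f}")
print(f"gain from better human link, late machines:  {gain_late:.2f}")
```

The same improvement to the human link is worth roughly four times more once the surrounding machine links become reliable, which is Autor’s point: better machines raise the value of the remaining human step.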
It’s possible that there will be human O-rings in the economies of the Digital Age. What’s less clear is that a human worker’s status as a digital economy O-ring will translate into higher wages for her. We may judge that human contributions are essential to Digital Age production chains but not pay those who provide them very much. Currently, human doctors are essential to the provision of healthcare. The salaries of the doctors are influenced by a general recognition that medical expertise is difficult to acquire. Few people complete the tortuous educations of specialist cardiologists. Andrew McAfee and Erik Brynjolfsson present their prediction of the responsibilities of human O-rings in the medicine of the future. They acknowledge that medical diagnosis is largely an exercise in pattern matching. “If the world’s best diagnostician in most specialties—radiology, pathology, oncology, and so on—is not already digital, it soon will be.”12 Humans are nevertheless indispensable in the presentation of these diagnoses. McAfee and Brynjolfsson continue: “Most patients … don’t want to get their diagnosis from a machine.” Humans will also be required to encourage patients to adhere to challenging treatment regimes. Perhaps humans will be essential to Digital Age medicine. The next question is how much these human O-rings will be paid. They are unlikely to command the salaries of today’s specialist radiologists, pathologists, or oncologists, simply because many people can fill these roles. You don’t need a decade of training to encourage diabetics to perform more frequent blood sugar tests when a very accurate diagnostic-bot has informed you that their glycemic control is poor. A much greater share of Digital Age health budgets will go to those who provide the technologies than to the humans drafted in to work with them.
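The claim that diagnosis is largely pattern matching can be made concrete with a toy sketch: a nearest-neighbor rule that labels a new case by its similarity to past cases. The features and labels here are invented for illustration; real diagnostic systems are vastly more sophisticated, but the underlying logic of matching new patterns to old ones is the same.

```python
# Toy illustration of diagnosis-as-pattern-matching: classify a new
# case by the label of its most similar past case (1-nearest neighbor).
# Features and labels are invented for illustration only.

import math

# Past cases: (feature vector, diagnosis). The features might stand in
# for normalized lab values in a real system.
past_cases = [
    ((0.90, 0.80), "condition A"),
    ((0.85, 0.75), "condition A"),
    ((0.20, 0.30), "condition B"),
    ((0.15, 0.25), "condition B"),
]

def diagnose(case):
    """Return the diagnosis of the most similar past case."""
    _, label = min(past_cases, key=lambda rec: math.dist(rec[0], case))
    return label

print(diagnose((0.88, 0.79)))  # closest to the "condition A" cases
```

A system like this improves simply by accumulating more past cases, which is why access to data, rather than medical training, becomes the scarce input.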
The O-ring production model may not show that the human workers of the Digital Age will be paid much, even if we judge their contributions to be essential. Perhaps Autor will count the demonstration that humans are essential to the work processes of the Digital Age as a success. In the section that follows, I question this assumption.
The Protean Powers of the Digital Package
Maybe there will always be a need for humans in the economies of the Digital Age. I doubt that the O-ring production function offers job security to humans who find ways to be useful to potential employers. Autor’s argument applies to roles in production chains and not to their occupants, be they human or machine. The traditional way to improve a link filled by a human is to pay to improve the skills of the current human worker or to hire a more-skilled human replacement. This leads to better pay for humans who remain in the production chain. But another way to improve that link in the production chain is to replace human workers with more efficient and cheaper machines. It would be wrong to suppose that human workers will be completely eradicated from a production chain. Our ingenuity helps us to find other ways to contribute. But as we demonstrate the value of these new contributions, we provide a powerful economic incentive for our replacement by the protean digital package. We see the difference between the Industrial Revolution and the Digital Revolution. A foreman in a factory of the early 1800s could feel much more secure in his job than does a human worker who finds a new role in a digital economy production chain. No modified power loom threatened to take the foreman’s job.
The principal problem for inductive optimism lies in the capacity of the machines of the Digital Revolution to follow us into new lines of work. Digital technologies are protean in a way that the technologies of the Industrial Revolution were not. A power loom casts the handloom weaver out of a job. But it has no capacity to pursue the weaver’s son into his job as factory foreman. Machines capable of mind work can do this. We can always invent new tasks for human workers—there is no reason to think that the supply of potential human jobs is finite. The problem is inventing new tasks that cannot predictably be better performed by the machines of the Digital Revolution. Advances in machine learning will permit machines to do mind work. The protean nature of machine learning means that though we may think of new jobs that seem better suited to the digital package, it will be difficult to prevent digital machines from doing them better than us.
The optimists find little difference between forecasts of doom about work made at the time of Industrial Revolution and today’s despairing prognoses about the Digital Revolution. But there’s a big difference. The Industrial Age created jobs beyond the imaginative limits of its contemporaries. But the hope that the future would contain many jobs for humans would have found support in the many jobs of those times that could not be done by power looms or any other technology produced by the Industrial Revolution. A sporadically employed handloom weaver would have little difficulty in thinking up jobs that faced no threat from the varieties of automation brought by the Industrial Revolution. His son would be ill-advised to take up handloom weaving. But jobs keeping the books of commercial enterprises, in the army, or in domestic service would predictably continue to be available. No steam-powered machine could do these jobs. The protean nature of computers makes the focus of today’s anxiety about technological change broader. The parents of the early decades of the twenty-first century aren’t so much worried about children being unable to follow them into the family line of work. They struggle to think of any job predictably on offer in the Digital Age. The existence of many jobs during the Industrial Revolution not under threat from steam power and its allied technologies would have supported confidence about future jobs. The protean nature of the digital package speaks against an expectation of jobs for humans in the Digital Age.
We should understand the limited value of economists’ conjectures about the effects of technological progress on current patterns of employment. In chapter 1, we saw that relevant differences between yesterday and tomorrow should reduce confidence in their inductive inferences. The data supporting these conjectures about today’s economic trends are uninformative about the changes brought by the novel combination of data and AI. A very good explanation of trends up until now may fail to take into account changes brought by the Digital Revolution. Economic data about automation’s effects on work in the recent past are uninformative about predictable changes in digital technologies.
Consider the influential account of the effects of technological change on employment from David Autor and David Dorn.13 Autor and Dorn challenge the received view among economists that technological change favors workers who are more skilled and therefore make better use of new technologies. According to them, this hypothesis of “skill-biased technological change” fails to adequately account for an observed polarization in the economies of technologically advanced nations. There have been increases in employment and wages of the most skilled, but there have also been increases in employment and wages in the comparatively unskilled service occupations. Autor and Dorn define service occupations as “jobs that involve assisting or caring for others, for example, food service workers, security guards, janitors and gardeners, cleaners, home health aides, child care workers, hairdressers and beauticians, and recreation occupations.”14 They propose a better explanation of this polarization: the “task-biased technological change” (TBTC) hypothesis.15 Dorn writes, “In the TBTC model, computers do not have a differential impact on workers based on their education levels, but based on the task content of their occupations.” Autor and Dorn surmise that automation is responsible for this polarization. Many middle-income jobs are being lost to automation. We’ve seen how machines can do the mind work of accountants both better and more cheaply. But jobs at the top and bottom are less susceptible to automation.
Dorn highlights the features of hotel work that make it difficult to automate.16 The cleaning of hotel rooms is repetitive. But according to Dorn, this repetitiveness does not translate into routines that can be programmed into computers. He says, “For hotel cleaning to be a routine job, it would be necessary that the cleaning of one room would encompass exactly the same work steps and physical movements as the cleaning of the next room. But in practice, every guest will leave her room in a slightly different state. Apart from differences in cleanliness, guests can leave towels, pillows, toiletries, pens and many other objects that belong to the hotel in different spots within the room.” Dorn continues: “It would be very challenging for a robot to find and recognize all of the hotel’s objects, assess their state of cleanliness, and take the appropriate measures of cleaning or replacing them. Compared to humans, robots are often very limited in their physical adaptability, and cannot grip or clean many different types of objects.”17
If digital machines are to do hotel work, then they cannot do it as humans do. Businesses will need to find different ways to get this hotel work done if they want to automate it.
There are many historical precedents for technological advances disrupting familiar human ways of doing work. Before the Industrial Revolution much production was done in workers’ homes. Merchant employers would put out materials to rural producers who would typically work on them at home, bringing finished products back to their commissioners. It would have been difficult to see how the Industrial Revolution innovation of the steam engine could have any positive impact on work done at home. Miniaturized steam engines that could be installed in home workshops were beyond the ken of Thomas Newcomen or James Watt. The innovation of the steam engine required that people cease working at home and travel to a factory. We should expect analogous changes in the Digital Age’s centers of production.
We see significant progress on the dehumanization of work and the workplace to better meet the needs of digital machines in Amazon’s fulfillment centers. Writing in Wired, Marcus Wohlsen notes that Amazon’s fulfillment centers are not organized in ways that make sense to humans. They are not like department stores that subdivide items in ways that meet the expectations of human shoppers. If you want running shorts, then look in sportswear. If you’re after a gift for a child’s birthday, then seek out the toys department. It would be difficult to design a robot capable of searching department stores with the same facility as human shoppers. In an Amazon fulfillment center, items are grouped in ways that take little interest in the way humans group like with like. Their locations are stored in the fulfillment center’s computers. Wohlsen says “… unlike the Dewey Decimal System, the codes don’t signify anything about the category of what’s in the cubby. Items are simply shelved where they fit, with identical copies stowed in spots throughout the warehouse to make it less likely a worker will have to travel far to find one.”18 Amazon’s computers know exactly where everything is—they have no need for the step in locating tennis balls that goes “These are sporting equipment, so they must be in the sports department.” Items’ locations make little sense to humans whose first instinct when searching for men’s shorts is to seek out the menswear section, but that doesn’t matter if Amazon’s AIs treat its workers as biological drones. Amazon’s replacement of these humans by Kiva robots is an obvious next step. Amazon makes progressively less use of the advantages that its human workers have over Kivabots. Kivabots would be flummoxed by the layout of a department store but do just fine with the inhuman layouts of Amazon’s fulfillment centers.
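The “chaotic storage” scheme Wohlsen describes can be sketched as a simple index: each item code maps to a list of bin locations with no semantic grouping, and the system directs the picker to whichever stowed copy is nearest. The item codes and coordinates below are invented for illustration.

```python
# Sketch of "chaotic storage": items are shelved wherever they fit,
# often in several spots, and only the computer's index knows where.
# Item codes and bin coordinates are invented for illustration.

# Index: item code -> list of bin locations (aisle, shelf).
index = {
    "B00TENNIS": [(3, 12), (41, 2), (17, 8)],   # tennis balls
    "B00SHORTS": [(3, 13), (29, 5)],            # men's shorts
    "B00TOYCAR": [(41, 3)],                     # toy car
}

def nearest_bin(item_code, picker_at):
    """Return the stowed copy closest to the picker's position."""
    def steps(bin_loc):
        # Manhattan distance as a stand-in for walking distance.
        return abs(bin_loc[0] - picker_at[0]) + abs(bin_loc[1] - picker_at[1])
    return min(index[item_code], key=steps)

# Tennis balls sit beside men's shorts in one aisle and a toy car in
# another: nothing about the layout groups "sporting goods" together.
print(nearest_bin("B00TENNIS", picker_at=(40, 0)))  # -> (41, 2)
```

Note that the layout optimizes walking distance, not human comprehension: the knowledge of where things are lives entirely in the index, which is exactly why the human picker can be swapped for a robot without redesigning the warehouse.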
The Digital Revolution is turning many workplaces into human-hostile environments much in the way that the warming of the Coral Sea is making it a hostile environment for coral reefs.
The task of programming an industrial robot to respond to the chaos of a hotel room from which a guest has recently checked out seems truly immense. The challenge for hotels is to change so that their rooms both meet guests’ expectations and suit the cleaning robots that enter when the guest leaves. Hotels will look to indulge guests’ desires to be randomly and sometimes disgustingly messy, while exploring ways to encourage guests to make messes that cleaner-bots can cope with. They will use machine learning to find useful patterns in the messes made by many millions of hotel guests. Remember that we need not suppose that machines will do all hotel work for there to be major problems for the hotel industry as a source of employment. Imagine that cleaning bots cannot safely navigate the corridors of hotels. Human employees are needed to carry the bots into each hotel room. That leads to fewer human hotel employees and a further blow to the work norm.
Will Humans Always Control the Last Mile of Choice?
We should not place too much confidence in the preservation of human O-rings in the hotel industry. But perhaps we shouldn’t be mourning the abolition of low-skilled jobs if the prospects for human high-skilled workers are favorable. According to Autor and Dorn, high-skilled jobs are also resistant to automation. Those who might otherwise have missed opportunities to educate themselves will get the message that the future will contain very few jobs driving delivery trucks or cleaning hotel rooms. They will acquire management and professional skills and reap the economic rewards brought by high-skilled occupations. According to Dorn, a common feature of high-skilled jobs is that they “draw on human ability to react to new developments and problems, and to come up with new ideas and solutions.” He suggests that computers complement human workers in these areas, but they are not substitutes for them.
This confidence about the prospects for high-skilled workers finds partial support from Pedro Domingos, the advocate for machine learning whose ideas we have explored in chapters 1 and 2. I say “partial support” because we have seen how Domingos imagines machines outdoing the capacity of the most educated and creative humans to come up with novel responses to disease. The news may be better for highly trained managers than it is for professionals whose education involves internalizing knowledge that is more efficiently and completely acquired by machine learners.
Domingos is confident humans will control the “last mile of choice.” He offers the following characterization of the relationship between human deciders and the decisions taken by increasingly powerful machine learners: “The last mile is still yours—choosing from among the options the algorithms present you with—but 99.9 percent of the selection was done by them.”19 The machines will pass their conclusions on to humans who will make the final choice. The role of CEO of choice seems to play to human strengths. A good CEO lacks the specific skills and expertise of most of her subordinates. These skills and expertise tend to distract from her main role, which is to adopt an organization-wide perspective. The CEO’s expertise lies in making choices about the overall direction of the organization. Tesla Motors CEO Elon Musk needs to know a great deal about how electric cars work and could work in the future, but it would be a mistake for him to busy himself with the minutiae of a new design for windshield wipers. He operates principally in the last mile of choice.
Again, a good explanation for trends up until now should not be relied upon as a forecast of the Digital Age. I argue that the last mile of choice is an especially bad place to locate the humans of the Digital Age. It violates a principle of good collective decision-making according to which it is wrong to give less competent deciders authority over more competent deciders. Whether one party can serve as a useful final decider with authority over the decisions made by another party does not depend purely on facts about that party’s competence. It depends on an assessment of the relative competence of the candidate final decider and the deciders over whom authority is claimed. This relative competence matters more than any objective assessment of the absolute quality of decisions. It is not enough for someone to demonstrate that they can make very good decisions about the dos and don’ts of heart surgery to claim the role of final decider over your procedure. They must demonstrate a capacity for final decisions at least as good as that of those over whom they would exercise authority, and of other candidates for the role. This has implications for human deciders in the Digital Age. The urge to place humans at the summit of the hierarchies of choice in the Digital Age reflects present bias about computers and an unwarranted belief in human exceptionalism. We should assess humans’ long-term eligibility for that role relative to predicted digital deciders. It’s not enough to make better decisions than any other human.
When considered in the long term, the importance of decisions taken in the last mile of choice makes it the worst place for an organization in the Digital Age to locate its remaining human workers. The last mile contains decisions where the advantage of future machines over humans is likely to be especially great. If you had to give human deciders a mile, it should not be the last one. The final mile brings what is, in decision-making terms, the point of no return. Foolish decisions cannot be scrutinized and corrected by a decider that comes later. Suppose Michelangelo had taken on an apprentice sculptor whom he assessed as particularly talentless. He is obliged to offer the apprentice some way to contribute to the forthcoming work. If Michelangelo has any interest in the quality of the sculpture, he should not allocate the last mile of sculpting to the talentless pupil. Mistakes made earlier may not be entirely reparable, but at least il Divino can do something to tidy them up. There is no such opportunity if the hack takes charge of the last mile of sculpting, no opportunity for Michelangelo to tidy up a few misdirected blows, cleverly refashioning a botched heroic sword into a discreet dagger.
The qualities that make one human final decider better than another human candidate for that role are unlikely to guarantee success into the Digital Age. One vice of a human final decider is micromanagement—the allocation of too much attention to the minutiae of tasks performed by subordinates. Human managers who seek to micromanage tend to lose sight of the big picture. Digital final deciders will predictably make better final decisions because their micromanagement can inform their all-things-considered resolutions, and vice versa.
We can describe this capacity in a way that is neutral between the mind work of humans and computers. A computer has active memory that includes tasks that it is currently working on. Humans have something analogous—what psychologists describe as a “mental workspace” in which we locate the tasks we are actively addressing. Computers slow down when their active memory gets clogged with tasks. Human decision-makers become less efficient as their mental workspace fills with tasks.
Effective human final deciders conserve their mental workspace by avoiding cognitively bogging themselves down in the detail of a task assigned to a subordinate. Good human subordinates know how to give a final decider the information she needs to make good decisions. They do not presume to pre-empt those decisions, but they are careful to omit details that hamper a final decider’s understanding of how the processes they are reporting on relate to other processes in the organization.
The danger of clogged mental workspace is starkly demonstrated in tragedies that have occurred in the cockpit. Planes crash when pilots become absorbed with the problem of how to fix an apparently faulty dial and lose situational awareness, failing to notice a fatal loss of altitude. The ideal final decider in the cockpit must allocate some of his mental workspace to proximal dangers but reserve sufficient workspace to an overall awareness of how the aircraft is flying. This is a difficult trade-off for humans to make. It seems to require split attention in which a pilot attends to a new alert without losing a general sense of how the plane is flying.
An effective human final decider zealously guards her mental workspace. Digital final deciders have another strategy available to them. Their designers can simply give them more active memory. If your computer slows down when you instruct it to simultaneously perform too many tasks, you can boost its memory. For digital final deciders, attention to the minutiae of one problem need not lead to poorer big picture choices. We should not overstate the capacities of today’s computers. Computers do not have limitless active memory. Active memory is a precious resource that computer engineers diligently preserve. But we are subject to present bias if we assume that the specific bandwidth limits of today’s machines will limit tomorrow’s machines.
Some people take great pride in their capacity to multitask—to deal simultaneously with many different streams of information that require distinct responses. Rather than genuinely multitasking, humans serially single-task, rapidly switching their focus from one stream of information to another. This process of switching back and forth leads to significant declines in performance. It explains how a pilot might become so absorbed by an aberrant dial that she fails to notice a precipitous decline in altitude. She neglects to switch her attentional focus back to assessing how the plane is flying. Computers are, in contrast, genuine multitaskers. The number of distinct tasks performed by a computer is limited by its processing power. Giving a powerful computer additional tasks shouldn’t lead to any decline in its performance of an existing task so long as sufficient processing power continues to be allocated to it.
Consider the pragmatic trade-offs in decisions about what information to relay to a human final decider in the context of the jetliner cockpit. The sensors of a modern jetliner have access to a great deal of information about what it is doing, and what is going on outside of it. Part of good cockpit design is to work out how to display this information to the pilot in a way that respects human cognitive limits. There’s only so much of the information available to an aircraft’s sensors that can usefully be presented to a human flight crew. The design of aircraft instruments and dials must respect the biological limits of human pilots. There are probabilistic thresholds built into the design of cockpit instrumentation. Good cockpit design omits information assessed as having a low probability of making a difference to the flight. Once we disabuse ourselves of the twin distortions of present bias about digital machines and the belief in human exceptionalism we can appreciate possibilities for final deciders with much greater bandwidth. We should open ourselves to the realization that information that fails to exceed the probabilistic threshold to be worth reflecting in the dials and instruments of a cockpit occupied by a human flight crew could be worth passing on to a final decider with much greater bandwidth. This final decider could make use of data about the angle and velocity of winds that are rightly deemed not worth passing on to a human flight crew. A tiny difference in the velocity or angle of a wind is unlikely to make a difference to flight safety. But, every now and then, it will. If there were no limits to the calculations and calibrations that can be introduced into a pilot’s active memory, then this difference should be considered. We are unlikely to ever have a computer with infinite active memory. But Moore’s Law and related generalizations suggest futures in which computers get ever closer to this ideal limit.
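The probabilistic-threshold idea can be put in schematic form. The following is a minimal sketch, not a model of real avionics: the sensor names, probabilities, and thresholds are all invented for illustration.

```python
# Hypothetical sketch of threshold-based filtering of sensor information.
# Every name and number here is invented; real cockpit design involves
# far more than a single relevance probability per reading.

def select_for_display(readings, threshold):
    """Keep only readings whose estimated probability of mattering
    to the flight meets the display threshold."""
    return {name: value
            for name, (value, p_relevant) in readings.items()
            if p_relevant >= threshold}

readings = {
    "altitude_ft":     (35000, 0.99),   # almost always worth showing
    "airspeed_kt":     (480,   0.99),
    "crosswind_delta": (0.3,   0.001),  # tiny wind shift: rarely matters
}

# A human crew's limited mental workspace forces a high threshold...
human_panel = select_for_display(readings, threshold=0.01)
# ...while a high-bandwidth digital decider can afford a much lower one.
machine_feed = select_for_display(readings, threshold=0.0)
```

On this toy picture, the tiny crosswind shift never reaches the human panel but is available to the digital decider, which is exactly the asymmetry the paragraph describes.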
So long as we allocate the last mile of choice to human pilots there seems to be a limit to how much safer we can make our planes. It’s safer to board a plane whose pilot’s active memory is sufficiently capacious to consider even improbable threats than it is to board a plane whose pilot’s active memory is more limited and has been forced to disregard threats that do not exceed a certain probabilistic threshold. There are many such individually insignificant threats. When these threats are considered in combination, we see the risks human final deciders in the cockpit subject us to.
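The arithmetic behind this worry is simple: many individually negligible risks compound. The per-threat probability and the count below are invented purely to show the shape of the calculation, under the simplifying (and unrealistic) assumption that the threats are independent.

```python
# Hypothetical illustration of how individually tiny risks compound.
# Both numbers are invented for the sake of the arithmetic.

p_single = 1e-6      # chance any one disregarded threat causes harm
n_threats = 10_000   # number of such threats a limited decider ignores

# Probability that at least one ignored threat materializes,
# assuming independence:
p_any = 1 - (1 - p_single) ** n_threats   # about 0.01
```

Each threat was a million-to-one shot, yet the chance that some ignored threat matters is roughly one in a hundred: no longer negligible.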
What goes for the last mile of choice in the cockpit goes also for the last mile of choice elsewhere. Consider the predicament for the human CEOs of the Digital Age. Attributes such as guts, intuition, and insight into the motives of others tend to feature prominently in today’s accounts of business success. Today’s business leaders successfully adapt cognitive hardware designed for the Pleistocene to the combination of opportunities and threats presented by early twenty-first century business environments. Perhaps we should be impressed by the cost-cutting genius of General Electric’s Jack Welch and the iconoclasm of Apple’s Steve Jobs. But we should consider these successes in a context that includes the epic fails in which human business leaders made ruinously bad choices. Jobs has his counterpart in John Sculley, the Apple CEO from 1983 to 1993. Sculley’s poor understanding of Apple’s products prompted him to sack Jobs. Think of the vast quantity of data that a business AI will be able to draw on to make its recommendations. It will draw on the totality of available information about the stock prices, business cycles, historical records of good and bad corporate takeovers, and so on. Machine learners will search for patterns in this data and seek to apply what they discover to future decisions. It seems plausible that it will improve on humans who draw on the small subset of the available data that their “guts” direct them to.
The all-things-considered character of final decisions is likely to be a strength of future AIs when compared with selective attention and imperfect multitasking of human final deciders.
It’s important not to understate the implications of the predicted domination of machines over the last mile of choice. Some writers hope that there will still be a role for humans over this decisive period of choice. For example, McAfee and Brynjolfsson allow that human diagnosticians could have some part to play in the detection of disease in the Digital Age. Speaking of a medical diagnosis offered by a future AI, they say, “It might still be a good idea to have a human expert review this diagnosis, but the computer should take the lead.”20 So long as humans play some part in the final mile then we have some latitude to describe our contributions as the most important. CanceRx is incapable of objecting that some human “expert” who spent much of the time watching it do its thing gets to claim the credit and the Nobel Prize for a radical new approach to leukemia. CanceRx can make a mechanical genuflection toward the need of a human decider to be “the man” or “the woman.” But the point is not merely that, when considered objectively, human contributions will be less important. If the gap between human capacities as final deciders and the capacities of the machines is so great, then humans should play no role whatsoever. It doesn’t matter how many Wikipedia pages on cardiac function you’ve browsed, when you’re watching an open-heart surgical procedure it’s probably best that you keep your thoughts about which incision to make next to yourself. Your input is likely to make the procedure go worse, not better. The same warnings apply to the human expert reviews of the diagnoses of Digital Age medical AIs. When autopilots get manifestly better at flying planes than human pilots we should not grant humans any opportunity to override the autopilot’s choice. The cockpit doors of Digital Age jetliners should be both terrorist proof and human pilot proof.
There is a defense of human final deciders that acknowledges our inferiority in this role. We could accept heightened risk as part of the price of dealing with organizations controlled by humans. We knowingly engage in many dangerous activities. Reflective passengers understand that they do not board a commercial jet with an absolute guarantee of safe arrival. They understand this risk and judge it acceptable. They might accept that giving employment to a human pilot—the human final decider in the cockpit—justifies a little bit of additional risk of violent death. If we happily judge the pleasure of seeing Siena sufficient to justify boarding a commercial jet, then why shouldn’t we accept that it’s worth a bit of extra risk to keep a human pilot in gainful employment?
This book draws a distinction between the digital and social economies. The digital economy centers on the value of efficiency. We care about efficiency in the social economy too, but we permit our preference for humans to sometimes lead us to prefer less efficient arrangements involving humans to more efficient arrangements without humans. We are prepared to tolerate inefficiency when we get to enjoy the benefits of interacting with humans. But there are activities in which there don’t seem to be compensating benefits from interacting with humans. We treat occasional errors by our pilots differently from the errors by our baristas. If we got to interact with our pilots we might judge the additional risk of death as warranted. But the threat of terrorism has increasingly isolated them from us. Where there are no compensating benefits from humans we will be increasingly intolerant of the additional risk of dealing with humans.
We should take the long view of our assessments of risk. There is no objective feature of human physiology or psychology that makes a certain level of risk acceptable and a slightly higher level of risk unacceptable. We judge risk relative to other things that we do. Simply put, life has gotten safer over the past centuries. This has led us to find activities that were formerly safe enough to be too risky. Many people today find the risks associated with smoking to be unacceptable. We should expect that, were they offered cigarettes, people in the Middle Ages would make different assessments. They would be able to look at the health statistics on smoking that deter many of us and consider these in the light of features of their daily existence—awaiting the next visitation of the Black Plague, getting forcibly inducted into a neighboring lord’s army, and starving when a crop fails. They would be likely to judge an increased risk of cancer from smoking to be entirely acceptable.
The relativity of assessments of risk has implications for the human final decider in the cockpit. Suppose we continue to make improvements to aspects of our lives outside of flying. Advances in digital technology produce improved treatments for many of our diseases. Today’s travelers happily tolerate the risks associated with human pilots. But tomorrow’s passengers predictably won’t. They will view consenting to travel in a human-piloted jetliner as hideously reckless.
A Conjecture about the Labor Market of the Digital Age
The threat to the work norm from the digital package is sharpened when we give it some economic context. Suppose that humans follow Autor’s advice and seek to turn ourselves into digital economy O-rings. We will attempt to justify high rates of pay by pointing to the great importance of our contributions. These efforts could work in the short term. However, the long view exposes this strategy as self-defeating. What’s essential to the economy is the O-ring role—if these O-rings are truly essential then Digital Age economies without them will crash or explode. But this reasoning says nothing about who or what fills that role. The human worker in a Digital Age production chain should understand that the better she does, the stronger the economic incentive to design the digital machine that will fully or partially replace her. The human mind workers remaining in the production chains of the Digital Age should feel like hunted foxes, fleeing from temporary refuge to temporary refuge just ahead of the pursuing hounds. The economic argument against human work suggests that refuges will be temporary. The only way to feel truly safe is to convince those finding new applications for the digital package that the economic value of what you do for your living is negligible.
Earlier we saw Autor and Dorn’s explanation for the polarization of the Digital Revolution workforce. They explain this by pointing to the differential impacts of automation. Jobs at the top and bottom are in general harder to automate than are jobs in the middle. I suggested that good explanations of the present will fail to predict the future if conditions change. We should expect reorganization of workplaces to make jobs at the bottom easier to automate. The creativity and insight celebrated by business biographies must be judged against the pattern-finding prowess of future machine learners. The same digital superintelligence that we could unleash on cancer will predictably be directed at the challenges of business.
I conjecture that the polarization that Autor and Dorn present as a feature of today’s labor market will characterize the labor market of the future, but for purely economic reasons. The worst remunerated jobs will last longest simply because those who fill them will work for less. Hotel workers are poorly paid. If they were paid at the same rate as accountants, we might expect to see an increase in the pressure on their jobs from automation. Automation is characterized by steep one-off costs associated with introducing the systems that will do the work of human workers, followed by much lower costs of maintaining and occasionally updating the machines. It differs from the high ongoing costs of paying for human employees who expect periodic pay rises. The strategy of working for less reduces these ongoing costs. Poorly paid human workers should last longer in the workforce before improvements in automation make the economic case for replacing them irresistible. Accountants go before hotel workers, but eventually even the poorly paid can no longer compete on price.
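The economic logic here can be made concrete with a simple break-even calculation. All figures below are invented for illustration; only the shape of the argument matters.

```python
# Hypothetical break-even sketch: when does automation pay for itself?
# Setup cost, maintenance cost, and wages are invented numbers.

def payback_years(setup_cost, annual_maintenance, annual_wage):
    """Years before replacing a worker recoups the one-off setup cost."""
    annual_saving = annual_wage - annual_maintenance
    return setup_cost / annual_saving

# The same automation system, weighed against two wage levels:
accountant = payback_years(setup_cost=500_000,
                           annual_maintenance=20_000,
                           annual_wage=120_000)    # 5.0 years
hotel_worker = payback_years(setup_cost=500_000,
                             annual_maintenance=20_000,
                             annual_wage=30_000)   # 50.0 years
```

On these invented numbers, automating the accountant pays for itself in five years while automating the hotel worker takes fifty: the poorly paid last longer, but only until the setup costs fall far enough.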
I suggested that the digital package can more easily take on the work challenges of those at the top than supposed by Autor and Dorn. Machine learners should improve on the decision making of captains of industry. In late 2015, Mark Zuckerberg stated, “My personal challenge for 2016 is to build a simple AI to run my home and help me with my work.” He expected that this AI would be a bit “like Jarvis in Iron Man.”21 The suggestion that Zuckerberg’s version of Jarvis will merely “help” him arrogates to Zuckerberg more than his due share of credit for the business triumphs of a future Facebook helmed by a combination of Zuckerberg and Facebook-Jarvis.
Zuckerberg is likely to make poorer business choices than will a Facebook-Jarvis trained up on the totality of information about stock prices, business cycles, and historical records of good and bad corporate takeovers. His choices should be viewed as we will predictably view the recommendations of human experts in the genetics of cancer when the alternative is CanceRx. But there is a reason that we should not expect Zuckerberg to yield his position at Facebook’s helm to a machine. He enjoys running the company he founded. There’s a good chance that considering which businesses are potential acquisition targets for Facebook puts him into a flow state. To put this another way, Zuckerberg does not want to become a rentier—an individual who lives off rents yielded by his assets. We know this because he could become a rentier right now. Zuckerberg has amassed sufficient wealth to spend his life flitting from luxury resort to luxury resort. He values his own agency. Even when it is clear that he has nothing of value to add to Facebook-Jarvis’s recommendations he will enjoy exercising control over the last mile of choice over which businesses make suitable Facebook acquisition targets. Zuckerberg and his inheritors have the money to exercise agency in the economies of the Digital Age even when a sober evaluation of their competence compared with the competence of Facebook-Jarvis tells them they should become rentiers and split their time between luxury ski resorts and tropical island paradises.
When a poor person’s job is better done by a machine he finds himself facing penury. Things are different for the proprietor of the machines that render the poor person redundant. Zuckerberg values his own agency. He can pay for the luxury of exercising it when digital deciders are clearly superior. When he does this, he acts like a spoiled medieval princeling who chooses to lead an army into battle when the grizzled low-born veteran would do a better job. He gets to lead the army because he’s the prince.
In chapter 5 I explore the prospects for spreading this celebration of agency more broadly than those who have the money to indulge the illusion that they can out-think the machines. But I conclude this chapter with a philosopher’s take on how best to approach disputes about what the future will bring.
Gaining Philosophical Perspective on the Dispute between Optimists and Pessimists
On one side of the dispute about the role for humans in Digital Age economies, we have the optimism of economists who place faith in the ingenuity of humans to find work that is both productive and therapeutic. They support this optimism with an impressive inductive argument that points to many other cases in which jobs seemingly materialized out of nowhere. They can cite a history of cases in which despair about the future of work was followed, after a painful interlude of technological unemployment, by jobs that we didn’t, and indeed couldn’t, have imagined before their arrival. We compare the jobs we lost with those we gained and consider ourselves ahead on the deal. We should expect to find new ways to make ourselves useful as our machines get more and more powerful. On the other side, we have the pessimists who point to the protean powers of the digital package. When these powers are combined with the economic argument against human work, we should expect the package to promptly eliminate newly discovered sources of productive and therapeutic work. The idea that we might place ourselves at the top of decision-making hierarchies wrongly arrogates to ourselves a capacity to make better decisions than AIs. The protean powers of the digital package permit it to fill any new economic roles.
Should we be optimists or pessimists about the economic value of human agency in the Digital Age? There is some evidence that a bias toward optimism is beneficial for individuals. People who are clinically depressed tend to have more realistic assessments of their social standing. If optimism bias is the price for navigating the social world with confidence, then it seems a price worth paying. When it comes to the Digital Revolution, however, there are costs in the resolution to “look on the bright side.” We are better advised to approach the Digital Age’s uncertainties as pessimists. It may be good for individuals to have an optimism bias, but a pessimism bias often works better for us as collectives. Collective pessimism is an essential hedge against the Digital Revolution’s turning out worse than we might expect.
What is the rational way to respond to a disagreement about the future? The traditional way is to try to work out who is right. You should earnestly commit yourself to the task of working out which side of this debate has the strongest arguments. This approach works very well for academic seminars. But it doesn’t work as well when participants in those debates emerge from the seminar room and start offering advice to those potentially affected by the topics of their debates. There are dangers in premature resolution of the debate between optimists and pessimists about the work norm in the Digital Age. Neither the optimists nor the pessimists possess crystal balls or DeLorean sports cars modified to permit time travel. Under these circumstances, we should place greater emphasis on an attitude toward the future that best insures us against possible misfortunes.
You should take the same approach that you take to purchasing fire insurance for your dwelling. My house in Wellington has stood for over one hundred years without burning down. I’m quite confident that it won’t burn down in the near future. It has an open fireplace that we scrupulously avoid using even on the coldest winter nights. We nevertheless have an insurance policy against fire and are up to date with our premiums. If my house never suffers fire damage, then this particular expense seems a waste of money. The money would have been better spent on meals out and movie tickets. But this is the wrong way to think about insurance. It can be rational to insure against unlikely events if you think that the occurrence of these events would be a disaster. When I consider purchasing a fire insurance policy I consider how likely a fire is to occur. If I assess the probability of that event at zero, then I shouldn’t purchase. But suppose that I decide that there is some non-negligible probability of a fire. I consider the cost of the policy and how bad a house fire is likely to be. If the premiums are expensive relative to the value of my house I don’t purchase a policy. But if those premiums are sufficiently cheap relative to that value I do.
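The reasoning in this paragraph is standard expected-value arithmetic. Here is a minimal sketch; the probability, loss, and premium figures are invented for illustration, and a real buyer’s risk aversion would justify premiums somewhat above the bare expected loss.

```python
# Hypothetical expected-value sketch of the fire-insurance decision.
# All three inputs are invented illustrative numbers.

def worth_insuring(p_fire, loss, premium):
    """Buy the policy when the premium is below the expected loss.
    (A risk-averse buyer will accept premiums somewhat above this.)"""
    return premium < p_fire * loss

# An unlikely event with a catastrophic cost can still justify cover:
worth_insuring(p_fire=0.001, loss=800_000, premium=600)   # True
# If the probability is genuinely zero, no premium is worth paying:
worth_insuring(p_fire=0.0, loss=800_000, premium=600)     # False
```

The same structure carries over to the argument of this chapter: the “premium” is imaginative effort spent preparing for a jobless future, and the “loss” is marching unprepared into one.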
It’s useful to take this insurance mindset into the debate between the optimists and the pessimists.22 We can view the claims of the pessimists as constituting a variety of insurance against a future in which advances in digital technologies reduce the economic value of human agency to such an extent that humans cannot find work. If the optimists are right, then this imaginative effort may seem wasted. Fantastic jobs that we failed to imagine materialize. Instead of working out how to respond to the threat of Digital Age joblessness, creative people might have invested their imaginative labor elsewhere. We can count the cost of this effort in terms of the symphonies and computer games that could have been created were people not so worried about how to respond to a threat that never materialized.
Consider Aesop’s fable about the boy who cried wolf. The shepherd boy becomes bored watching his sheep, so he decides to amuse himself by crying “Wolf! Wolf!” The villagers dash out of their dwellings to drive the wolf away and find there is no wolf. The boy repeats this a few times. Finally, there is actually a wolf. The boy’s calls of alarm elicit no response. The wolf scatters the herd, leaving the boy in tears.
You can understand the parable’s message as emphasizing the great importance of telling the truth. If the shepherd boy didn’t have such a track record of fibbing, people would have listened to him when he warned of a real wolf. But there’s also a message in the parable for those cast in the role of the villagers. When dealing with claims about which we cannot be certain, don’t place too much confidence in past nonappearances of wolves. Each earlier false alarm seems to strengthen the inductive case for the wolf never turning up. But our growing confidence in the wolf’s nonarrival does not obviate the importance of maintaining a lookout, so long as we can do so in ways that do not harm other vital undertakings. Shepherd boys who persist in making stuff up should be scolded. But equally, if it’s relatively easy to scare the wolf away, the villagers should rush to defend their precious herds whenever they hear the cry of “wolf.” We could supplement the traditional message of Aesop’s fable—“If you persist in making things up, people won’t believe you in the future”—with “If someone warns you that something bad will happen, think about the cost of doing something about it. If the cost is low, don’t be overly influenced by inductive arguments against doing anything.”
Some optimists about technological progress point to the “Great Horse Manure Crisis of 1894” to illustrate the foolishness of pessimism about the future.23 Each day in the period leading up to 1900 saw more than 50,000 horses ferrying people and stuff around the streets of London.24 These horses generated huge volumes of manure. A quote widely attributed to the Times of London in 1894 considered how much manure horses generate and made the forecast: “In 50 years, every street in London will be buried under nine feet of manure.”25 We today look back and wonder how the alert observer could have missed the imminent arrival of the automobile, which would radically reduce the number of horses in London and turn the Great Horse Manure Crisis into a problem that required no solution. No one likes to be laughed at. The fact that it’s posterity that’s laughing at you does little to lessen its sting.
The insurance mindset I recommend here suggests a different evaluation of the Great Horse Manure Crisis. The Londoners of the 1890s lacked time-travel-equipped hansom cabs. Perhaps some people particularly well informed about the automotive advances of Karl Benz in the 1880s and 1890s might have ventured conjectures about the potential for his inventions to transform our cities. But they couldn’t have been certain. What if the fumes produced by Benz’s automobiles proved toxic, preventing their introduction to London? The insurance mindset asks how much it would have cost to do some thinking about what to do about London’s increasing quantities of horse manure. Thinking about ways in which London’s streets could be periodically cleansed seems like cheap insurance against an uncertain future.
We evaluate insurance policies not as accurate or failed predictions about the future, but in terms of their cost. Insurance cover against improbable misfortunes can be worth purchasing if it is cheap. Some imaginative effort seems a cheap premium for insurance against a possible future in which the work norm does not survive into the Digital Age. If the optimists turned out to be right, then we had nothing to worry about. But that doesn’t mean that it was wrong to acquire insurance against the bad outcomes described by the pessimists simply because they didn’t materialize. If we do not put significant thought into how to confront a future without jobs, then we are underinsured for the Digital Age. We can hope that the optimists are right much in the way that someone who forgoes fire insurance hopes that her house will not catch fire.
Suppose the prediction that the Digital Revolution poses a threat to the economic value of human agency not posed by earlier technological revolutions turns out to be false. After a “temporary phase of maladjustment” ends, the new technologies create new challenging and rewarding jobs that could not have been imagined before their arrival. Each new advance demands new varieties of human O-rings. These new jobs better promote flow than the jobs that went out of existence. People who perform them look pityingly at those stuck with the drudgery of categories of work that thankfully no longer exist. Should we regret the doomsaying about the death of work and the alternative scenarios that it forced us to consider? Not if we view that thinking as insurance against a future in which each new economic role is better performed by machines. How should I think about the many years of premiums I have paid insuring my house against fire? There’s a sense in which this is wasted money. But it’s not if I view the policy as offering me protection into an indefinite future. This is the way we can think about our preparation for a future in which the Digital Revolution destroys many jobs and creates some new jobs, but too few to sustain the work norm. We can be thankful that the Digital Revolution left the work norm intact, but also be grateful for the preparation for any future technological revolution that does end human work. The work norm might flourish into the Digital Age. But you should be much less confident that this success will be repeated for future technological revolutions. Will the work norm survive into a technological age centered on the immense productive powers of quantum mechanics?
Keeping up with your insurance premiums offers you protection into the indefinite future. So it is with some creative thinking about new possibilities for human work. In the chapters that follow I direct this thinking away from the suggestion that we might be better than machines at the tasks they are designed to do. In chapter 2 I argued that the quest for machines with minds takes a distant second place to the quest for machines that do mind work. Secure work for humans depends on the facts about us of which we are justifiably most proud. We have minds. The most powerful mind workers of the Digital Revolution will not be members of the mind club.
Concluding Comments
This chapter explores the debate between optimists and pessimists about the economic value of human agency. Optimists expect the work norm to survive into the Digital Age. Pessimists allow that the Digital Age will contain jobs done by humans but doubt that these will suffice to maintain work as a norm for humans. The optimists bet on human ingenuity. We’ve always responded to past challenges by finding new ways to be useful. Pessimists counter by pointing to the protean powers of the digital package. I suggest that we should place greater emphasis on what the pessimists say. We should view pessimism as valuable and cheap insurance against a jobless future.
Preparing for a world in which the pessimists are right is relatively cheap insurance. It requires people to think seriously about how to respond to a future in which the Digital Revolution destroys many more jobs than it creates. This pattern places the work norm under threat. If the optimists are right, there will be some wasted imaginative effort. We could have just relaxed and enjoyed fantasizing about the new jobs done by our children and grandchildren. But that cost is trivial compared with what we lose should we march confidently into a Digital Age in which many jobs are destroyed but none of the jobs that we were hoping for ever actually arrives.
In the chapter that follows I consider a different way to value human agency. We should not look to the Digital Revolution to create new categories of jobs for humans to fill. We should accept that any new economic roles will be better filled by machines than by humans. We should instead consider precisely what we value about interacting with each other.