5

How AI Put Our Jobs in Jeopardy

AS A KID, Ken Jennings could only pick up one English-language station on his parents’ seventeen-inch Zenith television set in Seoul, South Korea.

It was a US Army TV station, which mainly showed repeats of old shows, but it was enough to remind him of home. Two of Jennings’ favorite shows were the original Star Trek and the American general-knowledge game show Jeopardy!

Aside from TV, Jennings gravitated toward computers. He was part of the first generation of children to have personal computers in the home. He still remembers the feeling of excitement the day his dad brought home an Apple II computer to practice coding on. Jennings was fascinated by the idea that a computer, given the right programming, could demonstrate intelligence. One of his favorite episodes of Star Trek, “Court Martial,” introduced him to the topic of Artificial Intelligence through a sequence in which Spock plays the ship’s super-smart computer at chess.

The interest in machine intelligence stayed with Jennings. In high school, he wrote a term paper about the science fiction of Kurt Vonnegut. Vonnegut’s first novel, Player Piano, tells the story of a near-future society in which mechanization has eliminated the need for human workers. As a result, there is a rift between the wealthy engineers and managers who keep society running and the lower classes, whose jobs have been replaced by machines.

“I thought it was a great story, but light years away from happening,” he says.

Jennings went on to study computer science at Brigham Young University in Utah. He most enjoyed those classes that related to Artificial Intelligence. After he graduated, he got a job as a software engineer for a health care company in Utah, although it failed to live up to his expectations.

“It was pretty dull. I got into computers because it seemed like a way to solve puzzles all day,” he continues. “Instead, I was writing applications trying to sell doctors on moving to New Mexico. The high-end theoretical stuff, the stuff that interested me, was nowhere in sight.”

Jennings’ job as a software engineer bothered him for another reason, too. He quickly realized that he was a pretty mediocre computer programmer. The encyclopedic memory that had always made him great at tests and trivia games turned out not to help too much when it came to writing code for eight hours each day. Jennings was smart, but he couldn’t shake the feeling that writing good computer code was probably a more accurate intelligence test than knowing the name of the baseball player who hit the first home run in All-Star Game history.

Not particularly enjoying his adult vocation, Jennings decided to dive into something he had loved as a kid. On a whim, in the summer of 2003, the twenty-nine-year-old Jennings and a friend drove from Salt Lake City, Utah, to the Jeopardy! studios in Culver City, California. The aim of the trip was for Jennings to take the show’s qualifying exam. The test went well. Nine months later, Jennings got a call saying that he had been chosen as a contestant. Before long, he was back in Los Angeles, under the bright lights of the Jeopardy! television studio.

“Hey there, Utah,” Jennings said in a cheesy intro video played before his appearance. “This is Ken Jennings from Salt Lake City, and I hope the whole Beehive State will be buzzing about my appearance on Jeopardy!”

In his first appearance on the show, Jennings eked out a win on a technicality. Nonetheless, he walked away as the new Jeopardy! champion with $37,201. The following episode he won again. And again. And again. As the weeks passed, the game show seemed to get easier for him. The margin between Jennings and the runners-up grew wider and wider. His defeated opponents began producing T-shirts to commemorate their status as “Jennings’ Roadkill.” Jennings was like a champion boxer who seemed to get stronger, not more fatigued, the more rounds went by. The public took notice, too. Ratings for Jeopardy! jumped 50 percent compared to the previous year. In July 2004, the game show was America’s second most popular TV program—losing out only to CBS’s crime investigation drama CSI.

And all the time Jennings kept winning, smashing every previous record in Jeopardy! history. Once a no-name software engineer from Salt Lake City, he suddenly had a Hollywood agent and a book deal. One day Jennings’ agent phoned to say he had received offers to appear on both Sesame Street and The Tonight Show.

“It was all totally surreal,” Jennings says. “It had never happened in my lifetime that Americans cared so much about who was on a quiz show.”

Jennings’ streak eventually came to an end following a record seventy-four consecutive shows. He was sad to lose, but Jeopardy! had done him wonders. He was smart, he was in demand, and—thanks to his winnings—he was rich. In all, Jennings’ seventy-four-show streak had netted him an impressive $2,520,700.

Elementary, My Dear Watson

Among the people who watched Ken Jennings’ astonishing Jeopardy! streak was a man named Charles Lickel. Lickel was a senior manager at IBM Research. He wasn’t a regular Jeopardy! viewer by any means, but in the summer of 2004 it was a hard show to ignore. One evening, Lickel and his team were eating dinner at a steakhouse. At seven o’clock on the dot, Lickel was stunned to see the dining room empty as all the other patrons poured into the restaurant’s bar to watch Jeopardy!, leaving their steaks to get cold.

Like a lot of people at IBM, Lickel had been searching for the next big AI Grand Challenge ever since the company’s chess-playing computer Deep Blue had beaten world champion Garry Kasparov in 1997. With Jeopardy!, he thought he might have found it. Jeopardy! had its downsides, of course. Its lack of scientific rigor made it unattractive to some people at IBM. Jeopardy! was meant as entertainment, not as a serious measure of intelligence, they argued. But the naysayers were overruled.

To those IBM staffers who believed in the idea of a Jeopardy!-playing computer, the task’s imprecise messiness was exactly what made it exciting. Unlike chess, which has rigid rules and a limited board, Jeopardy! was far less predictable. Questions could be about anything, and routinely relied on complex wordplay. The contestant has to supply the correct “question” to a given clue, so a typical example might be: “As an adjective, it means ‘timely’; in the theatre, it’s to supply an actor with a line.” The correct response is: “What is ‘prompt’?” To give an answer, IBM’s computer would first have to decode the complicated clue, which often involved puns. Puns are challenging for a computer because they expose the inexactness of language: we often use the same word in different contexts to mean different things. For a human, this means we don’t need a language with billions of unique words. For a computer, it means that it isn’t enough to simply build the quiz show version of Google. A regular search engine can answer around 30 percent of Jeopardy! questions by looking for statistically likely answers based on keywords, but struggles with the remaining 70 percent. IBM’s computer would need to go further than this.

The raw data the Jeopardy!-playing computer had available to answer its questions was approximately 200 million pages of information, extracted from a variety of sources. All of it had to be stored locally, since IBM’s machine would be unable to access the Internet during the Grand Challenge. To drill down and discover the right answer to whichever question it was asked, IBM used an enormous parallel software architecture (a type of high-performance computing in which a large number of calculations are carried out at the same time) called DeepQA. DeepQA used natural language processing to extract the structured information contained in each Jeopardy! clue. After working out what a question meant, DeepQA would generate a list of possible answers, giving each one a different weighting according to the type of supporting information, its reliability, its chances of being right, and the computer’s own learned experience. These possible answers were then ranked, and the winning entry became the computer’s official response.
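To make that pipeline concrete, here is a minimal sketch of the final ranking step in Python. Everything in it—the candidate answers, the evidence types and the weights—is invented for illustration; DeepQA itself combined hundreds of scoring algorithms, each tuned by machine learning.

```python
# A minimal sketch of the ranking step described above: each candidate answer
# accumulates evidence scores, each evidence type carries a weight, and the
# highest combined score becomes the system's response. The candidates, scores
# and weights below are invented for the illustration.
EVIDENCE_WEIGHTS = {
    "keyword_match": 0.2,       # how well supporting passages match the clue's keywords
    "type_match": 0.5,          # does the candidate fit the answer type the clue asks for?
    "source_reliability": 0.3,  # how trustworthy the supporting source is
}

candidates = {
    "prompt": {"keyword_match": 0.8, "type_match": 0.9, "source_reliability": 0.7},
    "cue":    {"keyword_match": 0.7, "type_match": 0.4, "source_reliability": 0.6},
    "timely": {"keyword_match": 0.6, "type_match": 0.2, "source_reliability": 0.9},
}

def combined_score(evidence: dict) -> float:
    """Weighted sum of the evidence scores for one candidate answer."""
    return sum(EVIDENCE_WEIGHTS[kind] * score for kind, score in evidence.items())

ranked = sorted(candidates, key=lambda c: combined_score(candidates[c]), reverse=True)
best = ranked[0]
print(f"What is '{best}'?")  # -> What is 'prompt'?
```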

The project began to gain momentum. Inside IBM it was nicknamed Blue J, before being renamed Watson after IBM’s first CEO, Thomas Watson. It became better and better at answering questions. During initial tests in 2006, Watson was given 500 clues from past Jeopardy! episodes. Of these, it managed to get just 15 percent correct. By February 2010, the system had been improved sufficiently that it could defeat human players on a regular basis.

In February 2011, Watson faced off against Ken Jennings and another former Jeopardy! champion named Brad Rutter in a multi-part televised special. Jennings was excited about the possibility. He had been in school when Deep Blue had beaten Garry Kasparov at chess, and in his mind this was his chance to “be Kasparov” at a key moment for AI. Except that he truly believed he would win. “I had been in AI classes and knew that the kind of technology that could beat a human at Jeopardy! was still decades away,” he says. “Or at least I thought that it was.”

In the event, Watson trounced Jennings and Rutter, taking home the $1 million prize money. Although the human players put up a good showing, there was no doubt who was the game show’s new king. Jennings, in particular, was shocked. “It really stung to lose that badly,” he admits.

At the end of the game, the dejected Jennings scribbled a phrase on his answer board and held it up for the cameras. It was a riff on a famous line from The Simpsons, and it seemed strangely appropriate given what had happened.

It read: “I for one welcome our new computer overlords.”

A World of Technological Unemployment

Ken Jennings’ crack was as neat a summary as you could hope for of one of the perceived dark sides of Artificial Intelligence. Forget leather jacket-wearing Austrian robots trying to take over the world; the real, imminent threat AI systems pose relates to our jobs. The phrase “technological unemployment” was coined by the British economist John Maynard Keynes in 1930. In a speculative essay entitled “Economic Possibilities for our Grandchildren,” Keynes predicted that the world was on the brink of a revolution in the speed, efficiency and “human effort” involved in a wide variety of industries. “We are being afflicted with a new disease of which some readers may not yet have heard the name, but of which they will hear a great deal in the years to come,” Keynes wrote about the rise of labor-saving machines.

Technology has always created unemployment. As new technologies are invented, the number, types and makeup of the jobs that exist in society shift to accommodate them. Consider, for example, the comical-sounding job of “knocker-up,” which flourished during the Industrial Revolution and is unheard of today. A knocker-up was a professional whose job was to wake sleeping workers so that they could get to work on time. To do this, he or she used a long stick (usually bamboo) to tap on a client’s bedroom window, not moving on to the next house until certain that the occupant was awake. Needless to say, knocker-ups were put on the road to obsolescence when the French inventor Antoine Redier patented an adjustable mechanical alarm clock in 1847.

Not all technological unemployment has been quite so obscure as the lonely death of the knocker-up. The economist Gregory Clark has convincingly argued that the working horse was one of the biggest victims of the invention of the internal combustion engine. According to Clark, there were 3.25 million working horses in England in 1901. By 1924, less than a quarter of a century later, that number had fallen to fewer than 2 million: a steep drop of around 38 percent. While there was still a use for horses plowing fields, pulling wagons and working in pits, the arrival of the internal combustion engine had driven down the price of that work so far that it often wouldn’t even pay for a horse’s feed.

As machinery became more and more advanced, this trend picked up speed through the twentieth century and beyond. Thanks to dual advances in Artificial Intelligence and its sibling field of robotics, automation is now sweeping across more industries than ever. In warehouses, robots are increasingly used to select and box up products for shipping. In the service industry, robots are used to prepare food—and even serve it to customers. The San Francisco startup Momentum Machines, Inc. has built a robot capable of preparing hamburgers. Current models can turn out around 360 per hour, and are capable of doing everything from grinding the meat and toasting the buns to adding fresh ingredients such as tomatoes, onions and pickles. Another company, Infinium Robotics, builds flying robot waiter drones, which navigate around restaurants using infrared sensors and can carry the equivalent weight of two pints of beer, a couple of glasses of wine and a pizza.

The advantage of these machines is obvious. While their research and development costs outstrip those of training a human, once that investment has been made they cost just a fraction of what a person would charge to carry out the same task. As the BBC notes of the waiter drones, they are “sturdy, reliable, and promise never to call in sick at the last minute.” Alexandros Vardakostas, the cofounder of Momentum Machines, puts it even more bluntly: “Our device isn’t meant to make employees more efficient. It’s meant to completely obviate them.”

The use of smart devices of the type described in chapter three is also having a significant impact on certain types of employment. In the US city of Cleveland, the local government has distributed special bins, equipped with radio-frequency identification tags, to residents. Thanks to the technology, city crews are able to see whether residents are putting their garbage and recycling out for pickup. As a result, Cleveland eliminated ten pickup routes and slashed its operating costs by 13 percent. Although this is a net positive for efficiency, fewer pickups also mean that fewer garbage collectors are needed.

The most unexpected shift when it comes to AI’s impact on employment, though, is what it means for white-collar jobs that don’t require manual labor. The tasks that today’s machines are getting better at carrying out instead involve cognitive labor, in which it is our brains that are being replaced, not our bodies. This development was forecast by none other than Warren McCulloch—one of the inventors of the neural net—back in 1948. Speaking at an event called the Hixon Symposium on Cerebral Mechanisms in Behavior, at the California Institute of Technology, McCulloch told the assembled audience:

As the Industrial Revolution concludes in bigger and better bombs, an intellectual revolution opens with bigger and better robots. The former revolution replaced muscles by energy, and was limited by the law of the conservation of energy, or of mass-energy. The new revolution threatens us, the thinkers, with technological unemployment, for it will replace brains with machines limited by the law that entropy never decreases. These machines, whose evolution competition will compel us to foster, raise the appropriate question: “Why is the mind in the head?”

McCulloch’s last point is the most pertinent one. The Industrial Age leaders of industry assumed it was their intelligence that would protect them from technological replacement. Manual work reduced men to flesh-and-muscle machines, rendering them obsolete the moment superior machinery came along. But smart people? Industrial Age machinery wasn’t likely to displace them any time soon, was it? Today’s reality is somewhat different. As we’ve seen, the past few years have ushered in extraordinary advances in what machines are capable of. Machines have become not simply tools to increase the productivity of human workers, but workers themselves. Computers are still at their best when dealing with routine tasks that follow explicit rules. However, advances in AI mean that the scope of what counts as routine has become far broader.

For example, a little more than ten years ago, driving a car was considered something a machine would never do, because of the unstructured nature of the task, which requires processing a constant stream of visual, aural and tactile information from the immediate environment. That all changed on October 9, 2010, when Google published a blog post revealing that it had developed “cars that can drive themselves.” Kitted out with laser range-finders, sonar transmitters, radar, motion detectors, video cameras and GPS receivers—along with some cutting-edge AI software—the cars can negotiate the chaotic complexity of real-world roads. To date, Google’s fleet of Googlemobiles has driven around 1 million miles without causing an accident. The one serious accident they have been involved in happened “while a person was manually driving the car.”

What does this mean for taxi drivers and long-distance truckers? One possible future can be glimpsed by looking at air travel. Around half a century ago, the flight deck of an airliner had seats for five highly skilled and well-paid individuals: a pair of pilots, a navigator, a radio operator and a flight engineer. Today, just two of those—the pilots—remain. And they may not be around for long, either. “A pilotless airliner is going to come; it’s just a matter of when,” said Boeing executive James Albaugh in 2011. No doubt seeing that as a challenge, Google has already created its Project Wing initiative, which takes its work on autonomous vehicles from the road into the skies. “Let’s take unmanned all the way,” Project Wing’s leader Dave Vos said during a panel discussion at the annual conference of the Association for Unmanned Vehicle Systems International. “That’s a fantastic future to aim for.”

It’s tough to predict how many other industries will be disrupted by Artificial Intelligence, although we can make an educated guess. In 2013, a study carried out by the Oxford Martin School concluded that 47 percent of jobs in the US are susceptible to automation within the next twenty years. The authors predicted that there would be two main “waves” of this AI takeover. “In the first wave, we find that most workers in transportation and logistics occupations, together with the bulk of office and administrative support workers, and labor in production occupations, are likely to be substituted by computer capital,” they wrote. In the second wave, tasks involving finger dexterity, feedback, observation and work in confined spaces will also fall prey to AI.

What is likely to surprise people is just how broad some of these categories may prove to be. AI has already made inroads into many of the information-based tasks that are traditionally the domain of high-cognition professionals like doctors and lawyers. Lawyers, for instance, are being squeezed by the arrival of tools like LegalZoom and Wevorce, which use algorithms to guide customers through everything from drawing up contracts to filing for divorce. This kind of automation will particularly affect younger workers, such as junior lawyers, who previously learned their trade by carrying out routine tasks like “discovery”—gathering the documents that will be used as evidence in a court hearing. Thanks to e-discovery firms, this work can now be done by machines for a fraction of the cost of paying an army of junior lawyers to do it. As a result, it is likely that many law firms will stop hiring junior and trainee lawyers altogether.

Even high-level executives may have to watch their backs. In 2014, a venture capital firm in Hong Kong named Deep Knowledge Ventures announced that it had appointed an AI to its board of directors. Given the same level of influence as the human board members, the Artificial Intelligence’s role was to weigh financial and business decisions regarding investments in biotechnology and regenerative medicine. At least according to its creators, the AI’s strength was its ability to automate due diligence and to spot historical trends that even a human expert might miss.

Whichever way you slice it, work as we know it is about to change.

The Positives of Techno-Replacement

In 1589, the English inventor William Lee invented a stocking frame knitting machine. According to legend, he did this because the woman he was wooing showed more interest in knitting than she did in him. (Which, of course, raises the question of what kind of outcome a person hopes for when they try to woo a beloved by putting her out of business.) Looking to protect his invention, Lee traveled to London and, at considerable expense to himself, rented a building with the aim of showing his machine to Queen Elizabeth I. The Queen turned up to the demonstration but refused to grant Lee the patent he was requesting. The words she used to explain her decision have gone down in history: “Thou aimest high, Master Lee. Consider thou what the invention could do to my poor subjects. It would assuredly bring to them ruin by depriving them of employment, thus making them beggars.”

At the time, England had powerful guilds, which eventually had the effect of driving William Lee from the country. As the historian Hermann Kellenbenz has observed, these guilds “defended the interests of their members against outsiders, and these included the inventors who, with their new equipment and techniques, threatened to disturb their members’ economic status.”

It is highly unlikely that any government, regulatory body or (especially) venture capital firm in the UK or United States would act in the same manner today. A person applying for a patent is scrutinized on the originality of his or her idea, not on the long-term impact it is likely to have on society.

However, despite the lack of anyone willing to behave as Queen Elizabeth I did, it may be that the long-term implications of AI are not as bleak for employment as some would have you believe. Yes, the risk Artificial Intelligence poses to our livelihoods is one of the most pressing issues we need to examine, but there are also plenty of reasons to be optimistic.

Let’s start with what may sound like a controversial idea: that there is a moral imperative to get rid of certain types of work.

To give an example most people can surely agree on, there were more than 1,000 chimney sweeps employed in Victorian London. Unlike the romantic picture presented in movies like Disney’s Mary Poppins, chimney sweeps endured a brutal existence. Children were often used as sweeps because their small frames allowed them to climb down the narrowest chimney stacks, where adults could not reach. Child chimney sweeps started working as young as three years old. Since most would literally outgrow the job by nine or ten, some bosses underfed their employees so that they could continue fitting down chimneys. Sweeps could fall down chimney stacks, or become stuck in them without anyone knowing what had happened, dying from exposure, smoke inhalation or even burning. Many children suffered irreversible lung damage from constantly breathing in soot.

Regardless of our views on youth unemployment, not many would be in favor of bringing back child chimney sweeps today. Technology has replaced our need for people to perform this role, both through smarter power-sweeping brushes and, more important, through the replacement of coal and wood burners by gas and electric heating as our primary means of staying warm. Although it put a number of people out of work, it is fair to say that this type of technological unemployment was a positive one. This isn’t an entirely new realization. Writing in 1891, at the very height of the Victorian era, Oscar Wilde argued: “All unintellectual labor, all monotonous, dull labor, all labor that deals with dreadful things, and involves unpleasant conditions, must be done by machinery. On mechanical slavery, on the slavery of the machine, the future of the world depends.”

Today, an equivalent type of “monotonous . . . labor” that takes place in “unpleasant conditions” might be the manufacturing of devices like our smartphones and tablets, which is regularly carried out in places like China and India. The pristine white boxes our iPhones arrive in, complete with the sunny slogan “Designed by Apple in California,” make it easy to forget that what we are holding is, in essence, an Industrial Age product pieced together in an Eastern factory under sometimes tough and unpleasant conditions.

One of Apple’s largest manufacturers is a Taiwanese company called Foxconn. Foxconn operates on a scale that is unimaginable to many of us in the West. The single largest private sector employer in China, it employs around 1.4 million people: roughly equal to the total population of Glasgow. Foxconn’s factories are more like giant campuses than factories as we usually picture them. The workers live and work on-site, sleeping in multi-person dormitories before trudging off to spend hours on an assembly line. Foxconn has frequently come under fire for its treatment of its human workers. In 2012, living conditions at a company factory in Taiyuan, in China’s northern Shanxi province, were reportedly so poor that they sparked a riot. There have also been multiple suicides among Foxconn workers, which has led to suicide nets being erected outside the factories and dorms.

If we could replace this work with automation, would we be morally obligated to do so? Perhaps so. And perhaps we will. In 2011, Foxconn’s CEO Terry Gou announced plans to replace 1 million of Foxconn’s factory workers with manufacturing robots, known as “Foxbots.” Like many of the predictions we’ve seen about the speed at which such breakthroughs are possible, Gou’s initial estimates were off. Having suggested that the robot replacements would be complete by the close of 2014, at the end of that year Foxconn was still hiring humans while reporting problems with the production accuracy of its Foxbots. Gou has since pushed his estimate back to 2016 for when the Foxbot army will be ready to take over manufacturing of devices like the iPhone. Although Foxconn is likely developing Foxbots for financial rather than ethical reasons, the net result is still an ethical one in the sense of rendering unpleasant jobs obsolete, even if it then opens up the problem of what to do with the newly unemployed workers (see here).

There are other illustrations, too, in areas we may not even currently view as fraught with ethical challenges. At present, an average of 43,000 people die in the United States each year in traffic collisions. That’s a higher figure than those killed by firearms (31,940), sexually transmitted diseases (20,000), drug abuse (17,000) and other leading causes of death. Advances in AI and automation will certainly help to cut down on these deaths. Tesla chief executive Elon Musk has argued that, once we reach the point where self-driving cars are widespread, it would be unethical to continue letting humans drive vehicles. “It’s too dangerous. You can’t have a person driving a two-ton death machine,” he said during an appearance at an annual developers conference held by Nvidia, the Silicon Valley company whose graphics chips power much of today’s computer vision. Musk thinks the transition will take some time due to the number of cars already on the road, but feels that it could happen within the next two decades. The toll on taxi and truck drivers might be a negative in the short term, but getting as many human drivers as possible off the roads may turn out to be a positive move in the end.

Out with the Old, in with the New

Of course, ethical concerns don’t mean anything if the choice is between doing a dangerous or unpleasant job and not being able to feed yourself and your family. It wouldn’t have been enough to ban child chimney sweeps in Victorian England if the government wasn’t also going to provide children with free education and a chance to better their employment opportunities. Getting rid of undesirable jobs is only good if we can replace them with something else. Fortunately, AI can be of service here, too. Although it is certainly true that technological advances have classically displaced certain types of work, they have also created new ones.

For instance, the invention of the horse-replacing internal combustion engine sparked a shift that transformed countries like the US from agrarian economies—based on the farming of crops and cattle—into industrial ones. Two centuries ago, 70 percent of American workers lived on farms. Today, automation has eliminated all but 1 percent of those jobs, with machines doing the rest of the work. Farm workers didn’t simply join the ranks of the long-term unemployed, though. Instead they moved to rapidly expanding cities and got jobs in factories.

This is what economists call the “capitalization effect,” in which companies enter industries where both demand and productivity are high. The result is a supply of previously unimaginable jobs that offsets the destructive effects of economic shifts. There is no compelling reason to believe that we won’t see a similar transition in the modern AI age. As in the shift from an agrarian to an industrial economy, a large proportion of today’s jobs will disappear within most of our lifetimes. However, digital technologies will also create a variety of new job categories, many of which were unimaginable just a few decades ago.

Consider the meteoric rise of content creators who make a living thanks to YouTube. In 2014, the popular YouTube star Felix Kjellberg—better known by his online moniker PewDiePie—earned $7 million from gaming commentary videos. With a subscriber count in excess of 37 million, PewDiePie is the star player in a growing “vlogger” job category, one that didn’t exist until 2000 and only took off after YouTube launched in 2005.

PewDiePie’s success is exceptional, but it’s part of a bigger story. More than 1,500 new types of occupation have appeared as official job categories since 1990, including roles such as software engineer, search engine optimization expert and database administrator. The use of AI within video games has meanwhile inspired millions of fans to seek out work as professional game developers. Like “vlogger,” the job of video game designer was nobody’s dream 200 years ago, yet today the video game industry is among the world’s most valuable entertainment industries. The launch of Grand Theft Auto V in September 2013 achieved worldwide sales of more than $634 million—making it the biggest launch of any entertainment product in history. By 2017, it is estimated that the video game industry will be worth $82 billion globally.

In 2014, 6 percent of the UK workforce was employed in one of these new job categories. This concentration is at its highest in major cities. In central London, such roles accounted for 8.6 percent of all jobs in 2004, increasing to 9.8 percent a decade later. As with many new consumer technologies, there is evidence that these new job categories start out with early adopters and entrepreneurs in cities, before diffusing to other regions as they become established.

The Revenge of the Mechanical Turk

These jobs don’t just involve building bigger and better AI systems, but also working alongside them. The latter roles are sometimes called Mechanical Turk jobs, named after a chess-playing automaton called “The Turk,” built in the eighteenth century by the inventor Wolfgang von Kempelen. The Turk toured Europe, where it beat talented chess players, including Napoleon Bonaparte and Benjamin Franklin. However, it was later revealed that the Turk was not really a machine at all, but rather a human chess master controlling the operations of a puppet-like “robot.” Much the same is true of today’s AI tools, which appear to be examples of 100 percent machine intelligence but are, in fact, a sort of hybrid intelligence requiring the input of both humans and machines at every stage. A Mechanical Turk job is any job assigned to humans because machines are not yet capable of carrying it out. As a result, this kind of work is sometimes described as “Artificial Artificial Intelligence.”

Many companies have experimented with AAI. The best known example is Amazon’s Mechanical Turk (MTurk) platform, which allows individuals and businesses to crowdsource humans to carry out what are known as HITs, or Human Intelligence Tasks. These can be anything from labeling the objects in an image (to make searching easier) to transcribing audio.
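For a sense of what “crowdsourcing humans” looks like in practice, here is a minimal sketch of posting a HIT using Amazon’s boto3 SDK for Python. The endpoint, reward, timings and question file are illustrative placeholders rather than values from the text, and the XML format of the question itself is left to the MTurk documentation.

```python
import boto3

# A minimal sketch of how a requester might post a Human Intelligence Task
# (HIT) to Amazon Mechanical Turk with the boto3 SDK. Values below are
# illustrative placeholders, not taken from the text.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    # Sandbox endpoint for testing; swap for the production endpoint to pay real workers.
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# An XML document describing the task shown to workers (hypothetical file;
# see the MTurk docs for the QuestionForm/HTMLQuestion format).
question_xml = open("label_image_question.xml").read()

hit = mturk.create_hit(
    Title="Label the objects in a photo",
    Description="List every object you can see in the linked image.",
    Keywords="image, labeling, tagging",
    Reward="0.05",                    # payment per completed assignment, in US dollars
    MaxAssignments=3,                 # how many different workers answer the same HIT
    AssignmentDurationInSeconds=600,  # time a worker has to finish once accepted
    LifetimeInSeconds=86400,          # how long the HIT stays listed
    Question=question_xml,
)
print("Created HIT:", hit["HIT"]["HITId"])
```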

Amazon is far from alone. In August 2015, Facebook launched “M,” a text-based AI assistant, similar to the technologies described in chapter four. Unlike Siri, Google Now and Microsoft’s Cortana, M uses a combination of both human and Artificial Intelligence to answer user queries. If the AI is unable to respond to a question satisfactorily, humans can take over the conversation.

Twitter also employs a large number of contract employees, called judges, whose job it is to interpret the meaning of different search terms that trend on the microblogging service. For instance, at 6:00 p.m. on October 3, 2012, Twitter experienced a sudden spike in US searches for the phrase “Big Bird.” Using its human judges, Twitter was able to determine that this was a reference to Mitt Romney (who was talking about government funding for public broadcasting) and not an explicit search for Sesame Street. Why were humans better than machines for this job? Because we understand oblique references more easily than machines do.

As Twitter engineers explained in a blog post: “After a response from a judge is received, we push the information to our backend systems, so that the next time a user searches for a query, our machine learning models will make use of the additional information.”

The need for these Mechanical Turk roles will only increase as companies invest in bigger and better AI systems. Amazon’s MTurk system, for example, was described in 2011 as having an active user base of “more than 500,000 workers from 190 countries.” It is likely that this number is significantly higher today.

The main criticism of Mechanical Turk systems is that, in many cases, the work is compensated very poorly. Even those Mechanical Turkers who live in the US currently make only around $1.60 per hour, with no worker protections or benefits. This is because the Human Intelligence Tasks—despite being hard enough to baffle many machines—are generally unskilled by human standards: jobs that the majority of people are more than capable of performing. Because of this, the potential global supply of workers is high, which drives down the cost of the work. The net result is what at least one critic has labeled a “Digital Sweatshop.”

That is, of course, if the workers making today’s AI systems smarter get paid at all. As we’ve seen so far, many of today’s most successful AI applications rely on crunching millions or even billions of pieces of data generated by humans. The unwritten user agreement is that companies give their products away for “free” on the condition that they then get to use the resulting data to sell ads or make their AI systems smarter. Like the examples above, Google’s online translation service appears to be 100 percent machine intelligence. In reality it is built on data provided by humans: it takes individual words and phrases that have already been matched up in existing human translations and applies those pairings to entire bodies of new text. Next time you use Google Translate, consider for a second that some of the Mechanical Turkers who make it possible are highly skilled human translators, often with PhDs in various languages.
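The underlying idea—reusing phrases that humans have already translated—can be illustrated with a toy sketch. The tiny Danish-to-English phrase table below is invented for the example; a production system mines billions of aligned phrase pairs and weighs many competing candidates rather than taking the first match.

```python
# A toy illustration of phrase-based translation: phrases previously aligned
# in human translations are reused to translate new text. The phrase table
# is invented for the example.
phrase_table = {
    "jeg elsker dig": "i love you",
    "jeg elsker": "i love",
    "dig": "you",
    "min ven": "my friend",
}

def translate(sentence: str) -> str:
    """Greedily cover the sentence with the longest known phrases."""
    words = sentence.lower().split()
    output, i = [], 0
    while i < len(words):
        # Try the longest span starting at position i that exists in the table.
        for j in range(len(words), i, -1):
            phrase = " ".join(words[i:j])
            if phrase in phrase_table:
                output.append(phrase_table[phrase])
                i = j
                break
        else:
            output.append(words[i])  # unknown word: pass it through untranslated
            i += 1
    return " ".join(output)

print(translate("Jeg elsker dig min ven"))  # -> "i love you my friend"
```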

Unlike the people who voluntarily sign up for MTurk tasks, these translators will never get paid anything for their contribution—other than the sum they were paid for carrying out the original contracted work, that is. Hanna Lützen may get paid by the Gyldendal publishing house for translating the Harry Potter books into Danish, but Google pays her nothing if those combined 1 million-plus words then help its system translate a love letter from your girlfriend in Denmark.

This differs from the legal issues surrounding similar “sampling” in the real world. For instance, in the world of hip-hop, music artists regularly chop up and reuse samples of songs by other musicians. When they do this, they have to pay for these samples to be cleared. If they fail to do so, legal action can follow, as it did in 2006 when a judge ordered that sales of the Notorious B.I.G.’s album Ready to Die be stopped because it used an excerpt from a 1972 song, entitled “Singing in the Morning,” without the proper permission. The German electronica band Kraftwerk has successfully argued in court that even the smallest samples of sounds—such as a few bars of a drum beat—are protected by copyright.

What does this have to do with the legal use of data? All of us are now Mechanical Turkers to some extent, since the data we help generate on a daily basis is what makes AI systems smarter. Whether it’s uploading photos to Facebook or typing in a block of twisted letters to prove our humanity to a CAPTCHA, we’re all helping to train the robot successors who are after our jobs. At some point in the near future, a serious conversation needs to be had about the value we place on data. If, as is often said, data is the oil of the digital economy, then we need to place a proper valuation on it.

Virtual reality pioneer Jaron Lanier has suggested that one way to do this would be a universal micropayment system. Lanier has given a few illustrations of how this might work. Imagine, he suggests, that you sign up for an online dating service where the data you provide to refine your own romantic matches also helps the company perfect its algorithms for attracting other users. Or that Facebook uses your profile picture in an ad to target a page to one of your friends. Another example might be Netflix using your viewing preferences to help commission a show like its Emmy award–winning House of Cards, which was created entirely on the basis of Netflix user data.* In cases like these, a formula could be established to determine both where the data originated and how important it was in shaping a given decision. That calculation would then trigger a micropayment to users, in the same way that a royalty is paid to a musician whose work is sampled by another artist.
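Lanier doesn’t spell out a formula, but a hypothetical sketch of how such attribution-weighted royalties might be computed could look like this; the pool size, user names and contribution weights are all invented for the illustration.

```python
# A hypothetical sketch of Lanier-style data micropayments: each user's
# contribution to a decision is given a weight, and a royalty pool is split
# pro rata. The weights, pool size and user names are invented; Lanier's
# proposal does not specify a concrete formula.
def split_royalties(royalty_pool: float, contributions: dict) -> dict:
    """Divide a royalty pool among users in proportion to their contribution weight."""
    total = sum(contributions.values())
    if total == 0:
        return {user: 0.0 for user in contributions}
    return {user: royalty_pool * weight / total for user, weight in contributions.items()}

# e.g. three users whose viewing data informed a commissioning decision,
# weighted by how influential their data was judged to be
payments = split_royalties(
    royalty_pool=1000.00,
    contributions={"user_a": 0.50, "user_b": 0.30, "user_c": 0.20},
)
print(payments)  # {'user_a': 500.0, 'user_b': 300.0, 'user_c': 200.0}
```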

The idea sounds far-fetched, but the law is still catching up with many of the technological shifts we’ve seen in the past decade, as precedents like the European Union’s “right to be forgotten” ruling against Google show. At some point, the question of data ownership is sure to come under scrutiny. To return to the music sampling analogy, a large number of cases of illegal sampling went under the radar in the early days. It was only later, when the technique became part of mainstream music, that artists suddenly found themselves in court facing multimillion-dollar fines for copyright infringement. Similarly, as the AI-driven shift in employment makes job categories like Mechanical Turker more prevalent, conversations need to be had about who owns the data driving AI systems. Implemented correctly, there’s no reason this shouldn’t benefit companies as well as individuals. The real value in many twenty-first-century businesses is the analyzable data they hold. If users were financially compensated for feeding data into these businesses, it would add an extra incentive to use them. If the kind of universal micropayments Jaron Lanier describes were applied to every piece of data we generate, it is not unthinkable that Mechanical Turkers could go from making $1.60 per hour to earning something closer to the UK minimum wage of $10.72, or even more. This would be a key step in establishing a digital framework in which AI systems get smarter, but humans are able to share in the wealth created.

The Human Element

Mechanical Turk jobs involve humans working behind the scenes, in AAI roles that are usually hidden from view. However, as AI becomes a larger part of all our lives, a number of companies have started emphasizing—rather than downplaying—the role humans play in their systems. Like Google, Facebook and other tech companies, Apple has competed fiercely to hire AI experts in recent years. According to a former Apple employee, the company’s number of machine learning experts has tripled or quadrupled in the past several years. As with these other companies, Apple uses humans as part of its largely AI-driven services. Unlike the other companies, however, Apple presents its human workers as a selling point, not simply as a stand-in for the bits of its technology that don’t quite work properly yet. When Apple introduced its much-anticipated Apple Music streaming service in June 2015, one of its most heavily advertised features was its reliance on humans with specialist music knowledge to curate playlists. “Algorithms are really great, of course, but they need a bit of a human touch in them, helping form the right sequence,” executive Jimmy Iovine told the Guardian newspaper shortly after Apple Music’s launch. “You have to humanize it a bit, because it’s a real art to telling you what song comes next. Algorithms can’t do it alone. They’re very handy, and you can’t do something of this scale without ’em, but you need a strong human element.”

In reality, algorithms can sequence music in a way that is palatable to many users. AI tools are able to generate playlists based on genre, era, artist, tempo, or countless other metrics. A number of companies (Apple included) have even explored technology such as mood-detecting headphones, so that tracks can be selected based on whether a listener happens to be out jogging or lazing on the sofa. But what Apple astutely noticed is that humans enjoy interacting with other humans. Apple Music’s human curators are not invisible cogs in an algorithmic process, but flesh-and-blood experts whose goal is to help you discover music an algorithm would have been unlikely to recommend. An AI can only recommend music based on the data it has about your previous favorite songs or what is popular with other listeners. A human expert, on the other hand, can do more than that.
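As a rough illustration of the kind of metadata-driven playlist generation described above, here is a toy sketch; the track catalog, field names and thresholds are invented for the example.

```python
# A toy sketch of metadata-driven playlist generation: filter a catalog by
# genre, era and tempo, then rank by popularity. All data here is invented.
from dataclasses import dataclass

@dataclass
class Track:
    title: str
    artist: str
    genre: str
    year: int
    bpm: int           # tempo in beats per minute
    popularity: float  # e.g. normalized play count, 0-1

catalog = [
    Track("Song A", "Artist 1", "electronic", 2012, 128, 0.91),
    Track("Song B", "Artist 2", "electronic", 1998, 135, 0.47),
    Track("Song C", "Artist 3", "rock", 2005, 120, 0.88),
]

def build_playlist(tracks, genre, year_range, bpm_range, length=20):
    """Pick the most popular tracks matching simple metadata filters."""
    matches = [
        t for t in tracks
        if t.genre == genre
        and year_range[0] <= t.year <= year_range[1]
        and bpm_range[0] <= t.bpm <= bpm_range[1]
    ]
    return sorted(matches, key=lambda t: t.popularity, reverse=True)[:length]

jogging_mix = build_playlist(catalog, genre="electronic",
                             year_range=(2010, 2016), bpm_range=(120, 140))
print([t.title for t in jogging_mix])  # -> ['Song A']
```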

Apple’s expert “tastemakers” include names like the popular DJ Zane Lowe, who left his high-profile job at BBC Radio 1 for a starring role in Apple’s new streaming music service. Others include former NWA rapper Dr. Dre and pop star Elton John—none of whom are low profile, or (presumably) working for $1.60 per hour. Depending on how big Apple Music gets, it is astonishing to think that a tech company could wind up being one of the biggest employers of human DJs on the planet.

This new focus on human traits like creativity and social intelligence will only become more important as AI gets smarter. Although Artificial Intelligence is becoming better at communicating in a humanlike way and is proving surprisingly creative in certain applications (as we shall see in the next chapter), these are skills that will remain prized in humans.

Observing this transition, Harvard University economics professor Lawrence Katz has coined the term “artisan economy.” Artisans are skilled workers who often carry out their work by hand. During the Industrial Revolution, artisans were increasingly displaced as automation took over; mechanical looms, for example, took jobs away from skilled hand weavers. Today, there is evidence that this trend is being reversed.

When Katz talks about an artisan economy, he doesn’t just mean weaving, of course. “Artisan economy” means the return of products and services that are not machine-driven and homogenous, but instead rely on human creativity and interaction. For instance, the carpenter who sells and fits standardized products will struggle with the rise of technologies like 3-D printing. But the carpenter who is able to assess their customer—working out what they’re going to want to use a new cabinet or desk for, and adapting the work to suit—will fare much better. Similarly, care workers who are emotionally checked-out and little more than babysitters could conceivably be replaced by robots. An artisanal dementia coach or home health aide—full of bright ideas to keep clients engaged—has the potential to flourish, particularly in a market with a growing elderly population. Much the same can be said for inspirational human personal trainers going up against smart wearable devices, human taxi drivers with insider knowledge of good places to visit going up against self-driving cars, and empathetic lawyers going up against services like Wevorce.

These artisan economy jobs are likely to be overwhelmingly “high-touch,” meaning that they rely on personal contact. This makes them tougher to replace with outsourcing, robots, or the right algorithm. Unlike the artisans of the Industrial Revolution, though, today’s workers in the artisan economy can use technology to augment, rather than replace, their employment opportunities. Scaling a business to reach millions, or even billions, of people is possible in a way that it never was before the digital age. In 2014, a story appeared on Business Insider about an SAT tutor who charges $1,500 for ninety minutes of one-on-one tutoring—carried out via Skype. Even in an age of educational apps and online learning tools, the tutor was able to command incredibly high prices thanks to his proven ability to raise test scores.

Another example of the artisan economy at work is Etsy, the online marketplace where people can sell handmade or vintage products. Having launched in 2005, Etsy currently offers more than 29 million different pieces of handmade jewelry, pottery, clothing and assorted other objets d’art. By 2014, gross merchandise sales for the site had reached $1.93 billion, with sellers taking home the vast majority of this. Trading on the popularity of artisanal goods, some sellers have proven incredibly successful, earning many thousands of dollars each month. Despite the site’s success, the focus on handmade artisan goods remains central. When it was revealed that one Etsy store owner was bringing in upward of $70,000 a month selling headbands and leg warmers that turned out to be mass-produced in China, there was an immediate uproar from the community.

Working out which tasks make lasting business sense in the artisan economy will be a matter of trial and error. The winners will likely be areas where non-machined irregularities are valued, such as the personal trainer who offers a personalized service to clients, or the executive who does more than just crunch numbers. In other fields, however, we will prove less willing to hand back tasks previously given to machines. No one wants humans instead of machines building their cars anymore: irregularities are a lot less appreciated when they happen on the motorway at high speed.

To cope with this paradigm shift we will also need to do a better job of training the next generation. Currently, education is stuck in the same Industrial Revolution paradigm it has been in for more than 100 years. In an age in thrall to the factory, it followed that schooling borrowed the same basic conveyor belt metaphor then being used to churn out identical Model T Fords. Standardized lesson plans were designed to teach students specific skills for pre-prescribed roles in the workplace. This standardization assumed that the skills students were learning were unchanging ones they would rely on for the rest of their lives. In today’s world, learned skills routinely become obsolete within a decade—meaning that continual learning and assessment are needed throughout people’s lives. In an age in which we have the Internet on every smartphone, we will also need to question the purpose of teaching children to mentally store large amounts of information through uninspired rote learning.

Barring some catastrophic risk, AI will represent an overall net positive for humanity when it comes to employment. Economies will run more smoothly, robots and AIs will take over many of the less desirable jobs and make new ones possible, while humans are freed up to pursue other, more important goals. Artificial Intelligence may be able to do a lot of the jobs we currently do—but humans are far from irrelevant.

After all, several years after Ken Jennings was roundly beaten by IBM’s Watson AI, we’re not yet letting our dinners grow cold to go and watch two AIs battle it out in trivia shows on TV. Despite the braininess on show in an episode of Jeopardy!, it’s the human personalities the audience really wants to see. This drama is ultimately what matters the most.