Notes

1. MIND THE GAP

Marvin Minsky, John McCarthy, and Herb Simon genuinely believed: Minsky, 1967, 2, as quoted in the text. McCarthy: “We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer” in McCarthy, Minsky, Rochester, and Shannon, 1955. Simon, 1965, 96, as quoted in the chapter epigraph.

the problem of artificial intelligence: Minsky, 1967, 2.

surpass native human intelligence: Kurzweil, 2002.

near term AGI: Quoted in Peng, 2018.

not everyone is as bullish: Ford, 2018.

autonomous cars [in] the near future: Vanderbilt, 2012.

revolutionize healthcare: IBM Watson Health, 2016.

cognitive systems [could] understand: Fernandez, 2016.

with [recent advances in] cognitive computing: IBM Watson Health, undated.

IBM aimed to address problems: IBM Watson Health, 2016.

“stop training radiologists”: The Economist, 2018.

M, a chatbot that was supposed: Cade Metz, 2015.

Waymo would shortly have driverless cars: Davies, 2017.

the bravado was gone: Davies, 2018.

widespread recognition that we are at least: Brandom, 2018.

MD Anderson Cancer Center shelved: Herper, 2017.

unsafe and incorrect: Ross, 2018.

A 2016 project to use Watson: BBC Technology, 2016.

the performance was unacceptable: Müller, 2018.

Facebook’s M was quietly canceled: Newton, 2018.

Eric Schmidt, the former CEO of Google: Zolfagharifard, 2016.

definitely going to rocket us up: Diamandis and Kotler, 2012.

AI is one of the most important: Quoted in Goode, 2018.

Google was forced to admit: Simonite, 2019.

Bostrom grappled with the prospect: Bostrom, 2014.

human history might go the way: Kissinger, 2018.

summoning the demon: McFarland, 2014.

worse than nukes: D’Orazio, 2014.

the worst event in the history: Molina, 2017.

rear-ending parked emergency vehicles: Stewart, 2018; Damiani, 2018.

people often overestimate: Zhang and Dafoe, 2019.

Robots Can Now Read Better than Humans: Cuthbertson, 2018.

Computers Are Getting Better than Humans at Reading: Pham, 2018.

Stanford Question Answering Dataset: Rajpurkar, Zhang, Lopyrev, and Liang, 2016.

AI that can read a document: Linn, 2018.

Facebook introduced a bare-bones proof-of-concept program: Weston, Chopra, and Bordes, 2015.

Facebook Thinks It Has Found: Oremus, 2016.

Facebook AI Software Learns: Rachel Metz, 2015.

the public has come to believe: Zhang and Dafoe, 2019.

A startup company we are fond of, Zipline: Lardieri, 2018.

ImageNet, a library: Deng et al., 2009.

chess player AlphaZero: Silver et al., 2018.

Google Duplex: Leviathan, 2018.

diagnosing skin cancer: Esteva et al., 2017.

predicting earthquake aftershocks: Vincent, 2018d.

detecting credit card fraud: Roy et al., 2018.

used in art: Lecoutre, Negrevergne, and Yger, 2017.

and music: Briot, Hadjeres, and Pachet, 2017.

deciphering speech: Zhang, Chan, and Jaitly, 2017.

labeling photos: He and Deng, 2017.

organizing people’s news feeds: Hazelwood et al., 2017.

to identify plants: Matchar, 2017.

enhance the sky in your photos: Hall, 2018.

colorize old black-and-white pictures: He et al., 2018.

epic battle for talent: Metz, 2017.

sold out in twelve minutes: Falcon, 2018.

France, Russia, Canada, and China: Fabian, 2018.

China alone is planning: Herman, 2018.

McKinsey Global Institute estimates: Bughin et al., 2018.

performs two types of analysis: Kintsch and van Dijk, 1978; Rayner, Pollatsek, Ashby, and Clifton, 2012.

they can’t understand the news: Marcus and Davis, 2018.

The technology just isn’t mature yet: Romm, 2018.

Taking me from Cambridge: Lippert, Gruley, Inoue, and Coppola, 2018; Romm, 2018; Marshall, 2017.

Google Duplex: Statt, 2018.

could handle just three things: Leviathan, 2018.

nothing but restaurant reservations: Callahan, 2019.

Thirty thousand people a year: Wikipedia, “List of Countries by Traffic-Related Death Rate.”

narrow AI tends to get flummoxed: Marcus, 2018a; Van Horn and Perona, 2017.

central principle of social psychology: Ross, 1977.

a chatbot called Eliza: Weizenbaum, 1966.

People who knew very well: Weizenbaum, 1976, 189–90.

In 2016, a Tesla owner: Levin and Woolf, 2016.

The car appears to have warned him: Fung, 2017.

success in closed-world tasks: McClain, 2011.

typically perform only under: Missy Cummings, email to authors, September 22, 2018.

Dota 2: Vincent, 2018.

Starcraft 2: AlphaStar Team, 2019.

almost none of its training: Vinyals, 2019.

correctly labeled by a highly touted: Vinyals, Toshev, Bengio, and Erhan, 2015.

“refrigerator filled with lots of food and drinks”: Vinyals, Toshev, Bengio, and Erhan, 2015.

Teslas repeatedly crashing into parked fire engines: Stewart, 2018.

mislead people with statistics: Huff, 1954.

suitable for some problems but not others: Müller, 2018.

what he thought AI couldn’t do: Dreyfus, 1979.

2. WHAT’S AT STAKE

A lot can go wrong: O’Neil, 2017.

Xiaoice: Thompson, 2016; Zhou, Gao, Li, and Shum, 2018.

the project was canceled: Bright, 2016. The Tay debacle has been set to verse in Davis, 2016b.

Alexas that spooked their owners: Chokshi, 2018.

iPhone face-recognition systems: Greenberg, 2017.

Poopocalypse: Solon, 2016.

hate speech detectors: Matsakis, 2018.

job candidate systems: Dastin, 2018.

ludicrous conspiracy theories: Porter, 2018; Harwell and Timberg, 2019.

sent a jaywalking ticket: Liao, 2018.

backing out of its owners’ garage: Harwell, 2018.

robotic lawnmowers have maimed: Parker, 2018.

iPhone that autocorrects: http://autocorrectfailness.com/autocorrect-fail-ness-16-im-gonnacrapholescreenshots/happy-birthday-dead-papa/.

A report from the group AI Now: Campolo, 2017.

Flash crashes on Wall Street: Seven, 2014.

an Alexa recorded a conversation: Canales, 2018.

multiple automobile crashes: Evarts, 2016; Fung, 2017.

[T]he scenario: Pinker, 2018.

the Amelia Bedelia problem: Parish, 1963.

seriously, but not always literally: Zito, 2016.

a pedestrian walkway in Miami: Mazzei, Madigan, and Hartocollis, 2018.

Sherry Turkle has pointed out: Turkle, 2017.

To take a more subtle example: Coldewey, 2018.

Machine-translation systems trained on legal documents: Koehn and Knowles, 2017.

Voice-recognition systems: Huang, Baker, and Reddy, 2014.

flamed out when the colors were reversed: Hosseini, Xiao, Jaiswal, and Poovendran, 2017.

there are blue stop signs in Hawaii: Lewis, 2016.

Judy Hoffman has shown: Hoffman, Wang, Yu, and Darrell, 2016.

Latanya Sweeney discovered: Sweeney, 2013.

in 2015, Google Photos mislabeled: Vincent, 2018a.

Professional hair style for work: O’Neil, 2016b.

In 2018, Joy Buolamwini: Buolamwini and Gebru, 2018.

IBM was the first to patch: Vincent, 2018b.

Microsoft swiftly followed suit: Corbett and Vaniar, 2018.

closer to half of professors are women: NCES, 2019.

recruiting system…was so problematic: Dastin, 2018.

The data sets used for training: Lashbrook, 2018. In fairness, the problems with bias in the medical literature toward studies involving white, male subjects long predate AI medicine.

measurably less reliable: Wilson, Hoffman, and Morgenstern, 2019.

significant fraction of the texts on the web: Venugopal, Uszkoreit, Talbot, Och, and Ganitkevitch, 2011.

a lot of allegedly high-quality human-labeled data: Dreyfuss, 2018.

people succeeded in getting Google Images: Hayes, 2018.

Sixteen years earlier: Wilson, 2011.

even if a program is written: O’Neil, 2016a.

the company removed this as a criterion: O’Neil, 2016a, 119.

DeepMind researcher Victoria Krakovna: Krakovna, 2018.

A soccer-playing robot: Ng, Harada, and Russell, 1999.

A robot that was supposed to learn: Amodei, Christiano, and Ray, 2017.

An unambitious AI tasked: Murphy, 2013.

a dairy company hired: Witten and Frank, 2000, 179–80.

Stalkers have begun using: Burns, 2017.

spammers have used: Hines, 2007.

There is little doubt: Efforts by the AI research community to promote a ban on AI-powered autonomous weapons are reviewed in Sample, 2017; and Walsh, 2018. See, for instance, Future of Life Institute, 2015.

When a very efficient technology: Eubanks, 2018, 173.

IBM, for example, managed to fix: Vincent, 2018b.

Google solved its gorilla challenge: Vincent, 2018a.

3. DEEP LEARNING, AND BEYOND

“knowledge-based” approach: Davis and Lenat, 1982; Newell, 1982.

machine learning: Mitchell, 1997.

Frank Rosenblatt built a “neural network”: Rosenblatt, 1958.

reported in The New York Times: New York Times, 1958.

Francis Crick noted: Crick, 1989.

the dark days of the 1990s and 2000s: e.g., Hinton, Sejnowski, and Poggio, 1999; Arbib, 2003.

special piece of hardware known as a GPU: Barlas, 2015.

applied to neural networks since the early 2000s: Oh and Jung, 2004.

A revolution came in 2012: Krizhevsky, Sutskever, and Hinton, 2012.

Hinton’s team scored 84 percent correct: Krizhevsky, Sutskever, and Hinton, 2012.

reached 98 percent: Gershgorn, 2017.

Hinton and some grad students formed a company: McMillan, 2013.

bought a startup called DeepMind: Gibbs, 2014.

subjects of news articles: Joachims, 2002.

structures of proteins: Hua and Sun, 2001.

Probabilistic models: Murphy, 2012.

vital for the success of IBM’s Watson: Ferrucci et al., 2010.

genetic algorithms have been used: Linden, 2002.

playing video games: Wilson, Cussat-Blanc, Luga, and Miller, 2018, 5.

Pedro Domingos’s book: Domingos, 2015.

traffic routing algorithms used by Waze: Bardin, 2018.

used a mix of classical AI techniques: Ferrucci et al., 2010.

more than $80 billion: Garrahan, 2017.

a set of experiments in the 1950s: Hubel and Wiesel, 1962.

Neocognitron: Fukushima and Miyake, 1982.

Later books by Jeff Hawkins and Ray Kurzweil: Hawkins and Blakeslee, 2004; Kurzweil, 2013.

backbone of deep learning: LeCun, Bengio, and Hinton, 2015.

In their influential 1969 book: Minsky and Papert, 1969.

Over the subsequent two decades: A number of people have been credited with the independent discovery of versions of backpropagation, including Henry Kelley in 1960, Arthur Bryson in 1961, Stuart Dreyfus in 1962, Bryson and Yu-Chi Ho in 1969, Seppo Linnainmaa in 1970, Paul Werbos in 1974, Yann LeCun in 1984, D. B. Parker in 1985, and David Rumelhart, Geoffrey Hinton, and Ronald Williams in 1986. See Wikipedia, “Backpropagation”; Russell and Norvig, 2010, 761; and LeCun, 2018.

backpropagation: Rumelhart, Hinton, and Williams, 1986.

A technique called convolution: LeCun and Bengio, 1995.

It would have taken: Nandi, 2015.

some important technical tweaks: Srivastava, Hinton, Krizhevsky, Sutskever, and Salakhutdinov, 2014; Glorot, Bordes, and Bengio, 2011.

sometimes more than a hundred: He, Zhang, Ren, and Sun, 2016.

deep learning has radically improved: Lewis-Kraus, 2016.

yielded markedly better translations: Bahdanau, Cho, and Bengio, 2014.

transcribe speech and label photographs: Zhang, Chan, and Jaitly, 2017; He and Deng, 2017.

turn your landscape into a Van Gogh: Gatys, Ecker, and Bethge, 2016.

colorizing old pictures: Iizuka, Simo-Serra, and Ishikawa, 2016.

unsupervised learning: Chintala and LeCun, 2016.

Atari video games: Mnih et al., 2015.

later on Go: Silver et al., 2016; Silver et al., 2017.

If a typical person: Ng, 2016.

his research a dozen years earlier: Marcus, 2001.

Realistically, deep learning: Marcus, 2012b.

three core problems: Marcus, 2018a.

AlphaGo required 30 million games: Silver, 2016. Slide 18.

why neural networks work: Bottou, 2018.

A woman talking on a cell phone: Rohrbach, Hendricks, Burns, Darrell, and Saenko, 2018.

a three-dimensional turtle: Athalye, Engstrom, Ilyas, and Kwok, 2018.

tricked neural nets into thinking: Karmon, Zoran, and Goldberg, 2018.

psychedelic pictures of toasters: Brown, Mané, Roy, Abadi, and Gilmer, 2017.

deliberately altered stop sign: Evtimov et al., 2017.

twelve different tasks: Geirhos et al., 2018.

deep learning has trouble recognizing: Alcorn et al., 2018.

systems that work on the SQuAD task: Jia and Liang, 2017.

Another study showed how easy: Agrawal, Batra, and Parikh, 2016.

translate that from Yoruba: Greg, 2018. This was widely reported, and verified by the authors.

deep learning just ain’t that deep: Marcus, 2018a. Alex Irpan, a software engineer at Google, has made similar points with respect to deep reinforcement learning: Irpan, 2018.

DeepMind’s entire system falls apart: Kansky et al., 2017.

tiny bits of noise shattered performance: Huang, Papernot, Goodfellow, Duan, and Abbeel, 2017.

deep [neural networks] tend to learn: Jo and Bengio, 2017.

nowhere close to being a reality: Wiggers, 2018.

“long tail” problem: Piantadosi, 2014; Russell, Torralba, Murphy, and Freeman, 2008.

this picture: https://pxhere.com/en/photo/1341079.

2013 annual list of breakthrough technologies: Hof, 2013.

In the immortal words of Law 31: Akin’s Laws of Spacecraft Design. https://spacecraft.ssl.umd.edu/akins_laws.html.

4. IF COMPUTERS ARE SO SMART, HOW COME THEY CAN’T READ?

Google Talk to Books: Kurzweil and Bernstein, 2018.

Google’s astounding new search tool: Quito, 2018.

encoding the meanings of sentences: An earlier technique, Latent Semantic Analysis, also converted natural language expressions into vectors. Deerwester, Dumais, Furnas, Landauer, and Harshman, 1990.

Yet when we asked: Experiment carried out by the authors, April 19, 2018.

Almanzo turned to Mr. Thompson: Wilder, 1933.

deep, broad, and flexible: Dyer, 1983; Mueller, 2006.

Today would have been Ella Fitzgerald’s: Levine, 2017.

getting machines to understand stories: Norvig, 1986.

how machines could use “scripts”: Schank and Abelson, 1977.

PageRank algorithm: Page, Brin, Motwani, and Winograd, 1999.

“What is the capital of Mississippi?”: This and “What is 1.36 euros in rupees?” were experiments carried out by the authors, May 2018.

“Who is currently on the Supreme Court?”: Experiments carried out by the authors, May 2018.

“When was the first bridge ever built?”: Experiment carried out by the authors, August 2018. The passage that Google retrieved is from Ryan, 2001–2009.

off by thousands of years: The Arkadiko bridge, in Greece, built around 1300 BC, is still standing. But that is a sophisticated stone arch bridge; human beings undoubtedly constructed more primitive, and less permanent, bridges for centuries or millennia before that.

directions to the nearest airport: Bushnell, 2018.

On a recent drive that one of us took: Experiment carried out May 2018.

the world’s first computational knowledge engine: WolframAlpha Press Center, 2009. The home page for WolframAlpha is https://www.wolframalpha.com.

But the limits of its understanding: Experiments with WolframAlpha carried out by the authors, May 2018.

titles of Wikipedia pages: Chu-Carroll et al., 2012.

When we looked recently: Search carried out May 2018. The demo of IBM Watson Assistant is at https://watson-assistant-demo.ng.bluemix.net/.

As the New York Times Magazine article: Lewis-Kraus, 2016.

If you give Google Translate the French sentence: Experiment carried out by the authors, August 2018. Ernest Davis maintains a website with a small collection of mistakes made by leading machine translation programs on sentences that should be easy: https://cs.nyu.edu/faculty/davise/papers/GTFails.html.

We humans know all sorts of things: Hofstadter, 2018.

when we asked Google Translate: Experiment conducted by the authors, August 2018. Google Translate also makes the corresponding mistake in translating the same sentence into German, Spanish, and Italian.

build up a cognitive model: Kintsch and van Dijk, 1978.

object file: Kahneman, Treisman, and Gibbs, 1992.

5. WHERE’S ROSIE?

sometimes even falling over: IEEE Spectrum, 2015, 0:30.

a robot opening one particular doorknob: Glaser, 2018.

leave a trail of banana peels: Boston Dynamics, 2016, 1:25–1:30.

pet-like home robots: For instance, Sony released an updated version of its robot dog Aibo in spring 2018: Hornyak, 2018.

“driverless” suitcases: Brady, 2018.

tiny amounts of computer hardware: The first generation Roomba, released in 2002, used a computer with 256 bytes of writable memory. That’s not a typo. That’s about one billionth as much memory as an iPhone. Ulanoff, 2002.

robots that can safely wander the halls: Veloso, Biswas, Coltin, and Rosenthal, 2015.

a robotic butler: Lancaster, 2016.

SpotMini, a sort of headless robotic dog: Gibbs, 2018.

Atlas robot: Boston Dynamics, 2017; Boston Dynamics, 2018a; Haridy, 2018.

that parkour video: CNBC, 2018.

WildCat: Boston Dynamics, 2018c.

BigDog: Boston Dynamics, 2018b.

MIT’s Sangbae Kim: Kim, Laschi, and Trimmer, 2013.

Robots from the iRobot company: Brooks, 2017b.

OODA loop: Wikipedia, “OODA Loop.”

accurate to within about ten feet: Kastrenakes, 2017.

Simultaneous Localization And Mapping: Thrun, 2007.

devise a complex plan: Mason, 2018.

making good progress on motor control: OpenAI blog, 2018; Berkeley CIR, 2018.

too much automation: Allen, 2018.

PR2 fetching a beer from a refrigerator: Willow Garage, 2010.

Even the fridge was specially arranged: Animesh Garg, email to authors, October 24, 2018.

its first fatal accident: Evarts, 2016.

6. INSIGHTS FROM THE HUMAN MIND

causal entropic forces: Wissner-Gross and Freer, 2013.

walk upright, use tools: Bot Scene, 2013.

broad applications: Wissner-Gross, 2013.

figured out a ‘law’: Ball, 2013a. Ball somewhat revised his views in a later blog, Ball, 2013b.

TED gave Wissner-Gross a platform: Wissner-Gross, 2013.

In suggesting that causal entropy: Marcus and Davis, 2013.

gone on to other projects: See Wissner-Gross’s website: http://www.alexwg.org.

behaviorism became all the rage: Watson, 1930; Skinner, 1938.

induce precise, mathematical causal laws: Skinner, 1938.

there is no one way the mind works: Firestone and Scholl, 2016.

Humans are flawed in many ways: Marcus, 2008.

roughly 86 billion neurons: Herculano-Houzel, 2016; Marcus and Freeman, 2015.

trillions of synapses: Kandel, Schwartz, and Jessell, 1991.

hundreds of distinct proteins: O’Rourke, Weiler, Micheva, and Smith, 2012.

150 distinctly identifiable brain areas: Amunts and Zilles, 2015.

a vast and intricate web of connections: Felleman and van Essen, 1991; Glasser et al., 2016.

Unfortunately, nature seems unaware: Ramón y Cajal, 1906.

book review written in 1959: Chomsky, 1959.

an effort to explain human language: Skinner, 1957.

hand-crafted machinery: Silver et al., 2016; Marcus, 2018b.

The fundamental challenges: Geman, Bienenstock, and Doursat, 1992.

Kahneman divides human cognitive process: Kahneman, 2011.

the terms reflexive and deliberative: Marcus, 2008.

society of mind: Minsky, 1986, 20.

Howard Gardner’s ideas: Gardner, 1983.

Robert Sternberg’s triarchic theory: Sternberg, 1985.

evolutionary and developmental psychology: Barkow, Cosmides, and Tooby, 1996; Marcus, 2008; Kinzler and Spelke, 2007.

requires a different subset of our brain resources: Braun et al., 2015; Preti, Bolton, and Van De Ville, 2017.

Nvidia’s 2016 model of driving: Bojarski et al., 2016.

training end-to-end from pixels: Mnih et al., 2015.

could not get a similar approach to work for Go: Silver et al., 2016.

fruit flies of linguistics: Pinker, 1999.

Part of Gary’s PhD work: Marcus et al., 1992.

true intelligence is a lot more: Heath, 2018.

The essence of language, for Chomsky: Chomsky, 1959.

thought vectors: Devlin, 2015.

Word2Vec: Mikolov, Sutskever, Chen, Corrado, and Dean, 2013.

product search on Amazon: Ping et al., 2018.

If you take the vector: Devlin, 2015.

If they can’t capture individual words: These and other limitations of word embeddings are discussed in Levy, in preparation.

You can’t cram the meaning: Mooney is quoted with expletives deleted in Conneau et al., 2018.

Take a look at this picture: Lupyan and Clark, 2015.

One classic experiment: Carmichael, Hogan, and Walter, 1932.

a vision system was fooled: Vondrick, Khosla, Malisiewicz, and Torralba, 2012.

If we run these by themselves: Experiment carried out by the authors on Amazon Web Services, August 2018.

language tends to be underspecified: Piantadosi, Tily, and Gibson, 2012.

you can easily imagine: Rips, 1989.

a robin is a prototypical bird: Rosch, 1973.

the Yale psychologist Frank Keil: Keil, 1992.

concepts that are embedded in theories: Murphy and Medin, 1985; Carey, 1985.

a rich understanding of causality: Pearl and Mackenzie, 2018.

Vigen compiled a whole book: Vigen, 2015.

But to the deep learning system: Pinker, 1997; Marcus, 2001.

Mendel himself was initially ignored: Judson, 1980.

Individual genes are in fact levers: Marcus, 2004.

Piaget’s questions: Piaget, 1928.

but the answers he proposed: Gelman and Baillargeon, 1983; Baillargeon, Spelke, and Wasserman, 1985.

humans are likely born understanding: Spelke, 1994; Marcus, 2018b.

as Kant argued two centuries earlier: Kant, 1781/1998.

some aspects of language are also: Pinker, 1994.

expectations about what language might sound like: Shultz and Vouloumanos, 2010.

the results haven’t been nearly as impressive: Hermann et al., 2017.

without human knowledge: Silver et al., 2017.

nothing intrinsic to do with deep learning: Marcus, 2018b.

The claim that human knowledge: Marcus, 2018b.

We need a new generation of AI researchers: Darwiche, 2018.

LeCun argued forcefully: LeCun et al., 1989.

7. COMMON SENSE, AND THE PATH TO DEEP UNDERSTANDING

first started calling attention to it: McCarthy, 1959.

ten facts that NELL had recently learned: These results are from a test of NELL carried out by the authors on May 28, 2018.

ConceptNet: Havasi, Pustejovsky, Speer, and Lieberman, 2009.

The English sentences are then automatically converted: Singh et al., 2002.

Artificial Intelligence Meets Natural Stupidity: McDermott, 1976.

VirtualHome: Puig et al., 2018. The VirtualHome project can be found at https://www.csail.mit.edu/research/virtualhome-representing-activities-programs.

Schank’s work on scripts: Schank and Abelson, 1977.

but not in others: Similar objections were raised in Dreyfus, 1979.

The work has been both painstaking and difficult: Davis, 2017 is a recent survey of this work. Davis, 1990 and van Harmelen, Lifschitz, and Porter, 2008 are earlier book-length studies.

The largest effort in the field: The CYC project was announced in Lenat, Prakash, and Shepherd, 1985. A book-length progress report was published in 1990: Lenat and Guha, 1990. No comprehensive account has been published since.

millions of carefully encoded facts: Matuszek et al., 2005.

External articles written about it: Conesa, Storey, and Sugumaran, 2010.

taxonomy, the kind of categorization: Collins and Quillian, 1969.

WordNet: Miller, 1995.

the medical taxonomy SNOMED: Schulz, Suntisrivaraporn, Baader, and Boeker, 2009.

many other taxonomies are not: One attempt to deal with vague entities and relations is fuzzy logic, developed by Lotfi Zadeh: Zadeh, 1987.

it’s hard to define: Wittgenstein, 1953.

a lot of what you need to know: Woods, 1975; McDermott, 1976.

some alternative that does similar work: For example, it is possible to define variants of semantic networks whose meaning is as precisely defined as logical notation. Brachman and Schmolze, 1989; Borgida and Sowa, 1991.

frameworks for many different aspects: Davis, 2017.

time, space, and causality are fundamental: Kant, 1781/1998. Steven Pinker argues for a similar view in The Stuff of Thought: Pinker, 2007.

The ruthless attorney Rosalind Shays: Pinker, 1997, 314.

the evolution of galaxies: Benger, 2008.

the flow of blood cells: Rahimian et al., 2010.

the aerodynamics of helicopters: Padfield, 2008.

simulations just won’t work: The limits of simulation for AI are discussed at length in Davis and Marcus, 2016.

SpotMini: Boston Dynamics, 2016.

“reality gap”: Mouret and Chatzilygeroudis, 2017.

not every inference: Sperber and Wilson, 1986.

the frame problem: Pylyshyn, 1987.

automated reasoning: Lifschitz, Morgenstern, and Plaisted, 2008.

All human knowledge is uncertain: Russell, 1948, 307.

“core” systems that Spelke has emphasized: Davis, 1990; van Harmelen, Lifschitz, and Porter, 2008.

8. TRUST

as long as any one of the five was still running: Tomayko, 1988, 100.

Elon Musk claimed for years: Hawkins, 2018.

San Francisco’s cable cars: Cable Car Museum, undated.

“white hat hackers” were able: Greenberg, 2015.

Yet it is fairly easy to block or spoof: Tullis, 2018.

the Russian government has hacked: Sciutto, 2018.

a perfect target for cybercriminals: Mahairas and Beshar, 2018.

The problem of [combining] traditional software: Léon Bottou email to the authors, July 19, 2018.

the Turing test: Turing, 1950.

not particularly useful: Hayes and Ford, 1995.

alternatives to the Turing test: Marcus, Rossi, and Veloso, 2016. See also Reddy, Chen, and Manning, 2018; Wang et al., 2018; and the Allen Institute for AI website, https://allenai.org/.

language comprehension: Levesque, Davis, and Morgenstern, 2012.

inferring physical and mental states: Rashkin, Sap, Allaway, Smith, and Choi, 2018.

understanding YouTube videos: Paritosh and Marcus, 2016.

elementary science: Schoenick, Clark, Tafjord, Turney, and Etzioni, 2016; Davis, 2016a.

robotic abilities: Ortiz, 2016.

transfer those skills to other games: Chaplot, Lample, Sathyendra, and Salakhutdinov, 2016.

Static Driver Verifier: Wikipedia, “Driver Verifier.”

computerized control program for the Airbus: Souyris, Wiels, Delmas, and Delseny, 2009.

verify that the collision avoidance programs: Jeannin et al., 2015.

two fatal accidents involving the Boeing 737 Max: Levin and Suhartono, 2019.

“move fast and break things”: Baer, 2014.

“technical debt”: Sculley et al., 2014.

hundreds of millions of numerical parameters: See, for example, Vaswani et al., 2017, table 3, or Canziani, Culurciello, and Paszke, 2017, figure 2.

explainable AI: Gunning, 2017; Lipton, 2016.

Three Laws of Robotics: Asimov, 1942.

What kind of harm or injury: Leben, 2018.

moral dilemmas: Wallach and Allen, 2010.

the one that Gary introduced: Marcus, 2012a.

No general code of ethics: Sartre, 1957.

Bostrom’s widely discussed paper-clip example: This was introduced in Bostrom, 2003. It has since then been extensively discussed by many writers, in particular by Nick Bostrom, Eliezer Yudkowsky, and their collaborators. Our discussion here is based primarily on Bostrom, 2014; Yudkowsky, 2011; Bostrom and Yudkowsky, 2014; and Soares, Fallenstein, Armstrong, and Yudkowsky, 2015.

The AI does not hate you: Yudkowsky, 2011.

summoning the demon: McFarland, 2014.

unaware of the consequences of its actions: Similar arguments are presented in Pinker, 2018, and in Brooks, 2017c.

people who expect AIs to be harmless: Yudkowsky, 2011.

EPILOGUE

using deep learning to track wildlife: Norouzzadeh et al., 2018.

predict aftershocks of earthquakes: Vincent, 2018d.

at a crucial moment Rick Deckard: Harford, 2018.

PowerPoint annoyances: Swinford, 2006.

if Peter Diamandis is right: Diamandis and Kotler, 2012.

the vision of Oscar Wilde: Wilde, 1891.

SUGGESTED READINGS

Weka Data Mining Software: https://www.cs.waikato.ac.nz/ml/weka/.

Pytorch: https://pytorch.org.

fast.ai: https://www.fast.ai/.

TensorFlow: https://www.tensorflow.org/.

Zach Lipton’s interactive Jupyter notebooks: https://github.com/zackchase/mxnet-the-straight-dope.

Andrew Ng’s popular machine-learning course: https://www.coursera.org/learn/machine-learning.