Preface
1Natural languages are the languages that people speak, and the term is intended to contrast those languages with computer programming languages.
Chapter 1
1D. A. Ferrucci (2012). Introduction to “This is Watson.” In IBM Journal of Research and Development (Vol. 56, Issues 3–4, pp. 1:1-1:15). doi.org/10.1147/JRD.2012.2184356.
2web.archive.org/web/20170809113829/www.ibm.com/watson/.
3www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat.
4www.theverge.com/2017/7/17/15980954/elon-musk-ai-regulation-existential-threat.
5N. Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford: Oxford University Press, 2014).
6See www.amazon.com/Artificial-General-Intelligence-Cognitive-Technologies/dp/354023733X/ref=sr_1_3?keywords=artificial+general+intelligence&qid=1578398066&sr=8-3 for a more in-depth definition and explanation of AGI, and see content.sciendo.com/view/journals/jagi/10/2/article-p1.xml for an explanation of why a precise definition is so difficult. Another term for AGI is human-level AI, which is defined as “one that can carry out most human professions at least as well as a typical human.” Some prognosticators, including Nick Bostrom, also refer to superintelligence, which is a level of intelligence that exceeds AGI/HLAI (N. Bostrom, Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press, 2014). Stuart Russell, who wrote one of the most prominent AI textbooks, presents several arguments for the position that once AGI is achieved, superintelligence will follow (S. Russell, Human Compatible: Artificial Intelligence and the Problem of Control. New York: Viking Press, 2019). One reason is that an AGI machine could read 150 million books in a few hours. However, since this book explains why AGI/HLAI is extremely unlikely to occur, there will be no further discussion of superintelligence in this book.
7Defining what constitutes a single “task” can be difficult. There is considerable research on multitask learning that I argue still fits the definition of narrow AI. See www.AIPerspectives.com/tl for a discussion of this and related research topics.
8In 1979, University of Michigan professor Ben Kuipers defined commonsense knowledge as “knowledge about the structure of the external world that is acquired and applied without concentrated effort by any normal human that allows him or her to meet the everyday demands of the physical, spatial, temporal and social environment with a reasonable degree of success.” B. J. Kuipers, On Representing Commonsense Knowledge. In N. V. Findler (Ed.), Associative Networks: The Representation and Use of Knowledge by Computers (New York: Academic Press, 1979).
9A. Koestler, The Ghost in the Machine (New York: Random House, 1982).
10www.analyticsinsight.net/autonomous-ai-feared-yes-say-60-brits-survey/.
Chapter 2
1U. Farooq, “The second drone age: How Turkey defied the U.S. and became a killer drone power,” The Intercept, May 14, 2019. theintercept.com/2019/05/14/turkey-second-drone-age/.
2STM, “KARGU - Autonomous Tactical Multi-Rotor Attack UAV,” April 28, 2018. Accessed March 18, 2020 [online video]. Available: www.youtube.com/watch?time_continue=1&v=Oqv9yaPLhEk.
3M. Weisgerber, “US is moving too slowly to harness drones and AI, former SOCOM commander says,” Defense One, November 14, 2019. www.defenseone.com/technology/2019/11/us-moving-too-slowly-harness-drones-and-ai-former-socom-commander-says/161306/?oref=d-channelriver.
4J. Keller, “Should some smart munitions be classified as unmanned aerial vehicles (UAVs)?” Military & Aerospace Electronics, March 24, 2015. www.militaryaerospace.com/computers/article/16713739/should-some-smart-munitions-be-classified-as-unmanned-aerial-vehicles-uavs.
5Missile Defense Project, “Terminal High Altitude Area Defense (THAAD),” Missile Threat, Center for Strategic and International Studies, June 14, 2018. missilethreat.csis.org/system/thaad/;
Missile Defense Project, “Aegis Ballistic Missile Defense,” Missile Threat, Center for Strategic and International Studies, June 14, 2018. missilethreat.csis.org/system/aegis/.
6L. Pascu, “Turkey adds autonomous facial recognition kamikaze drones to military portfolio,” Biometric Update, November 11, 2019. www.biometricupdate.com/201911/turkey-adds-autonomous-facial-recognition-kamikaze-drones-to-military-portfolio.
7Future of Life Institute, “Autonomous weapons: An open letter from AI & robotics researchers,” July 28, 2015. futureoflife.org/open-letter-autonomous-weapons/?cn-reloaded=1.
8www.youtube.com/watch?v=9CO6M2HsoIA
9P. Scharre, Army of None: Autonomous Weapons and the Future of War, 1st ed. (New York: W. W. Norton & Company, 2019). See also P. Scharre, Autonomous Weapons and Operational Risk (Washington, DC: Center for a New American Security, 2016). s3.amazonaws.com/files.cnas.org/documents/CNAS_Autonomous-weapons-operational-risk.pdf?mtime=20160906080515.
10beta.sam.gov/opp/121247da9c19467a8446f9e1258f9bb0/view?keywords=skyborg&sort=-modifiedDate&index=opp&is_active=true&page=1.
11Congressional Research Service, “Artificial intelligence and national security,” April 26, 2018. www.everycrsreport.com/files/20180426_R45178_27fad5077138df0a45f2bf5dc00f4bb61c9a4e88.pdf.
12P. Tucker, “SecDef: China is exporting killer robots to the Mideast,” Defense One, November 5, 2019. www.defenseone.com/technology/2019/11/secdef-china-exporting-killer-robots-mideast/161100/.
13M. Weisgerber, “The increasingly automated hunt for mobile missile launchers,” Defense One, April 28, 2016. www.defenseone.com/technology/2016/04/increasingly-automated-hunt-mobile-missile-launchers/127864/.
14J. Harper, “Artificial intelligence to sort through ISR data glut,” National Defense, January 16, 2018. www.nationaldefensemagazine.org/articles/2018/1/16/artificial-intelligence-to--sort-through-isr-data-glut.
15M. Weisgerber, “The Pentagon’s new algorithmic warfare cell gets its first mission: Hunt ISIS,” Defense One, May 14, 2017. www.defenseone.com/technology/2017/05/pentagons-new-algorithmic-warfare-cell-gets-its-first-mission-hunt-isis/137833/.
16www.defenseone.com/technology/2019/10/us-army-wants-reinvent-tank-warfare-ai/160720/?oref=d-channelriver.
17P. Tucker, “The Pentagon’s AI ethics draft is actually pretty good,” Defense One, October 31, 2019. www.defenseone.com/technology/2019/10/pentagons-ai-ethics-draft-actually-pretty-good/161005/.
18Russian News Agency, “Russia’s security chief calls for regulating use of new technologies in military sphere,” April 24, 2019. tass.com/defense/1055346.
19See this article for an example of how Bosch has developed a system to prevent adversarial attacks on car cameras. It uses a second camera at a different angle to confirm the first camera’s view: J. Rundle and J. McCormick, “Bosch deploys AI to prevent attacks on cars’ electronic systems,” Wall Street Journal, January 6, 2020. www.wsj.com/articles/bosch-deploys-ai-to-prevent-attacks-on-cars-electronic-systems-11578306600?mod=searchresults&page=2&pos=6.
20I. Rouf et al., “Security and privacy vulnerabilities of in-car wireless networks: A tire pressure monitoring system case study,” in Proceedings of the 19th USENIX Security Symposium, 2010. See this Intel paper for a discussion of other car vulnerabilities: M. Zhao, “Advanced driver assistant system: Threats, requirements and security solutions,” Intel Labs, White Paper, 2016 [online]. Available: www.semagarage.com/assets/pdf/advanced-driver-assistant-system-paper.pdf.
21M. Weisgerber, “New tech aims to tell pilots when their plane has been hacked,” Defense One, October 4, 2019. www.defenseone.com/business/2019/10/new-app-tells-pilots-when-their-plane-has-been-hacked/160378/?oref=d-channelriver.
22For more information on adversarial attacks on deep learning systems, see www.AIPerspectives.com/aa.
23D. Goodman, H. Xin, W. Yang, X. Junfeng, and Z. Huan, “Advbox: A toolbox to generate adversarial examples that fool neural networks,” arXiv preprint arXiv:2001.05574. arxiv.org/pdf/2001.05574.pdf.
24C. Ross and I. Swetlitz, “IBM’s Watson supercomputer recommended ‘unsafe and incorrect’ cancer treatments, internal documents show,” Stat+, July 25, 2018. www.statnews.com/wp-content/uploads/2018/09/IBMs-Watson-recommended-unsafe-and-incorrect-cancer-treatments-STAT.pdf.
25P. G. Neumann, “Forum on risks to the public in computers and related systems,” The Risks Digest, vol. 8, no. 75, 1989.
26Center for Homeland Defense and Security, “Report of the President’s Commission on the Accident at Three Mile Island: The Need for Change: The Legacy of TMI,” 1979. www.hsdl.org/?abstract&did=769775.
27U.S.-Canada Power System Outage Task Force, “Final Report on the August 14, 2003 Blackout in the United States and Canada: Causes and Recommendations,” April 2004. www.energy.gov/sites/prod/files/oeprod/DocumentsandMedia/BlackoutFinal-Web.pdf.
28Commodity Futures Trading Commission and U.S. Securities and Exchange Commission, “Findings Regarding the Market Events of May 6, 2010,” Report of the Staffs of the CFTC and SEC to the Joint Advisory Committee on Emerging Regulatory Issues, 2010.
29S. Gandel, “Why Knight lost $440 million in 45 minutes,” Fortune, August 3, 2012. fortune.com/2012/08/02/why-knight-lost-440-million-in-45-minutes/.
30K. German, “Ethiopia blames 737 Max design in interim crash report,” CNet, February 10, 2020. www.cnet.com/news/boeings-737-max-8-all-about-the-aircraft-flight-ban-and-investigations/.
31There are two basic categories of issues that can occur with machine learning systems, known as the inner and outer alignment problems. For more information, see E. Hubinger, C. Van Merwijk, V. Mikulik, J. Skalse, and S. Garrabrant (2019). Risks from Learned Optimization in Advanced Machine Learning Systems. arXiv:1906.01820v2: arxiv.org/pdf/1906.01820.pdf.
32C. Murphy, G. E. Kaiser, and M. Arias (2006). A Framework for Quality Assurance of Machine Learning Applications. Columbia University Computer Science Technical Reports, CUCS-034-06. doi.org/10.7916/D8MP5B4B.
33B. M. Lake, T. D. Ullman, J. B. Tenenbaum, and S. J. Gershman (2017). “Building machines that learn and think like people.” Behavioral and Brain Sciences, 40. doi.org/10.1017/S0140525X16001837.
34See this Scientific American article about concerns that rushing AI technology to market could negatively impact patients: L. Szabo, “Artificial intelligence is rushing into patient care—and could raise risks,” Scientific American, December 24, 2019. www.scientificamerican.com/article/artificial-intelligence-is-rushing-into-patient-care-and-could-raise-risks/.
35www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device.
36This behavior is apparently common: forums.tesla.com/forum/forums/sudden-and-erratic-braking-autopilot.
37Excerpted from www.tesla.com/support/autopilot in March 2020.
38B. Vlasic and N. E. Boudette, “Self-driving Tesla was involved in fatal crash, U.S. says,” New York Times, June 30, 2016. www.nytimes.com/2016/07/01/business/self-driving-tesla-fatal-crash-investigation.html. For a scary 2018 fatality report, see also M. R. Dickey, “Tesla model X sped up in autopilot mode seconds before fatal crash, according to NTSB,” TechCrunch, June 8, 2018. techcrunch.com/2018/06/07/tesla-model-x-sped-up-in-autopilot-mode-seconds-before-fatal-crash-according-to-ntsb/.
39J. Kaplan, R. Glon, S. Edelstein, and L. Chang, “Deadly Uber crash was ‘entirely avoidable’ had the driver not been watching Hulu,” Digital Trends, June 22, 2018. www.digitaltrends.com/cars/self-driving-uber-crash-arizona/.
40S. Edelstein, “Uber self-driving cars were reportedly in 37 crashes before fatal incident,” Digital Trends, November 7, 2019. www.digitaltrends.com/cars/uber-self-driving-cars-were-in-37-crashes-before-fatal-incident-report-says/.
41M. McFarland, “Uber death leaves questions about self-driving car liability unanswered,” CNN Business, March 8, 2019. edition.cnn.com/2019/03/08/tech/uber-arizona-death-criminal/index.html. It is possible, however, that the driver will face criminal charges.
42R. Randazzo, “Uber crash death in Tempe: A closer look,” AZCentral, March 17, 2019. www.azcentral.com/story/news/local/tempe/2019/03/17/uber-crash-death-who-blame-tempe-arizona-rafaela-vasquez-elaine-herzberg/3157481002/.
43K.-F. Lee, AI Superpowers: China, Silicon Valley, and the New World Order, 1st ed. (Boston: Houghton Mifflin Harcourt, 2018).
44D. Shepardson, “House panel to hold hearing on future of self-driving cars,” Reuters, February 8, 2020. www.reuters.com/article/us-autos-autonomous-congress/house-panel-to-hold-hearing-on-future-of-self-driving-cars-idUSKBN2012K2.
45www.ntsb.gov/news/press-releases/Pages/NR20200225.aspx.
46www.iihs.org/news/detail/automated-systems-need-stronger-safeguards-to-keep-drivers-focused-on-the-road.
47saferoads.org/wp-content/uploads/2020/03/AV-Crash-List-with-Photos-February-2020.pdf.
48https://flsenate.gov/Laws/Statutes/2018/0316.085.
49A. Davies, “Self-driving cars flock to Arizona, land of good weather and no rules,” Wired, August 10, 2017. www.wired.com/story/mobileye-self-driving-cars-arizona/.
50KPMG International, “2019 Autonomous Vehicles Readiness Index,” 2019. assets.kpmg/content/dam/kpmg/xx/pdf/2019/02/2019-autonomous-vehicles-readiness-index.pdf.
51www.daimler.com/documents/innovation/other/safety-first-for-automated-driving.pdf.
52This moral dilemma is known as the Trolley Problem. Several research studies have shown that many people would consider it morally acceptable to redirect a runaway trolley away from five victims and toward a single victim. However, it would not be morally acceptable to push a single person in front of a trolley to stop it from hitting five people: F. Cushman and L. Young, “The psychology of dilemmas and the philosophy of morality,” Ethical Theory and Moral Practice, vol. 12, pp. 9–24, 2009, doi: 10.1007/s10677-008-9145-3.
MIT’s Media Lab did a large-scale, 233-country survey on who should be spared (pets versus humans, staying on course versus swerving, passengers versus pedestrians, more versus fewer lives, men versus women, pedestrians crossing legally versus jaywalkers, the fit versus the less fit, and higher versus lower social status). They found several interesting regional differences: E. Awad, S. Dsouza, R. Kim, J. Schulz, J. Henrich, A. Shariff, J. F. Bonnefon, and I. Rahwan (2018). “The Moral Machine experiment.” Nature, 563(7729), 59–64. doi.org/10.1038/s41586-018-0637-6.
Chapter 3
1Synced. (2019). “NeurIPS 2019 | The Numbers.” Retrieved from medium.com/syncedreview/neurips-2019-the-numbers-c1808fba9480.
2B. F. Green, A. K. Wolf, C. Chomsky, and K. Laughery (1961). “Baseball: An automatic question-answerer.” Paper presented at the May 9-11, 1961, Western Joint IRE-AIEE-ACM Computer Conference.
3D. G. Bobrow (1964). Natural Language Input for a Computer Problem Solving System. PhD dissertation, MIT.
4H. A. Simon (1965). The Shape of Automation for Men and Management (1st ed.). Harper & Row.
5Y. Bar-Hillel (1960). “The Present Status of Automatic Translation of Languages.” Advances in Computers. doi.org/10.1016/S0065-2458(08)60607-5.
6ALPAC (1966). Language and Machines: Computers in Translation and Linguistics. Washington, D.C.: National Academy of Sciences, Publication 1416.
7J. Lighthill (1973). Artificial Intelligence: A General Survey. Science Research Council. www.chilton-computing.org.uk/inf/literature/reports/lighthill_report/p001.htm.
8F. Rosenblatt (1958). “The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain.” Psychological Review, 65(6). In 1951, prior to Rosenblatt’s work, Marvin Minsky and Dean Edmonds created what many people consider to be the first neural network at Harvard University. It had three thousand vacuum tubes and simulated a rat searching a maze for food. Their work was based on this 1943 paper that characterized the human brain as a neural network: W. S. McCulloch and W. Pitts (1943). “A logical calculus of the ideas immanent in nervous activity.” The Bulletin of Mathematical Biophysics, 5(4), 115–133. doi.org/10.1007/BF02478259.
9“NEW NAVY DEVICE LEARNS BY DOING; Psychologist Shows Embryo of Computer Designed to Read and Grow Wiser.” New York Times, 1958; www.nytimes.com/1958/07/08/archives/new-navy-device-learns-by-doing-psychologist-shows-embryo-of.html.
10M. Minsky and S. Papert (1969). Perceptrons: An Introduction to Computational Geometry. MIT Press. archive.org/details/Perceptrons/page/n1/mode/2up.
11For details about how these systems worked, see www.AIPerspectives.com/cd.
12G. DeJong (1979). “Prediction and substantiation: A new approach to natural language processing.” Cognitive Science. doi.org/10.1016/S0364-0213(79)80009-9. See also DeJong (1977) and Schank et al. (1980).
13R. C. Schank and C. K. Riesbeck (1981). Inside Computer Understanding: Five Programs Plus Miniatures (1st ed.). Psychology Press. doi.org/10.2307/414141. See also www.AIPerspectives.com/cd for more details.
14B. G. Buchanan and E. H. Shortliffe (1984). Rule-Based Expert Systems: MYCIN. Reading, MA: Addison Wesley. people.dbmi.columbia.edu/~ehs7001/Buchanan-Shortliffe-1984/MYCINBook.htm.
15M. Stefik (1985). “Strategic computing at DARPA: Overview and assessment.” Communications of the ACM, 28(7), 690–704. https://doi.org/10.1145/3894.3896.
16S. Russell (2019). Human compatible: Artificial intelligence and the problem of control. Viking.
17S. Baker (2011). Final Jeopardy: The Story of Watson, the Computer That Will Transform Our World. Mariner Books.
18S. Pham (2017). “China wants to build a $150 billion AI industry.” CNN Business, money.cnn.com/2017/07/21/technology/china-artificial-intelligence-future/index.html?iid=EL.
19R. Gigova (2017). “Who Putin thinks will rule the world,” CNN, edition.cnn.com/2017/09/01/world/putin-artificial-intelligence-will-rule-world/index.html.
20P. Olson (2019). “Nearly Half Of All ‘AI Startups’ Are Cashing In On Hype.” Forbes, www.forbes.com/sites/parmyolson/2019/03/04/nearly-half-of-all-ai-startups-are-cashing-in-on-hype/#14f4709fd022.
21I. Bogost (2017). “‘Artificial Intelligence’ Has Become Meaningless” The Atlantic, www.theatlantic.com/technology/archive/2017/03/what-is-artificial-intelligence/518547/.
22J. Dean, D. Patterson, and C. Young (2018). “A New Golden Age in Computer Architecture: Empowering the Machine-Learning Revolution.” IEEE Micro, 38(2), 21–29. doi.org/10.1109/MM.2018.112130030.
23Synced. (2019). “NeurIPS 2019 | The Numbers.” Retrieved from medium.com/syncedreview/neurips-2019-the-numbers-c1808fba9480.
24A. Cuthbertson (2018). “Robots can now read better than humans, putting millions of jobs at risk.” Newsweek, www.newsweek.com/robots-can-now-read-better-humans-putting-millions-jobs-risk-781393.
Chapter 4
1J. Manyika et al., “Jobs lost, jobs gained: What the future of work will mean for jobs, skills, and wages,” McKinsey & Company, November 2017. www.mckinsey.com/featured-insights/future-of-work/jobs-lost-jobs-gained-what-the-future-of-work-will-mean-for-jobs-skills-and-wages.
2W. Buffett, “Warren Buffett shares the secrets to wealth in America,” Time, January 4, 2018. time.com/5087360/warren-buffett-shares-the-secrets-to-wealth-in-america/.
3R. Kurzweil, “Future of Intelligence,” YouTube Video: www.youtube.com/watch?v=9Z06rY3uvGY&index=3&list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4.
4A 2015 Ball State study found that 87.8 percent of manufacturing job losses between 2000 and 2010 were due to productivity gains resulting from robotics and other forms of manufacturing automation: S. Devaraj and M. J. Hicks, “The myth and reality of manufacturing in America,” Ball State University, Center for Business Economic Research, June 2015.
5There were 9,302 retail store closings in 2019, a 59 percent jump from 2018, compared to 4,392 openings, according to CNN: N. Meyersohn, “More than 9,300 stores closed in 2019,” CNN Business, December 19, 2019. edition.cnn.com/2019/12/19/business/2019-store-closings-payless-gymboree/index.html. Some of these closings can be attributed to Amazon, but others are due to business models that could not compete with other physical retailers.
6It’s not just human jobs. The equine population in the US declined from 21 million horses in 1900 to 3 million in 1960: E. R. Kilby, “The Demographics of the U.S. Equine Population,” in The State of the Animals IV: 2007, 1st ed., D. J. Salem and A. N. Rowan, Eds. (Washington, DC: Humane Society Press, 2007) 175–205.
7J. Liang, B. Ramanauskas, and A. Kurenkov, “Job loss due to AI—How bad is it going to be?” Skynet Today, February 4, 2019. www.skynettoday.com/editorials/ai-automation-job-loss.
8Deloitte LLP, “From brawn to brains: The impact of technology on jobs in the UK,” 2015. www2.deloitte.com/uk/en/pages/growth/articles/from-brawn-to-brains--the-impact-of-technology-on-jobs-in-the-u.html.
9P. Scharre, “Killer apps: The real dangers of an AI arms race,” Foreign Affairs, May/June 2019. www.foreignaffairs.com/articles/2019-04-16/killer-apps.
10J. Murawski, “AI adoption fuels demand for data-labeling services,” WSJ, July 29, 2019. www.wsj.com/articles/ai-adoption-fuels-demand-for-data-labeling-services-11564392602.
11R. Perrault et al., “The AI index 2019 annual report,” AI Index Steering Committee, Human-Centered AI Institute, Stanford University, Stanford, CA, December 2019. [Online]. Available: hai.stanford.edu/sites/g/files/sbiybj10986/f/ai_index_2019_report.pdf.
12According to the Bureau of Labor Statistics, in 2019 there were 1.7 million truck drivers.
13This is particularly true for truck drivers, since technology will have to replace the months of training drivers receive before they are allowed on the road.
14M. Miller, “AI’s implications for productivity, wages, and employment,” PC Mag, November 21, 2017. sea.pcmag.com/feature/18333/ais-implications-for-productivity-wages-and-employment.
15Interview with James Manyika in M. Ford (2018). Architects of Intelligence: The truth about AI from the people building it. Packt Publishing.
16Gartner Group, “Gartner says by 2020, artificial intelligence will create more jobs than it eliminates,” December 13, 2017. www.gartner.com/en/newsroom/press-releases/2017-12-13-gartner-says-by-2020-artificial-intelligence-will-create-more-jobs-than-it-eliminates.
17J. Bughin, “Why AI isn’t the death of jobs,” MIT Sloan Management Review, 2018. sloanreview.mit.edu/article/why-ai-isnt-the-death-of-jobs/.
18R. Molla and E. Stewart, “How will 2020 Democrats deal with jobs eliminated by artificial intelligence?” Vox, December 5, 2019. www.vox.com/policy-and-politics/2019/12/3/20965464/2020-presidential-candidates-jobs-automation-ai.
19R. Rubin, “The ‘robot tax’ debate heats up,” Wall Street Journal, January 8, 2020. www.wsj.com/articles/the-robot-tax-debate-heats-up-11578495608?mod=searchresults&page=1&pos=8.
Chapter 5
1See www.AIPerspectives.com/sl for descriptions and discussion of the various supervised learning algorithms.
2The ‘…’ signifies that there are many additional rows in the table.
3Linear regression is a well-known technique that has been around for approximately one hundred years but is now considered supervised learning and AI. There are dozens of other supervised learning techniques. See www.AIPerspectives.com/sl for more information.
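Since this note singles out linear regression as the canonical supervised learning technique, a minimal sketch may help. The square-footage figures and sale prices below are invented for illustration, and the closed-form least-squares fit shown is just one of many ways to train such a model:

```python
# Minimal sketch of supervised learning via linear regression on one
# feature (square footage -> sale price). For a single feature, the
# least-squares line has a simple closed form, so no library is needed.

def fit_line(xs, ys):
    """Return the slope and intercept that minimize squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Training table: each row pairs a feature value with a known label.
square_feet = [1000, 1500, 2000, 2500]
sale_price = [200_000, 290_000, 405_000, 510_000]

slope, intercept = fit_line(square_feet, sale_price)

# The learned function can now predict the price of an unseen house.
predicted = slope * 1750 + intercept
```

A real system such as Zillow’s works on the same principle, only with hundreds of feature columns instead of one.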
4This does not mean that Zillow will necessarily make better sale price predictions than a human real estate agent. Despite the vast number of columns in the Zillow training table, a human agent might consider unique house features that are not in any of the Zillow columns. If these unique house features affect the selling price, the human real estate agent can theoretically make a better sale price prediction than Zillow.
5Again, the “…” rows signify that the actual table is much larger, containing many additional rows.
6The learned function is termed a model.
7See www.AIPerspectives.com/co for examples of temperature and house price computer code.
8Z. A. Din, H. Venugopalan, J. Park, A. Li, W. Yin, H. Mai, Y. J. Lee, S. Liu, and S. T. King (2020). Boxer: Preventing fraud by scanning credit cards. Proceedings of the 29th USENIX Security Symposium. https://www.usenix.org/conference/usenixsecurity20/presentation/din.
9Technically, most retailers store customer data in multiple tables, not just a single table. They then join the data together into one training table to apply supervised learning analyses. They also use unsupervised learning techniques, such as the ones I will discuss in the next chapter.
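The joining step described in this note can be sketched in a few lines of plain Python. The two tables, their columns, and all values here are invented for illustration; real retailers would do this with a database join over far larger tables:

```python
# Sketch: joining two per-customer tables into one training table.
# Both tables are keyed by a shared customer id.
customers = {
    1: {"age": 34, "region": "west"},
    2: {"age": 51, "region": "east"},
}
purchases = {  # aggregates computed from a separate transactions table
    1: {"orders": 12, "returned": 1},
    2: {"orders": 3, "returned": 0},
}

# One joined row per customer: columns from both tables side by side,
# ready to serve as a single training table for supervised learning.
training_table = [
    {"customer_id": cid, **customers[cid], **purchases[cid]}
    for cid in customers
]
```

In SQL terms this is simply `SELECT ... FROM customers JOIN purchases ON customer_id`.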
10Supervised learning techniques for credit card fraud detection are often supplemented by unsupervised learning techniques (see Chapter 7) that have some success in detecting anything out of the ordinary and hence work to some degree for sniffing out new fraud patterns.
11D. Murray and K. Durrell (2000). “Inferring demographic attributes of anonymous Internet users.” Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 1836, 7–20. doi.org/10.1007/3-540-44934-5_1.
12Y. Bachrach, M. Kosinski, T. Graepel, P. Kohli, and D. Stillwell (2012). “Personality and patterns of Facebook usage.” Proceedings of the 4th Annual ACM Web Science Conference, WebSci’12. doi.org/10.1145/2380718.2380722.
13D. Quercia, M. Kosinski, D. Stillwell, and J. Crowcroft (2011). “Our Twitter profiles, our selves: Predicting personality with Twitter.” Proceedings - 2011 IEEE International Conference on Privacy, Security, Risk and Trust and IEEE International Conference on Social Computing, PASSAT/SocialCom 2011, 180–185. doi.org/10.1109/PASSAT/SocialCom.2011.26.
14J. Hirsh, S. Kang, and G. Bodenhausen (2012). “Personalized Persuasion: Tailoring Persuasive Appeals to Recipients’ Personality Traits.” Psychological Science, 23(6), 578–581. doi.org/10.1177/0956797611436349. S. C. Matz, M. Kosinski, G. Nave, and D. J. Stillwell (2017). “Psychological targeting as an effective approach to digital mass persuasion.” Proceedings of the National Academy of Sciences of the United States of America, 114(48), 12714–12719. doi.org/10.1073/pnas.1710966114.
15C. Wylie (2018). “Why I broke the Facebook data story—and what should happen now.” The Guardian. www.theguardian.com/uk-news/2018/apr/07/christopher-wylie-why-i-broke-the-facebook-data-story-and-what-should-happen-now.
16M. Scott (2019). “Cambridge Analytica did work for Brexit groups, says ex-staffer.” Politico. www.politico.eu/article/cambridge-analytica-leave-eu-ukip-brexit-facebook/.
17“How Trump Consultants Exploited the Facebook Data of Millions,” New York Times (n.d.). Retrieved April 14, 2020, from www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html.
18M. Calabresi (2017). “Inside Russia’s social media war on America.” Time. time.com/4783932/inside-russia-social-media-war-america/. Kosinski, when asked about his role in the CA scandal, said, “This is not my fault. I did not build the bomb. I only showed that it exists.” In other words, like many tools, Kosinski’s technique can be used for good or evil. It is not the tool we have to worry about; it’s the people using the tool.
19statweb.stanford.edu/~tibs/stat315a/glossary.pdf.
20For example, see this Accenture report released in November 2019: www.accenture.com/_acnmedia/Thought-Leadership-Assets/PDF-2/Accenture-Built-to-Scale-PDF-Report.pdf#zoom=50.
Chapter 6
1World Economic Forum, “Reports,” reports.weforum.org/global-risks-2013/section-seven-online-only-content/data-explorer/?doing_wp_cron=1584625l44.1610040664672851562500 (accessed March 19, 2020).
2S. Vosoughi, D. Roy, and S. Aral, “The spread of true and false news online,” Science, vol. 359, no. 6380, pp. 1146–1151, 2018, doi:10.1126/science.aap9559.
3R. M. Everett, J. R. C. Nurse, and A. Erola, “The anatomy of online deception: What makes automated text convincing?” in Proceedings of the 31st Annual ACM Symposium on Applied Computing 2016, pp. 1115–1120, doi: 10.1145/2851613.2851813.
4E. Weise, “Russian fake accounts showed posts to 126 million Facebook users,” USA Today, October 30, 2017. www.usatoday.com/story/tech/2017/10/30/russian-fake-accounts-showed-posts-126-million-facebook-users/815342001/.
5J. Murawski, “Lawmakers call for tough punishment for ‘deepfakes,’” Wall Street Journal, June 14, 2019. www.wsj.com/articles/lawmakers-call-for-tough-punishment-for-deepfakes-11560504601?mod=searchresults&page=1&pos=17.
6A. Mosseri, “Working to stop misinformation and false news,” Facebook for Media, April 7, 2017. www.facebook.com/facebookmedia/blog/working-to-stop-misinformation-and-false-news.
7S. Rahman, P. Tully, and L. Foster, “Attention is all they need: Combatting social media information operations with neural language models,” FireEye, November 14, 2019. www.fireeye.com/blog/threat-research/2019/11/combatting-social-media-information-operations-neural-language-models.html.
8J. Snow, “Can AI win the war against fake news?” MIT Technology Review, December 13, 2017. www.technologyreview.com/s/609717/can-ai-win-the-war-against-fake-news/.
9R. Gutman, “A web tool that lets people choose their own ‘sources of truth,’” The Atlantic, June 29, 2018. www.theatlantic.com/technology/archive/2018/06/robhat-labs-surfsafe-fake-news-images/564101/.
11L. Calhoun, “Just launched: Google News app uses artificial intelligence to select stories, stop fake news,” Inc., May 23, 2018. www.inc.com/lisa-calhoun/new-google-news-app-uses-ai-to-select-stories-stop-fake-news.html.
12J. Conditt, “Google partners with fact-checking network to fight fake news,” Engadget, October 26, 2017. www.engadget.com/2017/10/26/google-fake-news-international-fact-checking-network/.
13J. O’Malley, “How Microsoft is using AI to tackle fake news,” Gizmodo, May 14, 2018. www.gizmodo.co.uk/2018/05/how-microsoft-is-using-ai-to-tackle-fake-news/.
14T. Schuster, D. Shah, Y. J. S. Yeo, D. Roberto Filizzola Ortiz, E. Santus, and R. Barzilay, “Towards debiasing fact verification models,” Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, November 2019, doi: 10.18653/v1/d19-1341. T. Schuster, R. Schuster, D. J. Shah, and R. Barzilay, “The limitations of stylometry for detecting machine-generated fake news,” arXiv preprint arXiv:1908.09805, 2020.
16Bloomberg, “How Faking Videos Became Easy—And Why That’s So Scary,” Fortune.com, September 11, 2018. fortune.com/2018/09/11/deep-fakes-obama-video/.
17R. Metz, “The number of deepfake videos online is spiking. Most are porn,” CNN Business, October 7, 2019. edition.cnn.com/2019/10/07/tech/deepfake-videos-increase/index.html.
18A. Romano, “Reddit finally bans its forum for creepy fake celebrity porn,” Vox, February 8, 2018. www.vox.com/culture/2018/2/8/16987098/reddit-bans-deepfakes-celebrity-face-swapping-porn.
19J. Damiani, “A voice deepfake was used to scam a CEO out of $243,000,” Forbes, September 3, 2019. www.forbes.com/sites/jessedamiani/2019/09/03/a-voice-deepfake-was-used-to-scam-a-ceo-out-of-243000/#5223a18c2241.
20As they did in the movie Cats.
21M. Kelly, “Facebook begins telling users who try to share distorted Nancy Pelosi video that it’s fake,” The Verge, May 25, 2019. www.theverge.com/2019/5/25/18639754/facebook-nancy-pelosi-video-fake-clip-distorted-deepfake.
22J. Andrews, “Fake news is real — A.I. is going to make it much worse,” CNBC, July 12, 2019. www.cnbc.com/2019/07/12/fake-news-is-real-ai-is-going-to-make-it-much-worse.html.
23N. Dufour and A. Gully, “Contributing data to deepfake detection research,” Google AI Blog, September 24, 2019. ai.googleblog.com/2019/09/contributing-data-to-deepfake-detection.html.
24M. Schroepfer, “Creating a data set and a challenge for deepfakes,” Facebook Artificial Intelligence, September 5, 2019. ai.facebook.com/blog/deepfake-detection-challenge/.
25deepfakedetectionchallenge.ai/.
26A. Rossler, D. Cozzolino, L. Verdoliva, C. Riess, J. Thies, and M. Nießner, “FaceForensics: A large-scale video dataset for forgery detection in human faces,” Visual Computing Group, March 24, 2018.
27M. Turek, “Media Forensics (MediFor),” Defense Advanced Research Projects Agency. www.darpa.mil/program/media-forensics (accessed March 19, 2020).
28For example, T. T. Nguyen, C. M. Nguyen, D. T. Nguyen, D. T. Nguyen, and S. Nahavandi, “Deep learning for deepfakes creation and detection,” arXiv preprint arXiv:1909.11573, 2019;
Y. Li and S. Lyu, “Exposing deepfake videos by detecting face warping artifacts,” arXiv preprint arXiv:1811.00656, 2019.
29R. Mama and S. Shi, “Towards deepfake detection that actually works,” Dessa, November 25, 2019. www.dessa.com/post/deepfake-detection-that-actually-works.
30S. -Y. Wang, O. Wang, R. Zhang, A. Owens, and A. Efros, “Detecting Photoshopped faces by scripting Photoshop,” in Proceedings of the IEEE International Conference on Computer Vision, 2019, doi: 10.1109/iccv.2019.01017.
33www.autoexpress.co.uk/news/352605/deepfake-software-used-aid-self-driving-car-development.
34U.S. House. 116th Congress. (2019, June 12). H.R. 3230, Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act of 2019. [Online]. Available: www.congress.gov/bill/116th-congress/house-bill/3230.
35T. Hatmaker (2017). “Saudi Arabia bestows citizenship on a robot named Sophia.” TechCrunch. techcrunch.com/2017/10/26/saudi-arabia-robot-citizen-sophia/?renderMode=ie11.
36D. Gershgorn, “Inside the mechanical brain of the world’s first robot citizen,” Quartz, November 12, 2017. qz.com/1121547/how-smart-is-the-first-robot-citizen/.
37Carlquintanilla. (2017, Oct 25). Our @andrewrsorkin, interviewing “Sophia” the robot, of Hanson Robotics. [Twitter Post]. Retrieved from https://twitter.com/carlquintanilla/status/923238264533811200.
38J. Vincent, “Facebook’s head of AI really hates Sophia the Robot (and with good reason),” The Verge, January 18, 2018. www.theverge.com/2018/1/18/16904742/sophia-the-robot-ai-real-fake-yann-lecun-criticism.
39This was shown in a psychology experiment in which human subjects were asked questions on a computer screen either by displaying text or a human face asking the question. When it was a human face, they elected not to answer several personal questions: L. Sproull, M. Subramani, S. Kiesler, J. H. Walker, and K. Waters, “When the interface is a face,” Human-Computer Interact., vol. 11, no. 2, pp. 97–124, 1996, doi: 10.1207/s15327051hci1102_1.
40S. Tibken, “Samsung’s new neon project is finally unveiled: It’s a humanoid AI chatbot,” CNet, January 7, 2020. www.cnet.com/news/samsung-neon-project-finally-unveiled-humanoid-ai-chatbot-artificial-humans/.
41www.katedarling.org/publications.
42J. Hsu, “Real soldiers love their robot brethren,” Live Science, May 21, 2009. www.livescience.com/5432-real-soldiers-love-robot-brethren.html.
44“HitchBOT, the hitchhiking robot, beheaded in the U.S.” CNN Business, August 4, 2015. Accessed: March 20, 2020. [Online Video]. Available: edition.cnn.com/videos/tech/2015/08/04/hitchbot-robot-beheaded-philadelphia-orig-pkg.cnn.
45P. Parke, “Is it cruel to kick a robot dog?” CNN Business, February 13, 2015. edition.cnn.com/2015/02/13/tech/spot-robot-dog-google/index.html.
46J. J. Li, W. Ju, and B. Reeves, “Touching a mechanical body: Tactile contact with body parts of a humanoid robot is physiologically arousing,” J. Human-Robot Interact., 2017, doi: 10.5898/jhri.6.3.li.
47K. Darling, “Extending legal protection to social robots: The effects of anthropomorphism, empathy, and violent behavior towards robotic objects,” in Robot Law, 2016.
48C. de Lange, “Sherry Turkle: ‘We’re losing the raw, human part of being with each other,’” The Guardian, May 5, 2013. www.theguardian.com/science/2013/may/05/rational-heroes-sherry-turkle-mit.
49M. Delvaux, “Report with recommendations to the commission on civil law rules on robotics,” January 27, 2017. www.europarl.europa.eu/doceo/document/A-8-2017-0005_EN.html.
50For example, P. Sullins, “When is a robot a moral agent?,” Mach. Ethics, vol. 6, pp. 23–30, 2006, doi: 10.1017/CBO9780511978036.010.
51J. J. Bryson, M. E. Diamantis, and T. D. Grant, “Of, for, and by the people: The legal lacuna of synthetic persons,” Artif. Intell. Law, vol. 25, pp. 273–291, 2017, doi:10.1007/s10506-017-9214-9.
Chapter 7
1I have oversimplified here in order to make the example understandable. Real-world biologists would be analyzing data on rare animals, not well-known ones like humans, horses, cats, and spiders. Also, they would likely use technical features with Latin names rather than easily understandable features such as weight and lifespan.
2The way these algorithms work is beyond the scope of this book. However, the general idea is to find a set of clusters that maximizes the distance between clusters in the one-hundred-dimensional space while minimizing the distance between the observations within each cluster. The mixing of numeric variables, such as weight and lifespan, with categorical variables, such as color, complicates the mathematics, but techniques have been worked out over the years to accommodate mixed numeric and categorical variables.
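As an illustrative sketch only, here is one way to handle the mixed-variable complication described in this note: a Gower-style distance that normalizes numeric features and uses 0/1 mismatch for categorical ones, plus a simple nearest-medoid assignment. The animals, feature values, and medoid choices are invented for illustration, not real biological data or the algorithms a researcher would actually use.

```python
# Illustrative sketch: a distance for observations that mix numeric and
# categorical features, plus a greedy nearest-medoid assignment.

def mixed_distance(a, b, num_ranges):
    """Average per-feature distance: normalized absolute difference for
    numeric features, 0/1 mismatch for categorical features (range None)."""
    d = 0.0
    for x, y, rng in zip(a, b, num_ranges):
        if rng is None:            # categorical feature
            d += 0.0 if x == y else 1.0
        else:                      # numeric feature, scaled to [0, 1]
            d += abs(x - y) / rng
    return d / len(a)

def assign_to_medoids(points, medoids, num_ranges):
    """Assign each observation to its nearest medoid (one greedy pass)."""
    clusters = {i: [] for i in range(len(medoids))}
    for p in points:
        best = min(range(len(medoids)),
                   key=lambda i: mixed_distance(p, medoids[i], num_ranges))
        clusters[best].append(p)
    return clusters

# Each observation: (weight in kg, lifespan in years, coloring)
animals = [
    (70, 79, "varied"),    # human
    (450, 28, "brown"),    # horse
    (4, 15, "varied"),     # cat
    (0.001, 1, "brown"),   # spider
]
num_ranges = (449.999, 78, None)  # observed range of each numeric feature
clusters = assign_to_medoids(animals, [animals[0], animals[3]], num_ranges)
```

Real clustering algorithms iterate this kind of assignment, re-choosing cluster centers until the within-cluster distances stop shrinking; this sketch shows only the distance computation that makes mixed variables workable.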
3A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever, “Language models are unsupervised multitask learners,” OpenAI, 2019. d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf.
4You can try it yourself at this site: talktotransformer.com/.
5This technology has other uses besides generation of fake news. For example, Google uses language models to predict what user search queries will be from the first few words the user types. For more information, see P. Nayak, “Understanding searches better than ever before,” Google Blog, October 25, 2019. www.blog.google/products/search/search-language-understanding-bert/.
6They used the Grover language model developed at the Allen Institute for AI: grover.allenai.org/.
7thegradient.pub/gpt2-and-the-nature-of-intelligence/. If you are still not convinced, take a look at this New Yorker Magazine article that describes stories generated by GPT-2 after being trained on the magazine’s vast archives: www.newyorker.com/magazine/2019/10/14/can-a-machine-learn-to-write-for-the-new-yorker.
8In fact, when sensitive data such as social security numbers are part of the training data and a user inputs “my social security number is . . . ,” the resulting language model sometimes completes the sentence with an actual social security number from the training data. This can be a security issue, though it is easily avoided by keeping sensitive data out of language model training data. For more information, see N. Carlini, C. Liu, Ú. Erlingsson, J. Kos, and D. Song, “The secret sharer: Evaluating and testing unintended memorization in neural networks,” in Proceedings of the 28th USENIX Security Symposium, 2019.
9T. Mikolov, K. Chen, G. Corrado, and J. Dean, “Efficient estimation of word representations in vector space,” arXiv preprint arXiv:1301.3781, 2013.
10J. R. Firth (1957). “A Synopsis of Linguistic Theory 1930-1955.” In F. Palmer (Ed.), Selected Papers of J. R. Firth. Longman, Harlow. www.bibsonomy.org/bibtex/20b627387b63b652898cb5ecf03f87356/evabl444.
11It is only close to “queen” and not exact because the space is high dimensional and mostly empty (the technical term is sparse), and the word embeddings capture only some of the meaning of the words.
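As an illustrative aside, the vector arithmetic behind the classic king − man + woman ≈ queen example can be sketched in a few lines. The three-dimensional vectors below are invented for illustration; real word embeddings have hundreds of dimensions learned from text, which is exactly why, as this note says, the match is close rather than exact.

```python
import math

# Toy word vectors, invented for illustration only.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.1, 0.9],
    "apple": [0.5, 0.0, 0.0],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def analogy(a, b, c):
    """Word nearest to vector(a) - vector(b) + vector(c), excluding inputs."""
    target = [x - y + z for x, y, z in zip(vectors[a], vectors[b], vectors[c])]
    candidates = (w for w in vectors if w not in (a, b, c))
    return max(candidates, key=lambda w: cosine(vectors[w], target))

print(analogy("king", "man", "woman"))  # → queen
```

The result word is chosen as the nearest neighbor of the computed point, not an exact hit, mirroring the approximation described above.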
12L. Lucy and J. Gauthier, “Are distributional representations ready for the real world? Evaluating word vectors for grounded perceptual meaning,” Proceedings of the First Workshop on Language Grounding for Robotics, August 2017, doi: 10.18653/v1/w17-2810.
13J. Zhu, “Bing delivers its largest improvement in search experience using Azure GPUs,” Microsoft Azure, November 18, 2019. azure.microsoft.com/en-us/blog/bing-delivers-its-largest-improvement-in-search-experience-using-azure-gpus/.
14Frameworks Natural Language Processing Team, “Can Global Semantic Context Improve Neural Language Models?,” Apple Machine Learning Research, September 2018. machinelearning.apple.com/research/can-global-semantic-context-improve-neural-language-models.
15B. Marr, “Supervised v unsupervised machine learning—what’s the difference?” Forbes, March 16, 2017. www.forbes.com/sites/bernardmarr/2017/03/16/supervised-v-unsupervised-machine-learning-whats-the-difference/#7efb5ed1485d.
Chapter 8
1National Highway Traffic Safety Administration & U.S. Department of Transportation (2015). Traffic Safety Facts Crash Stats: Critical Reasons for Crashes Investigated in the National Motor Vehicle Crash Causation Survey. crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/812115.
2Self-driving trucks are being built by a different set of companies (e.g., Otto, Embark, Ike, Thor Trucks, Kodiak, TuSimple, Peloton Technology, and Pronto.ai) than self-driving cars. While much of the technology needed for self-driving trucks is the same as or similar to the technology for self-driving cars, there are significant differences. For example, trucks have bigger blind spots, and cameras/lidar can be mounted at higher points than on cars. Some analysts argue that self-driving trucks will make their appearance even before self-driving cars. There is a shortage of truck drivers, and truck drivers are highly paid, which perhaps makes the cost of self-driving technology less important for trucks than for cars; makers of self-driving truck technology can therefore use more expensive components, such as military- and aerospace-grade sensors, that may be too costly for self-driving cars. All that said, most truck drivers have years of experience that enables them to avoid accidents, and that may make self-driving trucks harder to develop than self-driving cars. See also this article arguing that, even if companies are successful in building autonomous trucks, the number of jobs that will be lost is greatly overstated: onezero.medium.com/self-driving-trucks-wont-kill-millions-of-jobs-ef56ca978f77.
3T. Mogg, “Self-driving baggage tractor is the latest smart tech for airports,” Digital Trends, December 9, 2019. www.digitaltrends.com/cool-tech/self-driving-baggage-tractor-is-the-latest-smart-tech-for-airports/.
4AVs have tremendous potential for improving safety. Over a million people die every year in car accidents worldwide according to the Association for Safe International Road Travel, “Road safety facts—Association for Safe International Road Travel,” Asirt. 2019. www.asirt.org/safe-travel/road-safety-facts/.
5C. Neiger, “5 future car technologies that truly have a chance,” How Stuff Works, December 23, 2011. auto.howstuffworks.com/under-the-hood/trends-innovations/5-future-car-technologies3.htm.
6A. Peters, “It could be 10 times cheaper to take electric robo-taxis than to own a car by 2030,” Fast Company, May 30, 2017. www.fastcompany.com/40424452/it-could-be-10-times-cheaper-to-take-electric-robo-taxis-than-to-own-a-car-by-2030.
7T. Tatarek, J. Kronenberger, and U. Handmann, “Functionality, advantages and limits of the TeslaAutopilot,” Hochschule Ruhr West, University of Applied Sciences, Institut Informatik, Internal Report 17-04, 2017. www.handmann.net/paper/2017_ir4.pdf.
8A. Jackson (1931). “Automatic Parallel Parking System,” United States Patent, Number 1,905,717.
9D. A. Pomerleau, “ALVINN: An autonomous land vehicle in a neural network,” Adv. Neural Inf. Process. Syst., pp. 305–313, 1988.
10www.YouTube.com/watch?v=IaoIqVMd6tc&feature=youtu.be.
11ALVINN was originally the name of the supervised learning neural network software, though most histories assign the name ALVINN to the vehicle itself. The vehicle was one of a series of CMU vehicles built as part of the NavLab project, and the curve warning system was also developed as part of NavLab.
12EUREKA, “Programme for a European traffic system with highest efficiency and unprecedented safety,” 1987. www.eurekanetwork.org/project/id/45.
13PROMETHEUS video: www.YouTube.com/watch?v=JgKd_RcgYv4&feature=youtu.be
14E. Dickmanns, “The 4-d approach to visual control of autonomous systems,” in AIAA/ NASA Conf. on Intelligent Robots in Field Factory Service and Space (CIRFFSS), 1994, pp. 483–493.
15https://en.wikipedia.org/wiki/DARPA_Grand_Challenge.
16A. Davies, “An Oral History of the Darpa Grand Challenge, the Grueling Robot Race That Launched the Self-Driving Car,” Wired, August 3, 2017, https://www.wired.com/story/darpa-grand-challenge-2004-oral-history/.
17S. Thrun, M. Montemerlo, H. Dahlkamp, D. Stavens, A. Aron, J. Diebel, P. Fong, J. Gale, M. Halpenny, G. Hoffmann, K. Lau, C. Oakley, M. Palatucci, V. Pratt, P. Stang, S. Strohband, C. Dupont, L. E. Jendrossek, C. Koelen, ... P. Mahoney (2006). Stanley: The robot that won the DARPA Grand Challenge. Journal of Field Robotics, 23(9), 661–692. doi.org/10.1002/rob.20147.
18B. Bilger, “Auto Correct,” newyorker.com, November 25, 2013. https://www.newyorker.com/magazine/2013/11/25/auto-correct.
19M. Ramsey and D. MacMillan, “Carnegie Mellon reels after Uber lures away researchers,” The Wall Street Journal, May 31, 2015. www.wsj.com/articles/is-uber-a-friend-or-foe-of-carnegie-mellon-in-robotics-1433084582.
20Accounts of the actual purchase price vary, and some put it as low as $220M. In February 2017, Waymo launched a lawsuit against Uber claiming misappropriation of trade secrets, and in May 2017, Uber fired Levandowski for not cooperating with Uber’s investigation into the lawsuit. Many of the Otto engineers left for other self-driving start-ups. Waymo and Uber settled the lawsuit in February 2018, with Google receiving about $245M of Uber stock.
21https://www.tesla.com/autopilot.
22FLIR, “FLIR systems partners with veoneer for first thermal sensor-equipped production self-driving car with a leading global automaker,” October 30, 2019. www.flir.com/news-center/press-releases/flir-systems-partners-with-veoneer-for-first-thermal-sensor-equipped-production-self-driving-car-with-a-leading-global-automaker/.
23Pixels, short for picture elements, are the little dots that make up digital images.
24The stealth bombers use a plastic covering that does not conduct electricity to make them invisible to radar.
25J. Stewart, “Why Tesla’s Autopilot Can’t See a Stopped Firetruck,” wired.com, August 27, 2018. www.wired.com/story/tesla-autopilot-why-crash-radar/.
26en.Wikipedia.org/wiki/Dead_reckoning.
27A. Noureldin, T. B. Karamat, and J. Georgy, “Inertial Navigation System,” Fundamentals of Inertial Navigation, Satellite-based Positioning and Their Integration (Berlin: Springer, 2013), 125–166.
28Since it is not unusual to see twenty-year-old cars on the road, even if we get to the point where every new car is outfitted with V2V technology, it could still be over twenty years before every car on the road has the technology. The good news is that despite the unsettled standards issue, many automakers are already starting to build one of the two standards into their cars.
29KPMG International, “2019 Autonomous Vehicles Readiness Index,” 2019. assets.kpmg/content/dam/kpmg/xx/pdf/2019/02/2019-autonomous-vehicles-readiness-index.pdf.
30An explanation of the Tesla technology can be found in two videos of talks by Director of AI Andrej Karpathy. The first video provides an overview of the technology: www.youtube.com/watch?time_continue=40&v=hx7BXih7zx8&feature=emb_logo. The second video is more technical and explains that the networks aren’t completely separate and actually share some layers in what is known as a multitask supervised learning architecture: www.youtube.com/watch?v=IHH47nZ7FZU.
31Karpathy noted that development of even a basic component such as a stop sign detector is difficult. Stop signs can be mounted on a pole, a car, a school bus, a gate, or held by a construction worker. Stop signs can also be occluded by the car in front or by foliage.
33All Tesla vehicles have an internet connection that is used to connect each car to servers at Tesla headquarters and to both update the car software and transfer data from the car to the Tesla servers. It is not required, however, for safe operation of the vehicle.
34For a great blog post on how Lyft’s 3D maps work, see medium.com/lyftlevel5/https-medium-com-lyftlevel5-rethinking-maps-for-self-driving-a147c24758d6.
35www.youtube.com/watch?v=y57wwucbXR8&feature=emb_rel_end.
36lexfridman.com/tesla-autopilot-miles-and-vehicles/.
37www.youtube.com/watch?time_continue=40&v=hx7BXih7zx8&feature=emb_logo.
38The simulator, named Carcraft, is located on the Alphabet campus in Mountain View, California. A. C. Madrigal, “Inside Waymo’s secret world for training self-driving cars: An exclusive look at how alphabet understands its most ambitious artificial intelligence project,” The Atlantic, August 23, 2017. www.theatlantic.com/technology/archive/2017/08/inside-waymos-secret-testing-and-simulation-facilities/537648/.
39Waymo, “Building maps for a self-driving car,” Medium, December 14, 2016. medium.com/waymo/building-maps-for-a-self-driving-car-723b4d9cd3f4.
40C. Urmson et al., “High speed navigation of unrehearsed terrain: Red team technology for grand challenge 2004,” The Robotics Institute, Carnegie Mellon University, June 1, 2004. www.ri.cmu.edu/pub_files/pub4/urmson_christopher_2004_1/urmson_christopher_2004_1.pdf.
J. Levinson and S. Thrun, “Robust vehicle localization in urban environments using probabilistic maps,” in Proceedings - IEEE International Conference on Robotics and Automation, 2010, pp. 4372–4378, doi: 10.1109/ROBOT.2010.5509700.
41As of July 2018, Waymo had logged 8 million self-driving miles. T. Randall and M. Bergen, “Waymo’s self-driving cars are near: Meet the teen who rides one every day,” Bloomberg, August 1, 2018. www.bloomberg.com/news/features/2018-07-31/inside-the-life-of-waymo-s-driverless-test-family.
42A group of MIT researchers have detailed a system for using low-dimensional topographical maps on rural roads: T. Ort, L. Paull, and D. Rus, “Autonomous vehicle navigation in rural environments without detailed prior maps,” in Proceedings - IEEE International Conference on Robotics and Automation, 2018, doi: 10.1109/ICRA.2018.8460519.
43www.youtube.com/watch?time_continue=40&v=hx7BXih7zx8&feature=emb_logo.
44B. Hood, “Hackers Tricked Self-Driving Teslas Into Accelerating 50 MPH With a Piece of Tape,” robbreport.com, February 19, 2020. www.robbreport.com/motors/cars/hackers-manipulate-teslas-50-mph-2900057/.
45T. Huddleston, “These Chinese hackers tricked Tesla’s Autopilot into suddenly switching lanes,” cnbc.com, April 3, 2019. www.cnbc.com/2019/04/03/chinese-hackers-tricked-teslas-autopilot-into-switching-lanes.html.
46G. Rapier, “Why trees have wreaked havoc on Uber’s self-driving program,” markets.businessinsider.com, November 27, 2018. https://markets.businessinsider.com/news/stocks/trees-are-wreaking-havoc-on-uber-self-driving-car-software-2018-11-1027759300.
47J. Kaplan, R. Glon, S. Edelstein, and L. Chang, “Deadly Uber crash was ‘entirely avoidable’ had the driver not been watching Hulu,” digitaltrends.com, June 22, 2018. https://www.digitaltrends.com/cars/self-driving-uber-crash-arizona/.
48C. Sullenberger, Sully: My Search for What Really Matters (New York: HarperCollins, 2016).
49B. Lewis, “Student intentionally causes a car crash, and it may have saved a woman’s life,” wtsp.com, January 17, 2019. https://www.wtsp.com/article/news/clearwater-senior-saves-seizing-driver-with-heroic-car-crash/67-aebb4a34-2eea-4fc2-bb69-4254b804c4a2.
50R. Stumpf, “Autopilot Blamed for Tesla’s Crash Into Overturned Truck,” thedrive.com, June 1, 2020. https://www.thedrive.com/news/33789/autopilot-blamed-for-teslas-crash-into-overturned-truck. Another Tesla fatality occurred in 2019 when a Tesla crashed into a different truck. The crash sheared off the top of the Tesla.
51R. Nazarov, “Russian wake-up call from winter autonomous-vehicle (AV) trials,” Urgent Communications, February 18, 2020. urgentcomm.com/2020/02/18/russian-wake-up-call-from-winter-autonomous-vehicle-av-trials/.
52Insurance Institute for Highway Safety, “Self-driving vehicles could struggle to eliminate most crashes,” iihs.org, June 4, 2020. https://www.iihs.org/news/detail/self-driving-vehicles-could-struggle-to-eliminate-most-crashes.
53D. Newcomb, “Don’t believe the self-driving car crash hype,” PC Mag, March 31, 2017. www.pcmag.com/opinions/dont-believe-the-self-driving-car-crash-hype.
54P. Keating, “Florida Mayo Clinic using autonomous vehicles to transport coronavirus tests,” foxnews.com, April 14, 2020. https://www.foxnews.com/auto/florida-mayo-clinic-autonomous-vehicles-coronavirus.
55Sidewalk delivery robots are a special case of delivery vehicles. From a timeline perspective, these vehicles are more like campus shuttles than delivery vehicles that operate on city streets. For example, Starship Technologies robots are the size of a large cooler, operate no faster than six miles per hour, and have been approved for use on university campuses and in some states in the US.
56A. J. Hawkins, “Waymo’s driverless car: Ghost-riding in the back seat of a robot taxi,” The Verge, December 9, 2019. www.theverge.com/2019/12/9/21000085/waymo-fully-driverless-car-self-driving-ride-hail-service-phoenix-arizona.
58Zoox is testing its autonomous taxis in San Francisco. In early 2020, the company posted an impressive video (venturebeat.com/2020/04/17/watch-zooxs-autonomous-car-drive-around-san-francisco-for-an-hour/) of an autonomous vehicle driving the streets of San Francisco for an hour without the human safety driver taking over at all. The video shows the Zoox car doing the following:
•Driving on Market Street when it was very busy with pedestrians
•Stopping for pedestrians in crosswalks and when making turns
•Avoiding pedestrians who walk out from behind parked cars
•Navigating around double-parked cars and delivery trucks
•Going through a tunnel that blocked the GPS signal
•Waiting for a car to back into a spot
•Driving on hilly Lombard Street
59There is always the possibility that vehicle makers simply relabel Level 2 vehicles as Level 3 vehicles and still require drivers to pay attention to the road. This would not be a true Level 3 capability.
60I am far from the first person to articulate this position. For example, self-driving car pioneer Chris Urmson predicted in 2017 that it would be thirty years or more before we eliminate the need for drivers: www.vox.com/2017/9/8/16278566/transcript-self-driving-car-engineer-chris-urmson-recode-decode.
Chapter 9
1Reinforcement learning has also been used in many other types of applications. For example, two MIT researchers used reinforcement learning to determine how to dose glioblastoma patients with the least amount of toxic anticancer drugs that would still be effective: G. Yauney and P. Shah, “Reinforcement learning with action-derived rewards for chemotherapy and clinical trial dosing regimen selection,” Proc. Mach. Learn. Res, vol. 85, 2018.
Another example is financial trading strategies; e.g., C.-Y. Huang, “Financial trading as a game: A deep reinforcement learning approach,” arXiv preprint arXiv:1807.02787, 2018.
2Applying reinforcement learning to elevator control was first discussed at the 1996 NIPS conference by Robert Crites and Andrew Barto: R. H. Crites and A. G. Barto, “Improving elevator performance using reinforcement learning,” in Neural Information Processing Systems 8, D. S. Touretzky, M. C. Mozer, and M. E. Hasselmo, Eds. (Cambridge, MA: MIT Press, 1996), pp. 1017–1023.
3A trillion-trillion is a one followed by twenty-four zeroes. The number of on/off combinations of 118 buttons is 2^118, which is roughly a three followed by thirty-five zeroes.
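The arithmetic in this note can be checked directly:

```python
# Checking this note's arithmetic: a trillion-trillion is 10**24 (a one
# followed by twenty-four zeroes), and 118 on/off buttons allow 2**118
# combinations, roughly a three followed by thirty-five zeroes.
trillion_trillion = 10 ** 24
combinations = 2 ** 118

zeroes = len(str(trillion_trillion)) - 1   # zeroes after the leading 1
digits = len(str(combinations))            # total digits in 2**118
print(zeroes, digits)  # → 24 36
```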
4In this case the state would be the location of each elevator, whether each of the thirty-eight hall call buttons is pressed, and whether each of the twenty buttons inside each of the four elevator cars is pressed.
5This is a bit of an oversimplification. There are two primary types of learned functions in reinforcement learning termed policies and Q-functions. For an overview of how each of these are learned, see www.AIPerspectives.com/rl.
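As an illustrative aside, the two kinds of learned functions named in this note can be sketched in a few lines: a Q-function maps (state, action) pairs to estimated rewards, and a policy can then simply pick the highest-valued action. The toy five-state corridor below is a hypothetical stand-in (it is not an elevator simulator), and the hyperparameters are arbitrary choices:

```python
import random

# Sketch of a tabular Q-function and the greedy policy derived from it,
# trained with standard Q-learning on a toy corridor environment.
random.seed(0)

N_STATES = 5         # positions 0..4; position 4 is the goal
ACTIONS = (-1, +1)   # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2

# Q-function: maps each (state, action) pair to an estimated return.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def policy(s):
    """A policy derived from the Q-function: act greedily."""
    return max(ACTIONS, key=lambda a: Q[(s, a)])

for _ in range(500):  # training episodes
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy exploration.
        a = random.choice(ACTIONS) if random.random() < EPS else policy(s)
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        # Standard Q-learning update rule.
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

print([policy(s) for s in range(N_STATES - 1)])  # greedy action per state
```

In methods that learn a policy directly, the policy itself (rather than the Q table) is the object being adjusted; the overview at the URL above covers that distinction.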
6Crites and Barto found that the function learned by their reinforcement learning system was superior to most industry algorithms at the time, including SECTOR, a sector-based algorithm similar to what was being used in many actual elevator systems; Dynamic Load Balancing, which attempts to equalize the load of all cars; Highest Unanswered Floor First, which gives priority to the highest floor with people waiting; Longest Queue First, which gives priority to the queue with the person who has been waiting for the longest amount of time; and Empty the System Algorithm (ESA), which searches for the fastest way to “empty the system,” assuming no new passenger arrivals.
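As a sketch of one of the baselines named in this note, Longest Queue First can be written as a one-pass assignment of free cars to the longest-waiting hall calls. The data shapes here (car names, (floor, wait-seconds) pairs) are invented for illustration and omit details like call direction:

```python
# Illustrative sketch of Longest Queue First: give the free elevator cars
# to the hall calls that have been waiting longest.

def longest_queue_first(free_cars, hall_calls):
    """Pair each free car with the longest-waiting hall call.

    hall_calls: list of (floor, seconds_waited) pairs.
    Returns a list of (car, floor) assignments.
    """
    by_wait = sorted(hall_calls, key=lambda call: call[1], reverse=True)
    return [(car, floor) for car, (floor, _) in zip(free_cars, by_wait)]

print(longest_queue_first(["A", "B"], [(3, 10.0), (7, 42.5), (5, 1.0)]))
# → [('A', 7), ('B', 3)]
```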
7Making a poor choice of reward function can result in unintended behaviors. For example, OpenAI researchers built a reinforcement learning system to learn a game named CoastRunners, in which the goal is to finish a boat race ahead of the other players, with extra points awarded for hitting targets along the route. As it turned out, the reward function put too much emphasis on hitting the targets, and the reinforcement learning system learned to drive around in a circle, hitting the same three targets over and over and never finishing the race.
J. Clark, “Faulty reward functions in the wild,” OpenAI, December 21, 2016. openai. com/blog/faulty-reward-functions/.
8Simulators are not a panacea. One big problem with simulators is that the real world is almost always different from the simulator assumptions. People will arrive at elevators in unexpected clumps. Robots that pick up objects and move them can encounter unexpected environmental conditions such as winds, uneven floors, and dogs and people. Self-driving cars can encounter unexpected conditions such as blizzards, fog, construction blockages, and deer crossing the road.
9The mathematics behind learning policy weights for a given reward function is beyond the scope of this book. For an overview of how reinforcement learning policies are learned, see www.AIPerspectives.com/po. For the most prominent textbook on reinforcement learning, see R. S. Sutton, and A. G. Barto, Reinforcement Learning: An Introduction, 2nd ed. (Cambridge, Mass.: MIT Press, 2018).
10Pong, Breakout, Enduro, Beam Rider, Q*bert, Seaquest, and Space Invaders.
11V. Mnih et al., “Playing Atari with deep reinforcement learning,” 2013. arxiv.org/pdf/1312.5602.pdf.
V. Mnih et al., “Human-level control through deep reinforcement learning,” Nature, vol. 518, pp. 529–541, 2015, doi: 10.1038/nature14236.
12Pong, Breakout, and Enduro.
13D. Silver et al., “Mastering the game of Go with deep neural networks and tree search,” Nature, vol. 529, pp. 484–489, 2016, doi: 10.1038/nature16961.
14D. Silver et al., “Mastering chess and shogi by self-play with a general reinforcement learning algorithm,” Science, 2017, doi: 10.1126/science.aar6404.
15A. Gleave, M. Dennis, C. Wild, N. Kant, S. Levine, and S. Russell (2020). “Adversarial Policies: Attacking Deep Reinforcement Learning.” Eighth International Conference on Learning Representations. adversarialpolicies.github.io/.
16For example, Sophia (www.hansonrobotics.com/sophia/), Erica (robots.ieee.org/robots/erica/), and the Geminoid line of robots (www.geminoid.jp/en/robots.html).
17www.YouTube.com/watch?v=xb93Z0QItVI.
18See this video of Boston Dynamics robots: www.YouTube.com/watch?v=rVlhMGQgDkY.
19M. Hayes, “The creepy robot dog botched a test run with a bomb squad,” OneZero, February 19, 2020. onezero.medium.com/boston-dynamics-robot-dog-got-stuck-in-sit-mode-during-police-test-emails-reveal-4c8592c7fc2.
20P. Abbeel and A. Y. Ng, “Apprenticeship learning via inverse reinforcement learning,” in Proceedings, Twenty-First International Conference on Machine Learning, ICML 2004, 2004, doi: 10.1145/1015330.1015430.
21As researchers Pieter Abbeel and Andrew Ng (Abbeel and Ng, 2004) point out, it would be very hard to define a reward function for driving because of the number of constraints (e.g., do not hit a pedestrian, stay in your lane, do not speed, obey traffic signals, and on and on).
22There are also other difficult pick-and-place problems. For example, if a robot is trained to pick up a box, ideally that training would generalize to boxes of different sizes and surfaces with different friction coefficients. Otherwise, separate training for each box and surface will be required. More generally, the goal is to train on the strategy observed in the demonstrations and to generalize that strategy so that it can be applied to states that were not observed during training.
24P. Pastor, H. Hoffmann, T. Asfour, and S. Schaal, “Learning and generalization of motor skills by learning from demonstration,” in 2009 IEEE International Conference on Robotics and Automation, pp. 763–768, 2009, doi: 10.1109/robot.2009.5152385.
25K. Muelling, J. Kober, O. Kroemer, and J. Peters, “Learning to select and generalize striking movements in robot table tennis,” in AAAI Fall Symposium - Technical Report, 2012, pp. 38–45.
26M. A. Rana, M. Mukadam, S. R. Ahmadzadeh, S. Chernova, and B. Boots, “Towards robust skill generalization: Unifying learning from demonstration and motion planning,” in Conference on Robot Learning, 2017.
27www.YouTube.com/watch?v=52QTL7v_-vo.
Chapter 10
1G. Orwell, 1984 (New York: Berkley, 2003).
2English translation is “Sharp Eyes.”
3R. Dixon, “China’s new surveillance program aims to cut crime. Some fear it’ll do much more,” Los Angeles Times, October 27, 2018. www.latimes.com/world/asia/la-fg-china-sharp-eyes-20181027-story.html. Also, according to Time magazine (Dec 2–9, 2019), the city of Chongqing has 2.58 million surveillance cameras for its 15.35 million people.
4Phys.org, “China shames jaywalkers through facial recognition,” Phys.org, June 20, 2017. phys.org/news/2017-06-china-shames-jaywalkers-facial-recognition.html.
5J. C. Hernandez, “China’s High-Tech Tool to Fight Toilet Paper Bandits,” New York Times, March 17, 2017. www.nytimes.com/2017/03/20/world/asia/china-toilet-paper-theft.html.
6L. Ong, “In Beijing, ‘Big Brother’ now sees all,” The Epoch Times, October 5, 2015.
7C. Campbell, “What the Chinese Surveillance State Means for the Rest of the World,” time.com, November 21, 2019. https://time.com/5735411/china-surveillance-privacy-issues/.
8D. Alba, “The US government will be scanning your face at 20 top airports, documents show,” Buzzfeed News, March 11, 2019. www.buzzfeednews.com/article/daveyalba/these-documents-reveal-the-governments-detailed-plan-for.
10K. Rector and A. Knezevich, “Maryland’s use of facial recognition software questioned by researchers, civil liberties advocates”, Baltimore Sun, October 18, 2016. www.baltimoresun.com/news/crime/bs-md-facial-recognition-20161017-story.html.
11R. Brandom, “How facial recognition helped police identify the Capital Gazette shooter,” The Verge, June 29, 2018. www.theverge.com/2018/6/29/17518364/facial-recognition-police-identify-capital-gazette-shooter.
12G. L. Goodwin, “Face recognition technology: DOJ and FBI have taken some actions in response to GAO recommendations to ensure privacy and accuracy, but additional work remains,” June 4, 2019. www.gao.gov/assets/700/699489.pdf.
13www.banfacialrecognition.com/.
14D. Harwell, “Both Democrats and Republicans blast facial-recognition technology in a rare bipartisan moment,” Washington Post, May 23, 2019. www.washingtonpost.com/technology/2019/05/22/blasting-facial-recognition-technology-lawmakers-urge-regulation-before-it-gets-out-control/.
15D. Harwell, “Parkland school turns to experimental surveillance software that can flag students as threats,” Washington Post, February 18, 2019. www.washingtonpost.com/technology/2019/02/13/parkland-school-turns-experimental-surveillance-software-that-can-flag-students-threats/.
16S. Bushwick, “How NIST tested facial recognition algorithms for racial bias,” Scientific American, December 27, 2019. www.scientificamerican.com/article/how-nist-tested-facial-recognition-algorithms-for-racial-bias/.
17G. Goswami, N. Ratha, A. Agarwal, R. Singh, and M. Vatsa, “Unravelling robustness of deep learning based face recognition against adversarial attacks,” in 32nd AAAI Conference on Artificial Intelligence, AAAI 2018, 2018.
18Many other companies offer similar software, including Google, Microsoft, and IBM.
19www.banfacialrecognition.com/map.
20www.banfacialrecognition.com.
21J. Snow, “Amazon’s face recognition falsely matched 28 members of Congress with mugshots,” American Civil Liberties Union, 2018.
22California Legislative Information. (2019, Oct. 8). AB-1215 Law Enforcement: Facial Recognition and Other Biometric Surveillance. [Online]. Available: leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201920200AB1215.
23A. Pressley, “Reps. Pressley, Clarke & Tlaib Announce Bill Banning Facial Recognition in Public Housing,” Representative Ayanna Pressley, July 25, 2019. pressley.house.gov/media/press-releases/reps-pressley-clarke-tlaib-announce-bill-banning-facial-recognition-public.
24J. Delcker and B. Smith-Meyer, “EU considers temporary ban on facial recognition in public spaces,” Politico, January 16, 2020. www.politico.eu/pro/eu-considers-temporary-ban-on-facial-recognition-in-public-spaces/.
25For an example of how badly facial recognition errors can completely ruin someone’s life, see this article: A. Kofman, “Losing face: How a facial recognition mismatch can ruin your life,” The Intercept, October 13, 2016. theintercept.com/2016/10/13/how-a-facial-recognition-mismatch-can-ruin-your-life/.
26A. M. McDonald and L. F. Cranor, “The cost of reading privacy policies,” I/S A J. Law Policy Inf. Soc., vol. 4, no. 3, pp. 543–568, 2008.
27R. Copeland, D. Mattioli, and M. Evans, “Inside Google’s quest for millions of medical records,” Wall Street Journal, January 11, 2020. www.wsj.com/articles/paging-dr-google-how-the-tech-giant-is-laying-claim-to-health-data-11578719700?mod=searchresults&page=1&pos=1.
28H. Fry, Hello World: Being Human in the Age of Algorithms, 1st ed. (New York: W. W. Norton & Company, 2018).
29K. Hill, “How Target figured out a teen girl was pregnant before her father did,” Forbes, February 16, 2012. www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/#4a304a7c6668.
30J. Mikians, L. Gyarmati, V. Erramilli, and N. Laoutaris, “Detecting price and search discrimination on the Internet,” in Proceedings of the 11th ACM Workshop on Hot Topics in Networks, HotNets-11, pp. 79–84, 2012, doi: 10.1145/2390231.2390245.
A. Hannak, G. Soeller, D. Lazer, A. Mislove, and C. Wilson, “Measuring price discrimination and steering on E-commerce web sites,” in Proceedings of the ACM SIGCOMM Internet Measurement Conference, IMC, 2014, doi:10.1145/2663716.2663744.
31Federal Trade Commission v. Compucredit Corporation and Jefferson Capital Systems, LLC, US Dist. Court Atlanta, 1:08-CV-1976, 2008. [Online]. Available: www.ftc.gov/sites/default/files/documents/cases/2008/06/080610compucreditcmptsigned.pdf.
32“Facebook Says Cambridge Analytica Harvested Data of Up to 87 Million Users,” New York Times (n.d.). Retrieved February 17, 2020, from https://www.nytimes.com/2018/04/04/technology/mark-zuckerberg-testify-congress.html.
33Norwegian Consumer Council, “Deceived by design: How tech companies use dark patterns to discourage us from exercising our rights to privacy,” July 6, 2018. fil.forbrukerradet.no/wp-content/uploads/2018/06/2018-06-27-deceived-by-design-final.pdf.
34https://marketing.acxiom.com/rs/982-LRE-196/images/Acxiom%20Global%20Data.pdf.
35S. Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, 1st ed. (New York: PublicAffairs, 2019).
Chapter 11
1See www.AIPerspectives.com/fe for more information on image feature extraction methods.
2O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei (2015). “ImageNet Large Scale Visual Recognition Challenge.” International Journal of Computer Vision, 115(3), 211–252. https://doi.org/10.1007/s11263-015-0816-y.
3A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Commun. ACM, 2012, doi: 10.1145/3065386.
4It should be noted that the AlexNet top-1 error rate was still around 37 percent.
5There are two types of neural networks: biological and artificial. The human brain is a biological neural network; the networks discussed in this book are artificial neural networks. I will use the term “neural networks” to refer to artificial neural networks throughout this book.
6T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, ... D. Amodei, “Language Models are Few-Shot Learners,” arXiv preprint arXiv:2005.14165, 2020.
7Mathematical methods, the most popular of which is termed backpropagation using gradient descent (and which is explained at www.AIPerspectives.com/ba), determine how the weights are learned.
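The weight-update rule at the heart of gradient descent can be shown with a toy sketch (my own illustration, not code from the referenced page; the single-weight model, data point, and learning rate are all made up):

```python
# One step of gradient descent on a single weight w, minimizing the
# squared error (w * x - y)^2 for a single training example (x, y).
def gradient_step(w, x, y, learning_rate=0.1):
    prediction = w * x
    error = prediction - y
    gradient = 2 * error * x          # derivative of (w*x - y)^2 w.r.t. w
    return w - learning_rate * gradient

# Repeated steps move w toward the value that minimizes the error.
w = 0.0
for _ in range(50):
    w = gradient_step(w, x=1.0, y=3.0)
# w is now very close to 3.0, the error-minimizing weight.
```

Backpropagation applies this same idea to every weight in a deep network at once, using the chain rule to compute each weight’s gradient.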
8The details of how ConvNets work can be found on www.AIPerspectives.com/cn.
9The discussion in this section will focus on image classification. However, facial recognition uses similar algorithms. For more information on facial recognition see www.AIPerspectives.com/fr.
10Y. LeCun, L. Bottou, and Y. Bengio, “Reading checks with multilayer graph transformer networks,” in ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, 1997, doi: 10.1109/icassp.1997.599580.
11C. Szegedy, W. Liu, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. arXiv preprint arXiv: 1409.4842, 2014.
12K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” arXiv preprint arXiv:1512.03385, 2015.
13C. A. G. Grajales, “The statistics behind Google Translate,” Statistics Views, June 23, 2015. www.statisticsviews.com/details/feature/8065581/The-statistics-behind-Google-Translate.html.
14F. Och, “Statistical machine translation live,” Google AI Blog, April 28, 2006. ai.googleblog.com/2006/04/statistical-machine-translation-live.html.
15Actually, many phrases have more than one possible translation, so it would store all possible phrasal translations.
16The process was more complex because the system needed to handle multiple translations for phrases and account for word order differences between languages. See www.AIPerspectives.com/pb for more details.
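The lookup step of phrase-based translation can be sketched with a toy phrase table (a hypothetical illustration with made-up entries and probabilities, not data from any real system; real systems learned millions of such entries from bilingual corpora):

```python
# Hypothetical French-to-English phrase table mapping each phrase to
# candidate translations with probabilities learned from parallel text.
phrase_table = {
    "la maison": [("the house", 0.7), ("the home", 0.3)],
    "bleue": [("blue", 0.9), ("sad", 0.1)],
}

def best_translation(phrase):
    """Return the highest-probability translation of a known phrase."""
    candidates = phrase_table.get(phrase, [])
    if not candidates:
        return None
    return max(candidates, key=lambda pair: pair[1])[0]
```

A real system would also score how candidate phrases combine, rather than picking each phrase’s best translation independently.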
17In English, French, and Italian, the word order for a sentence is SVO (SUBJECT VERB OBJECT). Subjects are typically noun phrases that appear before the verb, and objects appear after the verb. In contrast, in German and Japanese the typical word order is SOV (SUBJECT OBJECT VERB). For example, in English, one would say “John hit the ball.” In German, one would say the equivalent of “John has the ball hit” (“John hat den Ball getroffen”). Other languages use OVS, VOS, or VSO order, and many languages tolerate multiple word orders.
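The word-order contrast can be made concrete with a toy function (purely illustrative; real translation systems do not reorder sentences this naively):

```python
# Render the same (subject, verb, object) triple in different word
# orders, e.g., English-style SVO vs. German/Japanese-style SOV.
def render(subject, verb, obj, order="SVO"):
    slots = {"S": subject, "V": verb, "O": obj}
    return " ".join(slots[symbol] for symbol in order)

svo = render("John", "hit", "the ball", "SVO")  # "John hit the ball"
sov = render("John", "hit", "the ball", "SOV")  # "John the ball hit"
```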
18B. Turovsky, “Ten years of Google Translate,” Google Blog, April 28, 2016. blog.google/products/translate/ten-years-of-google-translate/.
19B. Turovsky, “Found in translation: More Accurate, Fluent Sentences in Google Translate,” Translate (blog), November 15, 2016, https://blog.google/products/translate/found-translation-more-accurate-fluent-sentences-google-translate/.
20“The Great A.I. Awakening,” New York Times. Retrieved February 17, 2020, from https://www.nytimes.com/2016/12/14/magazine/the-great-ai-awakening.html.
21Unfortunately, as good as Google Translate has become, as of February 2020 it supported only 104 of the world’s more than seven thousand languages (see cloud.google.com/translate/docs/languages for an updated list). Further, this situation might not improve rapidly, as the technology requires large bilingual texts for each language pair.
22The encoder and decoder deep neural networks are termed recurrent neural networks (RNNs), and many use a specialized form of RNN termed a long short-term memory (LSTM) network. See www.AIPerspectives.com/rn for details on RNNs, LSTMs, and the evolution of sequence-to-sequence models.
23How NMT systems work is explained at www.AIPerspectives.com/nm.
24K. H. Davis, R. Biddulph, and S. Balashek, “Automatic recognition of spoken digits,” J. Acoust. Soc. Am., vol. 24, no. 6, pp. 637–642, 1952, doi: 10.1121/1.1906946.
25See chapter 7 and also www.AIPerspectives.com/lm for more information on language models.
26See www.AIPerspectives.com/sr for more information on how earlier speech recognition systems work.
27A. Mohamed, G. Dahl, and G. Hinton (2009). “Deep belief networks for phone recognition.” In Proceedings of the NIPS Workshop on Deep Learning for Speech Recognition and Related Applications. citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.587.3829.
28G. E. Dahl, D. Yu, S. Member, L. Deng, and A. Acero (2012). “Context-Dependent Pre-Trained Deep Neural Networks for Large-Vocabulary Speech Recognition.” IEEE Transactions on Audio, Speech, and Language Processing, 20(1). doi.org/10.1109/TASL.2011.2134090.
29A more in-depth explanation of current speech recognition technologies can be found at www.AIPerspectives.com/s2.
30Ibid.
31The deepfake network illustrated here is based on the technology behind the Faceswap app. See forum.faceswap.dev/viewtopic.php?f=6&t=146.
32www.YouTube.com/watch?v=r1jng79a5xc.
33Research has shown it is possible to create a deepfake image using only eight training images of the target person plus one image of another person wearing the desired facial expression. E. Zakharov, A. Shysheya, E. Burkov, and V. Lempitsky, “Few-shot adversarial learning of realistic neural talking head models,” in Proceedings of the IEEE International Conference on Computer Vision, 2019.
34https://www.descript.com/lyrebird.
35J. Wang et al., “Visual concepts and compositional voting,” Ann. Math. Sci. Appl., vol. 3, no. 1, pp. 151–188, 2018, doi:10.4310/amsa.2018.v3.n1.a5.
36J. Su, D. V. Vargas, and K. Sakurai, “One pixel attack for fooling deep neural networks,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2019, doi: 10.1109/TEVC.2019.2890858.
37A. Nguyen, J. Yosinski, and J. Clune, “Deep neural networks are easily fooled: High confidence predictions for unrecognizable images,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2015, doi: 10.1109/CVPR.2015.7298640.
38J. R. Zech, M. A. Badgeley, M. Liu, A. B. Costa, J. J. Titano, and E. K. Oermann, “Confounding variables can degrade generalization performance of radiological deep learning models,” arXiv preprint arXiv:1807.00431, 2018.
39N. Carlini and D. Wagner, “Audio adversarial examples: Targeted attacks on speech-to-text,” in Proceedings - 2018 IEEE Symposium on Security and Privacy Workshops, SPW 2018, 2018, doi: 10.1109/SPW.2018.00009.
Chapter 12
1M. Minsky, “Why People Think Computers Can’t,” AI Magazine, Fall 1982, p. 5.
2In spoken language, it also implies the ability to interpret vocal tone, body language, and facial expression—all of which can alter the literal meaning of the words, sometimes making the meaning the opposite of what was said, such as with the use of sarcasm.
3A. M. Collins and M. R. Quillian (1972). “How to make a language user.” In E. Tulving and W. Donaldson (Eds.), Organization of Memory. Academic Press. psycnet.apa.org/record/1973-08477-002.
4R. C. Schank and R. Abelson, Scripts, Plans, Goals, and Understanding: An Inquiry into Human Knowledge Structures (Artificial Intelligence Series), 1st ed. (East Sussex, England: Psychology Press, 1977).
5R. C. Schank, “Conceptual dependency: A theory of natural language understanding,” Cogn. Psychol., vol. 3, no. 4, pp. 552–631, 1972, doi: 10.1016/0010-0285(72)90022-9.
6J. Weizenbaum, “ELIZA-A computer program for the study of natural language communication between man and machine,” Commun. ACM, vol. 9, no. 1, pp. 36–45, 1966, doi: 10.1145/365153.365168.
7The natural language processing portion of a personal assistant is just one small part. Developers use conventional programming techniques to create many of the other aspects of a successful personal assistant such as the following:
•Sending and receiving text messages from Slack, Facebook Messenger, Twilio, and other text messaging tools
•Adding images, videos, and other media to messages
•Creating dialogues to collect data from users
•Adding click buttons that give the user choices in a text dialogue
•Storing data input by the user
•Querying knowledge sources (e.g., knowledge bases, FAQs) for data requested by the user or data needed by the bot (e.g., is an appointment available at the time requested by the user?)
•Linking commands to smartphone app actions
•Security and authentication
•Performance monitoring
•Adding speech capabilities
•Management of the dialogue state
•Synchronization across devices such as smartphones and smart speakers (e.g., Google Home and Amazon Alexa)
•Notifications
•Localization
•Chatbot personas
•Payment service interactions
•Vehicle integrations (e.g., Android Auto and Apple CarPlay)
8Third-party developers are programmers who do not work for the company.
9“Amazon Announces 80,000 Alexa Skills Worldwide and Jeff Bezos Earnings Release Quote Focuses Solely on Alexa Momentum,” Voicebot.ai. (n.d.). Retrieved September 17, 2020, from https://voicebot.ai/2019/01/31/amazon-announces-80000-alexa-skills-worldwide-and-jeff-bezos-earnings-release-quote-focuses-solely-on-alexa-momentum/.
10These are proprietary systems, and the vendors do not reveal much in the way of details.
11For example, Apple used deep learning to identify intents in Siri conversations. X. C. Chen, A. Sagar, J. T. Kao, T. Y. Li, C. Klein, S. Pulman, A. Garg, and J. D. Williams, “Active Learning for Domain Classification in a Commercial Spoken Personal Assistant,” Interspeech 2019, September 15–19, 2019.
Similarly, developers of automated online customer support systems can use their question/answer history to train deep networks to map natural language questions to a set of pre-defined questions that have canned answers or suggest support articles based on user questions.
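A minimal sketch of this question-mapping idea, using simple word overlap in place of a trained deep network (the questions and canned answers here are made up for illustration):

```python
# Map a user question to the closest pre-defined question by Jaccard
# word overlap, then return that question's canned answer. Commercial
# systems train deep networks on question/answer history instead, but
# the mapping task is the same.
canned = {
    "how do i reset my password": "Visit the account page and click Reset.",
    "what are your support hours": "Support is open 9am-5pm weekdays.",
}

def jaccard(a, b):
    a, b = set(a.split()), set(b.split())
    return len(a & b) / len(a | b)

def answer(question):
    best = max(canned, key=lambda q: jaccard(question.lower(), q))
    return canned[best]
```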
12developer.amazon.com/alexaprize.
13H. Fang, H. Cheng, E. Clark, A. Holtzman, M. Sap, M. Ostendorf, Y. Choi, and N. A. Smith, “Sounding Board: University of Washington’s Alexa Prize Submission,” 1st Proceedings of Alexa Prize, 2017.
14C. Y. Chen, D. Yu, W. Wen, Y. M. Yang, J. Zhang, M. Zhou, K. Jesse, A. Chau, A. Bhowmick, S. Iyer, G. Sreenivasulu, R. Cheng, A. Bhandare, and Z. Yu, “Gunrock: Building a Human-Like Social Bot by Leveraging Large Scale Real User Data.”
15I. V. Serban, C. Sankar, M. Germain, S. Zhang, Z. Lin, S. Subramanian, T. Kim, M. Pieper, S. Chandar, N. R. Ke, S. Rajeswar, A. De Brebisson, J. M. R. Sotelo, D. Suhubdy, V. Michalski, A. Nguyen, J. Pineau, and Y. Bengio, “A Deep Reinforcement Learning Chatbot (Short Version),” 31st Conference on Neural Information Processing Systems, 2017, Long Beach, CA, USA.
16D. Adiwardana, M. T. Luong, D. R. So, J. Hall, N. Fiedel, R. Thoppilan, Z. Yang, A. Kulshreshtha, G. Nemade, Y. Lu, and Q. V. Le, “Towards a Human-like Open-Domain Chatbot,” arXiv preprint arXiv:2001.09977, 2020.
17S. Roller, E. Dinan, N. Goyal, D. Ju, M. Williamson, Y. Liu, J. Xu, M. Ott, K. Shuster, E. M. Smith, Y. L. Boureau, and J. Weston, “Recipes for building an open-domain chatbot,” arXiv preprint arXiv:2004.13637, 2020.
18S. Baker, Final Jeopardy: Man vs. Machine and the Quest to Know Everything (Boston: Houghton Mifflin Harcourt, 2011).
19J. Chu-Carroll, J. Fan, N. Schlaefer, and W. Zadrozny, “Textual resource acquisition and engineering,” IBM Journal of Research and Development, vol. 56, no. 3/4, pp. 1–11, 2012, doi: 10.1147/JRD.2012.2185901.
20Here, “BornIn” is an arbitrary label that represents the relationship connoted by sentences like “Obama was born in Hawaii.”
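Such extracted relations are commonly stored as subject, relation, and object triples; a toy sketch (hypothetical, not DeepQA’s actual storage format):

```python
# Store extracted facts as (subject, relation) -> object triples, as in
# "Obama was born in Hawaii" -> (Obama, BornIn, Hawaii).
triples = {("Obama", "BornIn"): "Hawaii"}

def query(subject, relation):
    """Return the object of a stored triple, or None if unknown."""
    return triples.get((subject, relation))
```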
21See www.AIPerspectives.com/el for a detailed description of entity linking.
22The term “entity” in an NLP context means a word or phrase that refers to a person, place, or thing.
23C. Wang, J. Fan, A. Kalyanpur, and D. Gondek, “Relation extraction with relation topics,” in Proceedings of the EMNLP 2011 - Conference on Empirical Methods in Natural Language Processing, 2011.
24Open source means freely available and collaboratively developed by people all over the world.
25YAGO also contains hierarchical information about concepts. For example, YAGO lists an animal hierarchy (animal->mammal->dog). When the entity and LAT were both available in YAGO, DeepQA used conventionally coded hierarchical rules to determine if the entity matched the LAT concept exactly or matched a subtype or supertype in the YAGO concept hierarchy.
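The subtype/supertype check described above can be sketched as a walk up a child-to-parent map (a toy illustration, not YAGO’s or DeepQA’s actual code; the three-level hierarchy is the example from the note):

```python
# Child -> parent links for a tiny concept hierarchy: dog -> mammal -> animal.
parent = {"dog": "mammal", "mammal": "animal"}

def type_matches(entity_type, lat):
    """True if the lexical answer type (LAT) equals the entity's type
    or is one of its supertypes in the hierarchy."""
    current = entity_type
    while current is not None:
        if current == lat:
            return True
        current = parent.get(current)
    return False
```

So a candidate answer typed “dog” would match a question whose LAT is “animal,” but not the reverse.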
26D. C. Gondek et al., “A framework for merging and ranking of answers in DeepQA,” IBM Journal of Research and Development, vol. 56, no. 3/4, pp. 1–12, 2012, doi:10.1147/JRD.2012.2188760.
27We could consider the conventionally coded rules used to match the time frame and location to be a trivial form of human-like reasoning if one stretches the definition. The same is true for the conventionally coded rules that match the entities and relations in the question to the candidate answers. And the same is also true for the hierarchical concept reasoning. However, a more apt description of the entire system is clever conventional programming.
28P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang, “SQuad: 100,000+ questions for machine comprehension of text,” in EMNLP 2016 - Conference on Empirical Methods in Natural Language Processing, Proceedings, 2016, doi: 10.18653/v1/d16-1264.
29D. Chen, J. Bolton, and C. D. Manning, “A thorough examination of the CNN/ Daily Mail reading comprehension task,” in 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016 - Long Papers, 2016, doi: 10.18653/v1/p16-1223.
R. Kadlec, M. Schmid, O. Bajgar, and J. Kleindienst, “Text understanding with the attention sum reader network,” in 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016 - Long Papers, 2016, doi: 10.18653/v1/p16-1086.
C. Xiong, V. Zhong, and R. Socher, “Dynamic coattention networks for question answering,” in 5th International Conference on Learning Representations, ICLR 2017 -Conference Track Proceedings, 2017.
M. Seo, A. Kembhavi, A. Farhadi, and H. Hajishirzi, “Bi-directional attention flow for machine comprehension,” in 5th International Conference on Learning Representations, ICLR 2017 - Conference Track Proceedings, 2017.
For descriptions of many of these RC systems, see www.AIPerspectives.com/rc.
30A. Linn, “Microsoft creates AI that can read a document and answer questions about it as well as a person,” Microsoft AI Blog, January 15, 2018. blogs.microsoft.com/ai/microsoft-creates-ai-can-read-document-answer-questions-well-person/.
A. Najberg, “Alibaba AI model tops humans in reading comprehension,” Alibaba, January 24, 2018. www.alibabacloud.com/blog/alibaba-ai-model-tops-humans-in-reading-comprehension_396923.
31Numerous word and sentence alignment techniques were developed in machine translation research. See www.AIPerspectives.com/wa for more information.
32K. M. Hermann et al., “Teaching machines to read and comprehend,” in Advances in Neural Information Processing Systems, 2015.
33D. Chen, J. Bolton, and C. D. Manning, “A thorough examination of the CNN/ Daily Mail reading comprehension task,” in 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016 - Long Papers, 2016, doi: 10.18653/v1/p16-1223.
34R. Jia and P. Liang, “Adversarial examples for evaluating reading comprehension systems,” in EMNLP 2017 - Conference on Empirical Methods in Natural Language Processing, Proceedings, 2017, doi: 10.18653/v1/d17-1215.
35See www.AIPerspectives.com/sp for additional research showing that superficial patterns are also used on other RC tests.
Chapter 13
1S. M. Kosslyn, S. Pinker, G. E. Smith, and S. P. Shwartz, “On the demystification of mental imagery,” Behav. Brain Sci., vol. 2, pp. 535–581, 1979, doi: 10.1017/S0140525X00064268.
2A. M. Turing, “Computing Machinery and Intelligence,” Mind, vol. LIX, no. 236, October 1950, pp. 433–460. doi.org/10.1093/mind/LIX.236.433.
3M. Roemmele, C. A. Bejan, and A. S. Gordon, “Choice of plausible alternatives: An evaluation of commonsense causal reasoning,” in AAAI Spring Symposium - Technical Report, 2011.
H. J. Levesque, E. Davis, and L. Morgenstern, “The Winograd schema challenge,” in Proceedings of the International Workshop on Temporal Representation and Reasoning, 2012.
4See www.AIPerspectives.com/tr for more details.
5R. C. Schank and R. Abelson, Scripts, Plans, Goals, and Understanding: An Inquiry into Human Knowledge Structures (Artificial Intelligence Series), 1st ed. (East Sussex, England: Psychology Press, 1977).
6J. Walker, A. Gupta, and M. Hebert, “Patch to the future: Unsupervised visual prediction,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2014, doi: 10.1109/CVPR.2014.416.
7S. Singh, A. Gupta, and A. A. Efros, “Unsupervised discovery of mid-level discriminative patches,” in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pp. 1–15, 2012, doi: 10.1007/978-3-642-33709-3_6.
8A. Perfors and J. B. Tenenbaum, “Learning to learn categories,” in Proceedings of the 31st Annual Conference of the Cognitive Science Society (CogSci 2009), 2009, pp. 136–141.
9D. G. T. Barrett, F. Hill, A. Santoro, A. S. Morcos, and T. Lillicrap, “Measuring abstract reasoning in neural networks,” in 35th International Conference on Machine Learning, ICML 2018, 2018.
10J. Piaget, The Construction of Reality in the Child (New York: Basic Books, 1954).
11See, for example, B. M. Lake, T. D. Ullman, J. B. Tenenbaum, and S. J. Gershman, “Building machines that learn and think like people,” Behav. Brain Sci., pp. 1–72, 2017, doi: 10.1017/S0140525X16001837.
12A. Lerer, S. Gross, and R. Fergus, “Learning physical intuition of block towers by example,” in 33rd International Conference on Machine Learning, ICML 2016, 2016.
Chapter 14
1M. Lewis, Moneyball: The Art of Winning an Unfair Game (New York: W.W. Norton & Company, 2004).
2K. T. May, “The moneyball effect: How smart data is transforming criminal justice, healthcare, music, and even government spending,” TED Blog, January 28, 2014. blog.ted.com/the-moneyball-effect-how-smart-data-is-transforming-criminal-justice-healthcare-music-and-even-government-spending/.
3B. Marr, “How Experian is using big data and machine learning to cut mortgage application times to a few days,” Forbes, May 25, 2017. www.forbes.com/sites/bernardmarr/2017/05/25/how-experian-is-using-big-data-and-machine-learning-to-cut-mortgage-application-times-to-a-few-days/#28742440203f.
4Electronic Privacy Information Center, “Algorithms in the Criminal Justice System: Pre-Trial Risk Assessment Tools.” epic.org/algorithmic-transparency/crim-justice/ (accessed March 19, 2020).
5J. Selingo, “How colleges use big data to target the students they want,” The Atlantic, April 11, 2017. www.theatlantic.com/education/archive/2017/04/how-colleges-find-their-students/522516/.
6R. Feloni, “I tried the software that uses AI to scan job applicants for companies like Goldman Sachs and Unilever before meeting them—and it’s not as creepy as it sounds,” Business Insider, August 24, 2017. www.businessinsider.com/hirevue-ai-powered-job-interview-platform-2017-8.
7W. M. Grove and P. E. Meehl, “Comparative efficiency of informal (subjective, impressionistic) and formal (mechanical, algorithmic) prediction procedures: The clinical-statistical controversy,” Psychol. Public Policy, Law, vol. 2, pp. 293–323, 1996, doi: 10.1037/1076-8971.2.2.293.
8P. W. Greenwood and A. Abrahamse, Selective Incapacitation. (Santa Monica, CA: Rand, 1982). www.rand.org/pubs/reports/R2815.html.
9“Opinion: Sentencing, by the Numbers,” New York Times (n.d.). Retrieved February 17, 2020, from https://www.nytimes.com/2014/08/11/opinion/sentencing-by-the-numbers.html.
10M. Bertrand and S. Mullainathan, “Are Emily and Greg more employable than Lakisha and Jamal? A field experiment on labor market discrimination,” American Economic Review, 2003.
11R. Bartlett, A. Morse, R. Stanton, and N. Wallace (2019). Consumer-Lending Discrimination in the FinTech Era. faculty.haas.berkeley.edu/morse/research/papers/discrim.pdf.
12A great overview of this issue can be found in the book Weapons of Math Destruction by mathematician Cathy O’Neil (New York: Crown Random House, 2016). The nonprofit group Future of Privacy Forum put out a nice overview of the different ways that automated decision-making can harm both individuals and society as a whole: L. Smith, “Unfairness by algorithm: Distilling the harms of automated decision-making,” Future Privacy Forum, 2017. fpf.org/wp-content/uploads/2017/12/FPF-Automated-Decision-Making-Harms-and-Mitigation-Charts.pdf.
13J. Angwin, J. Larson, S. Mattu, and L. Kirchner, “Machine bias,” ProPublica, May 23, 2016. www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
14R. Richardson, J. M. Schultz, and K. Crawford, “Dirty data, bad predictions: How civil rights violations impact police data, predictive policing systems, and justice,” New York Univ. Law Rev., 2019.
15AdAge, “Apple co-founder Steve Wozniak says Goldman’s Apple Card algorithm discriminates,” November 10, 2019. adage.com/article/digital/apple-co-founder-steve-wozniak-says-goldmans-apple-card-algorithm-discriminates/2214331.
16J. Dastin, “Amazon scraps secret AI recruiting tool that showed bias against women,” Reuters, October 10, 2018. www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G.
17M. Evans and A. W. Mathews, “Researchers find racial bias in hospital algorithm,” Wall Street Journal, October 25, 2019. www.wsj.com/articles/researchers-find-racial-bias-in-hospital-algorithm-11571941096.
18R. Speer, “ConceptNet Numberbatch 17.04: Better, less-stereotyped word vectors,” ConceptNet Blog, April 24, 2017. blog.conceptnet.io/posts/2017/conceptnet-numberbatch-17-04-better-less-stereotyped-word-vectors/.
19J. Larson, J. Angwin, L. Kirchner, and S. Mattu, “How we examined racial discrimination in auto insurance prices,” ProPublica, April 5, 2017. www.propublica.org/article/minority-neighborhoods-higher-car-insurance-premiums-methodology.
20J. Guynn, “Google Photos labeled Black people ‘gorillas’,” USA Today, July 1, 2015. www.usatoday.com/story/tech/2015/07/01/google-apologizes-after-photos-identify-black-people-as-gorillas/29567465/.
21J. Buolamwini and T. Gebru, “Gender shades: intersectional accuracy disparities in commercial gender classification,” in Proceeding of Machine Learning Research, Conference on Fairness, Accountability, and Transparency, vol. 81, pp. 77–91, 2018.
22G. McMillan, “It’s not you, it’s it: Voice recognition doesn’t recognize women,” Time, June 1, 2011. techland.time.com/2011/06/01/its-not-you-its-it-voice-recognition-doesnt-recognize-women/.
23A. Koenecke, A. Nam, E. Lake, J. Nudell, M. Quartey, Z. Mengesha, C. Toups, J. R. Rickford, D. Jurafsky, and S. Goel (2020). Racial disparities in automated speech recognition. Proceedings of the National Academy of Sciences of the United States of America, 117(14), 7684–7689. doi.org/10.1073/pnas.1915768117. www.pnas.org/content/117/14/7684.
24Research Next: Research, Scholarship and Creative Activity for a Brighter Future, “Our changing language,” University of Massachusetts Amherst. www.umass.edu/researchnext/feature/our-changing-language.
25See, for example, R. Chowdhury, “Tackling the challenge of ethics in AI,” Accenture, June 6, 2018. www.accenture.com/gb-en/blogs/blogs-cogx-tackling-challenge-ethics-ai.
F. D. P. Calmon, D. Wei, B. Vinzamuri, K. N. Ramamurthy, and K. R. Varshney, “Data pre-processing for discrimination prevention: Information-theoretic optimization and analysis,” IEEE J. Sel. Top. Signal Process., vol. 12, no.5, pp. 1106–1119, 2018, doi:10.1109/JSTSP.2018.2865887.
26See, for example, J. Zhao, Y. Zhou, Z. Li, W. Wang, and K.W. Chang, “Learning gender-neutral word embeddings,” in Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 4847–4853, 2018, doi: 10.18653/v1/d18-1521.
T. Bolukbasi, K. W. Chang, J. Zou, V. Saligrama, and A. Kalai, “Man is to computer programmer as woman is to homemaker? Debiasing word embeddings,” in Advances in Neural Information Processing Systems, 2016.
A. Caliskan, J. J. Bryson, and A. Narayanan, “Semantics derived automatically from language corpora contain human-like biases,” Science, vol. 356, no. 6334, pp. 183–186, 2017, doi:10.1126/science.aal4230.
H. Gonen and Y. Goldberg, “Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them,” arXiv preprint arXiv:1903.03862, 2019.
K. Ethayarajh, D. Duvenaud, and G. Hirst, “Understanding undesirable word embedding associations,” in Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 1696–1705, 2019, doi: 10.18653/v1/p19-1166.
27R. Jiang, A. Pacchiano, T. Stepleton, H. Jiang, and S. Chiappa, “Wasserstein fair classification,” in 35th Conference on Uncertainty in Artificial Intelligence, UAI 2019, arXiv preprint arXiv:1907.12059, 2019.
29R. Perrault et al., “The AI index 2019 annual report,” AI Index Steering Committee, Human-Centered AI Institute, Stanford University, Stanford, CA, December 2019. [Online]. Available: hai.stanford.edu/sites/g/files/sbiybj10986/f/ai_index_2019_report.pdf.
30One could argue that many bias issues are caused by facial recognition technology. I would argue that the problem is in the data used to train the facial recognition systems. Deep learning systems also tend to be less interpretable, but we should not focus on the underlying technology; we should focus on agreeing on and regulating the degree of interpretability we need in automated decision systems.
31AI techniques like deep learning sometimes result in improved decision-making; however, it is often not significantly better. An MIT study examined the degree to which deep learning techniques improved marketing analytics predictions versus a fifty-year-old linear regression technique. They found that linear regression had a 70 percent accuracy and deep learning had a 74 percent accuracy (G. Urban, A. Timoshenko, P. Dhillon, and J. R. Hauser, “Is Deep learning a game changer for marketing analytics?” MIT Sloan Management Review, November 25, 2019. sloanreview.mit.edu/article/is-deep-learning-a-game-changer-for-marketing-analytics/). If deep learning had never been invented, the older technique could still have been used. Both techniques are subject to concerns about bias, discrimination, and fairness.
32For example, researchers at the University of Texas at Austin have developed CERTIFAI, which tests tools for explainability, nondiscrimination, and robustness to adversarial attacks and can be used by third parties to certify ADS systems. S. Sharma, J. Henderson, and J. Ghosh, “CERTIFAI: Counterfactual Explanations for Robustness, Transparency, Interpretability, and Fairness of Artificial Intelligence Models,” arXiv preprint arXiv:1905.07857, 2019.
33See www.AIPerspectives.com/in for more information on the technical aspects of interpretability.
34P. Crosman, “Is AI making credit scores better, or more confusing?” American Banker, February 14, 2017. www.americanbanker.com/news/is-ai-making-credit-scores-better-or-more-confusing.
35For a framework for assessing the interpretability of machine learning algorithms, see Z. C. Lipton, “The mythos of model interpretability,” Commun. ACM, 2018, doi: 10.1145/3233231. There is also ongoing research on how to make deep learning systems more interpretable, e.g., L. Hardesty, “Making computers explain themselves,” MIT News, October 27, 2016. news.mit.edu/2016/making-computers-explain-themselves-machine-learning-1028.
36K. Crawford, “The hidden biases in big data,” Harvard Business Review, April 1, 2013. hbr.org/2013/04/the-hidden-biases-in-big-data.
37There have been some interesting court cases on this topic. A Pennsylvania death penalty defendant was denied access to the source code of a forensic software program that produced the key evidence against him due to trade secret laws, and the Wisconsin State Supreme Court ruled that a defendant had no right to know the details of an algorithmic risk assessment used to sentence him: R. Wexler, “Life, liberty, and trade secrets: Intellectual property in the criminal justice system,” Stanford Law Review, vol. 70, pp. 1343–1429, 2018, doi: 10.2139/ssrn.2920883.
38L. Hardesty, “Making computers explain themselves,” MIT News, October 27, 2016. news.mit.edu/2016/making-computers-explain-themselves-machine-learning-1028.
M. T. Ribeiro, S. Singh, and C. Guestrin, “Local Interpretable Model-Agnostic Explanations (LIME): An introduction,” O’Reilly, August 12, 2016. www.oreilly.com/content/introduction-to-local-interpretable-model-agnostic-explanations-lime/.
39T. Frey, “Increasing transparency with Google Cloud Explainable AI,” Google Cloud, November 21, 2019. cloud.google.com/blog/products/ai-machine-learning/google-cloud-ai-explanations-to-increase-fairness-responsibility-and-trust.
40“Model interpretability in Azure Machine Learning,” Microsoft Azure, October 25, 2019. docs.microsoft.com/en-us/azure/machine-learning/how-to-machine-learning-interpretability. See also fairlearn.github.io/.
41R. Perrault et al., “The AI index 2019 annual report,” AI Index Steering Committee, Human-Centered AI Institute, Stanford University, Stanford, CA, December 2019. [Online]. Available: hai.stanford.edu/sites/g/files/sbiybj10986/f/ai_index_2019_report.pdf.
42A. Amrein-Beardsley, “Breaking news: A big victory in court in Houston,” VAAMboozled!, May 5, 2017. vamboozled.com/breaking-news-victory-in-court-in-houston/. See also R. Richardson, J. M. Schultz, and V. M. Southerland, “Litigating algorithms 2019 US report: New challenges to government use of algorithmic decision systems,” September 2019. ainowinstitute.org/litigatingalgorithms-2019-us.pdf for a discussion of other court cases.
43L. Hardesty, “Making computers explain themselves,” MIT News, October 27, 2016. news.mit.edu/2016/making-computers-explain-themselves-machine-learning-1028.
M. T. Ribeiro, S. Singh, and C. Guestrin, “Local Interpretable Model-Agnostic Explanations (LIME): An introduction,” O’Reilly, August 12, 2016. www.oreilly.com/content/introduction-to-local-interpretable-model-agnostic-explanations-lime/.
Chapter 15
1B. Zhang and A. Dafoe, “High-Level Machine Intelligence,” in Artificial Intelligence: American Attitudes and Trends. (Oxford, UK: Center for the Governance of AI, Future of Humanity Institute, University of Oxford, 2019).
2There have been many critiques of AI systems over the years like the ones in this chapter. The goal of this chapter is to tie the commentary back to the explanations in the rest of this book concerning how the various AI systems work. Previous critiques include J. A. Fodor and Z. W. Pylyshyn (1988). “Connectionism and cognitive architecture: A critical analysis.” Cognition, 28(1–2), 3–71. doi.org/10.1016/0010-0277(88)90031-5;
S. Pinker and A. Prince, “On language and connectionism: Analysis of a parallel distributed processing model of language acquisition,” Cognition, 28(1–2), 73–193, 1988;
G. Marcus, “Deep learning: A critical appraisal,” arXiv preprint arXiv:1801.00631, 2018;
A. Darwiche, “Human-level intelligence or animal-like abilities?” Commun. ACM, vol. 61, no. 10, pp. 56–57, 2018, doi: 10.1145/3271625;
A. L. Yuille and C. Liu, “Deep nets: What have they ever done for vision?” arXiv preprint arXiv:1805.04025, 2019;
M. Mitchell, Artificial Intelligence: A Guide for Thinking Humans (New York: Macmillan, 2019).
3There has been some success in transferring low-level representations such as word embeddings and the initial layers of image classification systems to other tasks. However, the systems still require some training on the new task. Also, word embeddings do not help image classification, and image classification layers do not help natural language processing tasks. Lastly, there has been some progress on multitask learning where systems are trained simultaneously on multiple similar tasks (e.g., multiple natural language processing tasks). This is equivalent to learning a single task with multiple similar subtasks. See www.AIPerspectives.com/ml for more information.
4AI researchers have developed systems that have some ability to transfer from one domain (e.g., recognizing cats) to another (e.g., recognizing dogs), but only for very similar domains. This is known as transfer learning. See www.AIPerspectives.com/tl for more information on this topic.
5S. LeVine, “Artificial intelligence pioneer says we need to start over,” Axios, September 15, 2017. www.axios.com/artificial-intelligence-pioneer-says-we-need-to-start-over-1513305524-f619efbd-9db0-4947-a9b2-7a4c310a28fe.html.
6Y. LeCun, “Learning World Models: The next step towards AI,” in International Joint Conference on Artificial Intelligence, 2018, minute 37:04.
7www.youtube.com/watch?v=OpSmCKe27WE.
8For example, B. M. Lake, T. D. Ullman, J. B. Tenenbaum, and S. J. Gershman, “Building machines that learn and think like people,” Behav. Brain Sci., pp. 1–72, 2017, doi: 10.1017/S0140525X16001837.
9Interview with MIT Professor Joshua Tenenbaum in M. Ford (2018). Architects of Intelligence: The Truth about AI from the People Building It. Packt Publishing.
10This article in Behavioral and Brain Sciences contains both a proposal for this approach by NYU researcher Brenden Lake and commentary from many other prominent researchers: B. M. Lake, T. D. Ullman, J. B. Tenenbaum, and S. J. Gershman, “Building machines that learn and think like people,” Behavioral and Brain Sciences, pp. 1–72, 2017, doi: 10.1017/S0140525X16001837.
11For example, M. Sap et al., “ATOMIC: An atlas of machine commonsense for if-then reasoning,” in Proceedings of the AAAI Conference on Artificial Intelligence, 2019;
A. Bosselut, H. Rashkin, M. Sap, C. Malaviya, A. Celikyilmaz, and Y. Choi, “COMET: Commonsense transformers for automatic knowledge graph construction,” 2019, doi:10.18653/v1/p19-1470.
12G. Marcus (2018). “Deep learning: A critical appraisal,” arXiv preprint arXiv:1801.00631.
G. Marcus (2020). “The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence.” arXiv preprint arXiv:2002.06177.
13G. Marcus and E. Davis, Rebooting AI: Building Artificial Intelligence We Can Trust. (New York: Pantheon, 2019).
14See this April 2019 video interview with Greg Brockman for more information on the OpenAI approach: www.youtube.com/watch?v=bIrEM2FbOLU.
15https://www.aiperspectives.com/gpt-3-does-not-understand-what-it-is-saying/.
16Y. LeCun (2020). “Self-Supervised Learning.” Presentation at the 34th AAAI Conference on Artificial Intelligence. drive.google.com/file/d/1r-mDL4IX_hzZLDBKp8_e8VZqD7fOzBkF/view;
Y. Bengio (2019). “Yoshua Bengio: From System 1 Deep Learning to System 2 Deep Learning.” Presentation at the NeurIPS 2019 Conference. journalismai.com/2019/12/12/yoshua-bengio-from-system-1-deep-learning-to-system-2-deep-learning-neurips-2019/.
17Y. Bengio, T. Deleu, N. Rahaman, N. R. Ke, S. Lachapelle, O. Bilaniuk, A. Goyal, and C. Pal (2019). “A Meta-Transfer Objective for Learning to Disentangle Causal Mechanisms.” arXiv preprint arXiv:1901.10912v2;
A. Goyal, A. Lamb, J. Hoffmann, S. Sodhani, S. Levine, Y. Bengio, and B. Schölkopf (2019). “Recurrent Independent Mechanisms.” arXiv preprint arXiv:1909.10893. arxiv.org/pdf/1909.10893.pdf;
Y. Bengio (2019). “The Consciousness Prior.” ArXiv Preprint ArXiv:1709.08568. arxiv.org/pdf/1709.08568.pdf.
18These proposals have also generated an interesting debate between NYU professor Gary Marcus and Yoshua Bengio: montrealartificialintelligence.com/aidebate/ and this article by Gary Marcus: G. Marcus, “The next decade in AI: Four steps towards robust artificial intelligence,” arXiv preprint arXiv:2002.06177, 2020.
19For example, venturebeat.com/2020/05/02/yann-lecun-and-yoshua-bengio-self-supervised-learning-is-the-key-to-human-level-intelligence/.
21S. Makin, “The four biggest challenges in brain simulation,” Nature, vol. 571, no. 7766, p. S9, 2019, doi.org/10.1038/d41586-019-02209-z. www.nature.com/articles/d41586-019-02209-z.
22S. Levy, “The Brief History of the ENIAC Computer,” Smithsonian Magazine, November 2013, https://www.smithsonianmag.com/history/the-brief-history-of-the-eniac-computer-3889120/.
23T. Everitt, G. Lea, and M. Hutter (2018). “AGI Safety Literature Review,” International Joint Conference on Artificial Intelligence (IJCAI). arXiv preprint arXiv:1805.01109.
V. Vinge (1993). “The Coming Technological Singularity: How to Survive in the Post-Human Era.” In Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace. National Aeronautics and Space Administration. ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19940022855.pdf.
24S. Pinker, “Tech Luminaries Address the Singularity,” in IEEE Spectrum, Special Report: The Singularity, June 2008. spectrum.ieee.org/static/singularity.
25J. McCarthy, M. L. Minsky, N. Rochester, and C. E. Shannon, “A proposal for the Dartmouth summer research project on artificial intelligence,” August 31, 1955.
26M. Minsky, Computation: Finite and Infinite Machines (New Jersey: Prentice-Hall, 1967) 2.
27D. Michie, “Machines and the theory of intelligence,” Nature, vol. 241, pp. 507–512, 1973, doi: 10.1038/241507a0.
28S. Armstrong and K. Sotala, “How We’re Predicting AI—or Failing to,” in Beyond AI: Artificial Dreams, J. Romportl, P. Ircing, E. Zackova, M. Polak, and R. Schuster, Eds. (New York: Springer, 2015, pp. 52–75).
29K. Grace, J. Salvatier, A. Dafoe, B. Zhang, and O. Evans, “Viewpoint: When will AI exceed human performance? Evidence from AI experts,” Journal of Artificial Intelligence Research, 2018, doi: 10.1613/jair.1.11222.
30M. Ford, Architects of Intelligence: The Truth about AI from the People Building It. (Birmingham, UK: Packt Publishing, 2018).
Chapter 16
1I am focusing on regulatory activity here. A positive impact on society can also be achieved by creating standards for the ethical use of AI technology that people follow on a voluntary basis. For example, the Institute of Electrical and Electronics Engineers has published a set of guidelines for its large membership. There are also corporate consortiums, such as the Partnership on AI to Benefit People and Society, which was founded by Apple, Amazon, Facebook, Google, IBM, and Microsoft and now has over one hundred members, as well as guidelines from several other nonprofit consortiums, from university centers for AI ethics research, and from governmental bodies. The comments in this section apply to both regulatory efforts and efforts to create ethical frameworks and best practices.
2Isaac Asimov, in his novel I, Robot, proposed three simple laws to regulate AGI-based robots:
The First Law was “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” The Second Law was “A robot must obey orders given to it by human beings, except where such orders would conflict with the first law.” The Third Law was “A robot must protect its own existence as long as such protection does not conflict with the first or second law.” AI futurists have since identified many flaws in these laws (e.g., R. R. Murphy and D. D. Woods, “Beyond Asimov: The three laws of responsible robotics,” IEEE Intell. Syst., vol. 24, no. 4, pp. 14–20, 2009, doi: 10.1109/MIS.2009.69. www.researchgate.net/publication/224567023_Beyond_Asimov_The_Three_Laws_of_Responsible_Robotics). See also S. Russell, Human Compatible: Artificial Intelligence and the Problem of Control (New York: Viking Press, 2019).
M. Tegmark, Life 3.0: Being Human in the Age of Artificial Intelligence (New York: Knopf, 2017).
M. Ford, Architects of Intelligence: The Truth about AI from the People Building It (Birmingham, UK: Packt Publishing, 2018).
J. Brockman, Possible Minds: Twenty-Five Ways of Looking at AI (London: Penguin Books, 2019).
For a great summary of the history of humanity’s fear of robots, see U. Barthelmess and U. Furbach, “Do we need Asimov’s Laws?” arXiv preprint arXiv:1405.0961, 2014.