Chapter 20
IN THIS CHAPTER
Comprehending the world
Developing new ideas
Understanding the human condition
Any comprehensive book on AI must consider the ways in which AI has failed to meet expectations. The book discusses this issue in part in other chapters, giving the historical view of the AI winters. However, even with those discussions, you might not grasp that AI hasn’t just failed to meet expectations set by overly enthusiastic proponents; it has failed to meet specific needs and basic requirements. This chapter is about the failures that will keep AI from excelling and performing the tasks we need it to do to fully achieve the successes described in other chapters. AI is currently an evolving technology that is partially successful at best.
Even more important, however, is that people who claim an AI will eventually take over the world fail to understand that doing so is impossible given current technology. An AI can’t suddenly become self-aware because it lacks any means of expressing the emotion required to become self-aware. As shown in Table 1-1 in Chapter 1, an AI today lacks some of the seven essential kinds of intelligence required to become self-aware. Simply possessing those kinds of intelligence wouldn’t be enough, either. Humans have a spark in them, something that scientists don’t understand. Without understanding what that spark is, science can’t recreate it as part of an AI.
The ability to comprehend is innate to humans, but AIs completely lack it. Looking at an apple, a human sees more than just a series of properties associated with a picture of an object. Humans understand apples through the senses: the fruit’s color, taste, and feel. We understand that the apple is edible and provides specific nutrients. We have feelings about apples; perhaps we like them and feel that they’re the supreme fruit. The AI sees only an object that has properties associated with it, values that the AI doesn’t understand but merely manipulates. The following sections describe how this failure to understand causes AI as a whole to fall short of expectations.
As stated many times throughout the book, an AI uses algorithms to manipulate incoming data and produce an output. The emphasis is on performing an analysis of the data. However, a human controls the direction of that analysis and must then interpret the results. For example, an AI can analyze an x-ray showing a potential cancer tumor. The resulting output may emphasize the portion of the x-ray containing the tumor so that the doctor can see it. The doctor might not be able to see the tumor otherwise, so the AI undoubtedly provides an important service. Even so, a doctor must still review the result and determine whether the x-ray does indeed show cancer. As described in several sections of the book, especially with self-driving cars in Chapter 14, an AI is easily fooled when even a small artifact appears in the wrong place. Consequently, even though the AI is incredibly helpful in letting the doctor see something that isn’t apparent to the human eye, the AI isn’t trustworthy enough to make any sort of decision on its own.
Interpretation also implies the ability to see beyond the data. It’s not the ability to create new data, but rather to understand that the data may indicate something other than what is apparent. For example, humans can often tell that data is fake or falsified, even though the data itself presents no evidence of these problems. An AI accepts the data as both real and true, while a human knows that it’s neither. Formalizing precisely how humans achieve this feat is currently impossible because humans don’t actually understand how they do it.
Despite any appearance otherwise, an AI works only with numbers. An AI can’t understand words, for example, which means that when you talk to it, the AI is simply performing pattern matching after converting your speech to numeric form. The substance of what you say is gone; after the tokenization process, only numbers remain for the AI to manipulate. The failure of AIs to understand something as basic as words means that an AI’s translation from one language to another will always lack that certain something needed to convey the feeling behind the words, as well as the words themselves. Words express feelings, and an AI has no feelings to express.
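To make that conversion concrete, here’s a minimal Python sketch of tokenization. The tiny vocabulary is invented for illustration; real systems use vocabularies with tens of thousands of entries, but the principle is the same:

```python
# A minimal sketch of tokenization: the words an AI "hears" become
# numbers before any analysis happens. This vocabulary is hypothetical.
vocabulary = {"i": 0, "love": 1, "fresh": 2, "apples": 3}

def tokenize(sentence):
    """Convert a sentence into the numeric IDs the AI actually sees."""
    return [vocabulary[word] for word in sentence.lower().split()]

print(tokenize("I love fresh apples"))  # [0, 1, 2, 3]
# From this point on, the AI manipulates the list [0, 1, 2, 3].
# The words, and any feeling behind them, are no longer present.
```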
The same conversion process occurs with every sense that humans possess. A computer translates sight, sound, smell, taste, and touch into numeric representations and then performs pattern matching to create a data set that simulates the real-world experience. Further complicating matters, humans often experience things differently from each other. For example, each person experiences color uniquely (https://www.livescience.com/21275-color-red-blue-scientists.html). By contrast, every computer represents color in precisely the same way, which means that an AI can’t experience color uniquely. In addition, because of the conversion, an AI doesn’t actually experience color at all.
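For example, consider how a computer stores the red of an apple. The short sketch below assumes the common 24-bit RGB encoding; the particular values are invented:

```python
# To a computer, "red" isn't an experience; it's three numbers that
# mean exactly the same thing on every machine (assuming 24-bit RGB).
apple_red_on_my_machine = (178, 34, 34)
apple_red_on_your_machine = (178, 34, 34)

# Comparing the two "experiences" of red just compares the numbers,
# and the numbers are always identical: nothing unique remains.
print(apple_red_on_my_machine == apple_red_on_your_machine)  # True
```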
An AI can analyze data, but it can’t make moral or ethical judgments. If you ask an AI to make a choice, it will always choose the option with the highest probability of success unless you provide some sort of randomizing function as well. The AI makes this choice regardless of the outcome. The “SD cars and the trolley problem” sidebar in Chapter 14 expresses this problem quite clearly. When faced with a choice between allowing either the occupants of a car or pedestrians to die, the AI must have human instructions available to it to make the decision. The AI isn’t capable of weighing consequences and therefore shouldn’t be part of the decision-making process.
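Here’s a minimal Python sketch of that decision rule, with invented options and probabilities. The point is that, absent a human rule or injected randomness, the highest number always wins, no matter what it represents:

```python
import random

# Invented probabilities of a "successful" outcome for each action.
options = {"swerve": 0.40, "brake": 0.55, "continue": 0.05}

# Default behavior: the highest probability wins, every single time,
# with no regard for the moral weight of what each option means.
print(max(options, key=options.get))  # brake

# The only ways to get a different answer are injected randomness or
# a hard-coded human rule; neither one amounts to moral judgment.
print(random.choices(list(options), weights=list(options.values()))[0])
```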
An AI can interpolate existing knowledge, but it can’t extrapolate from existing knowledge to create new knowledge. When an AI encounters a new situation, it usually tries to resolve it as an existing piece of knowledge rather than accept that it’s something new. In fact, an AI has no method for creating anything new or for recognizing something as unique. Creating and recognizing the new are human capabilities that help us discover new things, work with them, devise methods for interacting with them, and find ways to use them to perform new tasks or augment existing ones. The following sections describe how an AI’s inability to make discoveries keeps it from fulfilling the expectations that humans have of it.
One of the more common tasks that people perform is extrapolation of data: given A, what is B? Humans use existing knowledge to create new knowledge of a different sort. Knowing one thing, a human can leap to a new piece of knowledge, outside the domain of the original, with a high probability of success. Humans make these leaps so often that they become second nature and utterly intuitive. Even children can make such predictions with a high rate of success.
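The gap between interpolation and extrapolation is easy to demonstrate. The following Python sketch uses an invented data set and a nearest-neighbor lookup as a stand-in for pattern matching; any real model is more sophisticated, but it shares the same limitation:

```python
# Hypothetical measurements the "model" has already seen.
known = {1: 10, 2: 20, 3: 30, 4: 40}

def predict(x):
    """Answer by reusing the closest value seen before (interpolation)."""
    nearest = min(known, key=lambda k: abs(k - x))
    return known[nearest]

print(predict(2.4))  # 20: reasonable, the answer sits among known points
print(predict(100))  # 40: the model can't leap beyond its data,
                     # whereas a person would spot the pattern and say 1000
```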
Currently, an AI can see patterns in data when they aren’t apparent to humans. The capability to see these patterns is what makes AI so valuable. Data manipulation and analysis are time-consuming, complex, and repetitive, but an AI can perform these tasks with aplomb. However, the data patterns are simply an output, not necessarily a solution. Humans rely on their five senses, empathy, creativity, and intuition to see beyond the patterns to a potential solution that resides outside what the data would lead one to believe. Chapter 18 discusses this part of the human condition in more detail.
As humans have become more knowledgeable, they have also become aware of variances in human senses that don’t translate well to an AI because replicating these senses in hardware isn’t truly possible now. For example, the ability to experience a single input through multiple senses (synesthesia; see https://www.mnn.com/health/fitness-well-being/stories/what-is-synesthesia-and-whats-it-like-to-have-it for details) is beyond an AI.
Describing synesthesia effectively is well beyond most humans. Before they can create an AI that can mimic some of the truly amazing effects of synesthesia, humans must first fully describe it and then create sensors that convert the experience into numbers that an AI can analyze. However, even then, the AI will see only the effects of the synesthesia, not the emotional impact. Consequently, an AI will never fully experience or understand synesthesia. (The “Shifting data spectrum” section of Chapter 8 discusses how an AI could augment human perception with a synesthetic-like experience.)
Oddly enough, some studies show that adults can be trained to have synesthetic experiences (https://www.nature.com/articles/srep07089), which makes the need for an AI to supply such experiences uncertain.
Computers don’t feel anything. That’s not necessarily a negative in every context, but it is here. Without the ability to feel, a computer can’t see things from the perspective of a human. It doesn’t understand being happy or sad, so it can’t react to these emotions unless a programmer provides a method for it to analyze facial expressions and other indicators and then act appropriately. Even so, such a reaction is a canned response, and one prone to error. Think about how many decisions you make based on emotional need rather than outright fact. The following sections discuss how the lack of empathy on the part of an AI keeps it from interacting with humans appropriately in many cases.
The idea of walking in someone else’s shoes means to view things from another person’s perspective and to feel something close to what the other person feels. No one truly feels precisely the same as someone else, but through empathy, people can get close. This form of empathy requires strong intrapersonal intelligence as a starting point, which an AI will never have unless it develops a sense of self (the singularity, as described at https://www.technologyreview.com/s/425733/paul-allen-the-singularity-isnt-near/). In addition, the AI would need to be able to feel, something that is currently not possible, and it would need to be open to sharing feelings with another entity (generally a human, today), which is also impossible. The current state of AI technology prohibits an AI from feeling or understanding any sort of emotion, which makes empathy impossible.
An AI builds a picture of you through the data it collects. It then creates patterns from this data and, using specific algorithms, develops output that makes it seem to know you — at least as an acquaintance. However, because the AI doesn’t feel, it can’t appreciate you as a person. It can serve you, should you order it to do so and assuming that the task is within its list of functions, but it can’t have any feeling for you.
When dealing with a relationship, people have to consider both intellectual attachment and feelings. The intellectual attachment often comes from a shared benefit between two entities. Unfortunately, no shared benefit exists between an AI and a human (or any other entity, for that matter). The AI simply processes data using a particular algorithm. Something can’t claim to love something else if an order forces it to make the proclamation. Emotional attachment must carry with it the risk of rejection, which implies self-awareness.
Humans can sometimes change an opinion based on something other than the facts. Even though the odds would say that a particular course of action is prudent, an emotional need makes another course of action preferable. An AI has no preferences. It therefore can’t choose another course of action for any reason other than a change in the probabilities, a constraint (a rule forcing it to make the change), or a requirement to provide random output.
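A short Python sketch makes those three levers explicit. Everything here is invented for illustration; the point is that none of the levers resembles a preference:

```python
import random

def choose(options, rule=None, randomize=False):
    """The only ways an AI's 'choice' changes: new probabilities,
    a human-written rule, or a demand for random output."""
    if rule is not None:
        return rule                          # a constraint overrides the numbers
    if randomize:
        return random.choice(list(options))  # chance, not preference
    return max(options, key=options.get)     # otherwise the numbers decide

routes = {"highway": 0.7, "back_roads": 0.3}
print(choose(routes))                     # highway
print(choose(routes, rule="back_roads"))  # back_roads, because we said so
print(choose(routes, randomize=True))     # either one, at random
```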
Faith is belief that something is true without proven facts to back up that belief. In many cases, faith takes the form of trust, which is belief in the sincerity of another person without any proof that the person is trustworthy. An AI can exhibit neither faith nor trust, which is part of the reason it can’t extrapolate knowledge. The act of extrapolation often relies on a hunch, based on faith, that something is true despite a lack of any data to support it. Because an AI lacks this ability, it can’t exhibit insight, a necessary requirement for human-like thought patterns.