Chapter 18
IN THIS CHAPTER
Interacting with humans
Being creative
Using intuition
This book has spent a lot of time telling you about how AI and humans differ and demonstrating that humans have absolutely nothing to worry about. Yes, some jobs will go away, but as described in Chapter 17, the use of AI will actually create a wealth of new jobs — most of them a lot more interesting than working on an assembly line. The new jobs that humans will have rely on the areas of intelligence (as described in Chapter 1) that an AI simply can’t master. In fact, the inability of AI to master so many areas of human thought will keep many people in their current occupations, which is the point of this chapter.
Robots already handle a small amount of human interaction and will likely take on more such tasks in the future. However, if you take a good look at the applications that robots are used in, they're essentially doing things that are ridiculously boring: acting like a kiosk that directs people where to go, serving as an alarm clock that reminds the elderly to take their medications, and so on. Most human interaction isn't this simple. The following sections look at some of the more demanding forms of human interaction, activities that an AI has no possibility whatsoever of mastering.
Spend some time at a grade school and watch the teachers herd the children. You'll be amazed. Somehow, teachers manage to get all the kids from Point A to Point B with a minimum of fuss, apparently by sheer force of will. Even so, one child will need one level of attention while another child needs another level. When things go wrong, the teacher might end up having to deal with several problems at the same time. All these situations would overwhelm an AI today because an AI relies on cooperative human interaction. Think for a minute about the reaction that Alexa or Siri would have to a stubborn child (or try to simulate such a reaction with your own unit). It simply won't work. An AI can, however, still help a teacher with the routine parts of the job.
A robot can lift a patient, saving a nurse’s back. However, an AI can’t make a decision about when, where, and how to lift the patient because it can’t judge all the required, nonverbal patient input correctly or understand patient psychology, such as a penchant for telling mistruths (see the “Considering the Five Mistruths in Data” section of Chapter 2). An AI could ask the patient questions, but probably not in a manner best suited to elicit useful answers. A robot can clean up messes, but it’s unlikely to do so in a manner that preserves patient dignity and helps the patient feel cared for. In short, a robot is a good hammer: great for performing hard, coarse tasks, but not particularly gentle or caring.
You may think that your AI is a perfect companion. After all, it never talks back, is always attentive, and never leaves you for someone else. You can tell it your deepest thoughts and it won't laugh. In fact, an AI such as Alexa or Siri may well seem like the perfect companion, as depicted in the movie Her (https://www.amazon.com/exec/obidos/ASIN/B00H9HZGQ0/datacservip0f-20/). The only problem is that an AI doesn't actually make a very good companion at all. What it really does is provide a browser application with a voice. Anthropomorphizing the AI doesn't make it real.
The problem with having an AI address personal needs is that it doesn't understand the concept of a personal need. An AI can look for a radio station, find a news article, make product purchases, record an appointment, tell you when it's time to take medication, and even turn your lights on and off. However, it can't tell you when a thought is a really bad idea and likely to cause you a great deal of woe. To obtain useful input in situations that offer no rules to follow, and in which the person talking with you needs real-life experience to offer anything approximating an answer, you really need a human. That's why people like counselors, doctors, nurses, and even that lady you talk with at the coffee shop are necessary. Some of these people are paid monetarily; others just depend on you to listen when they need help in turn. Human interaction is always required when addressing personal needs that truly are personal.
People with special needs require a human touch. Often, the special need turns out to be a special gift, but only when the caregiver recognizes it as such. Someone with a special need might be fully functional in all but one way, and it takes creativity and imagination to discover a means of getting over that one hurdle. Finding a way to use the special need in a world that doesn't accept special needs as normal is even harder. For example, most people wouldn't consider color blindness (which is actually color shifting) an asset when creating art. However, someone came along and turned it into an advantage (https://www.artsy.net/article/artsy-editorial-the-advantages-of-being-a-colorblind-artist).
An AI might be able to help special-needs people in specific ways. For example, a robot can help someone perform their occupational or physical therapy to become more mobile. The absolute patience of the robot would ensure that the person would receive the same even-handed help every day. However, it would take a human to recognize when the occupational or physical therapy isn’t working and requires a change.
As noted in Table 1-1, robots can't create. It's essential to view the act of creating as one of developing new patterns of thought. A good deep-learning application can analyze existing patterns of thought, recombine those patterns into new versions of things that have happened before, and produce what appears to be original thought, but no creativity is involved. What you're seeing is math and logic at work analyzing what is, rather than defining what could be. With this limitation of AI in mind, the following sections describe the creation of new things, an area where humans will always excel.
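To see why this kind of output isn't creativity, consider a minimal sketch of a pattern-based generator, written here in Python purely for illustration (the training sentence and the simple word-pair approach are invented for this example, not taken from any real product). The program can recombine word transitions it has observed, but it can never emit a word or a transition that wasn't already in its training data.

    import random

    # A tiny pattern-based "creator": it learns which word follows which
    # and then recombines those observed transitions. It never produces
    # a word or a transition that wasn't already in its training text.
    training_text = "the cat sat on the mat the dog sat on the rug"
    words = training_text.split()

    # Build a table of observed transitions (word -> possible next words).
    transitions = {}
    for current, nxt in zip(words, words[1:]):
        transitions.setdefault(current, []).append(nxt)

    def generate(start, length=8):
        """Recombine observed patterns into a 'new' sentence."""
        output = [start]
        for _ in range(length - 1):
            choices = transitions.get(output[-1])
            if not choices:          # no known pattern to follow
                break
            output.append(random.choice(choices))
        return " ".join(output)

    print(generate("the"))   # for example: "the cat sat on the rug the dog"

Everything the generator "writes" is a rearrangement of what it was given; defining what could be, rather than recombining what is, remains a human job.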
When people talk about inventors, they think about people like Thomas Edison, who held 2,332 patents worldwide (1,093 in the United States alone) for his inventions (http://www.businessinsider.com/thomas-edisons-inventions-2014-2). You may still use one of his inventions, the lightbulb, but many of his other creations, such as the phonograph, changed the world. Not everyone is an Edison. Some people are like Bette Nesmith Graham (http://www.women-inventors.com/Bette-Nesmith-Graham.asp), who invented Whiteout (also known as Liquid Paper and by other names) in 1956. At one point, her invention appeared in every desk drawer on the planet as a means for correcting typing errors. Both of these people did something that an AI can't do: create a new thought pattern in the form of a physical entity.
Style and presentation make a Picasso (https://www.pablopicasso.org/) different from a Monet (https://www.claudemonetgallery.org/). Humans can tell the difference because we see the patterns in these artists' methods: everything from the choice of canvas, to the paint, to the style of presentation, to the topics displayed. An AI can see these differences, too. In fact, given the precision with which an AI can perform analysis and the greater selection of sensors at its disposal (in most cases), an AI can probably describe the patterns of artistry better than a human can, and mimic those patterns in output that the artist never provided. However, the AI advantage ends there.
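As a rough illustration of the kind of pattern description an AI handles well, the following Python sketch reduces two paintings to crude color-distribution signatures and measures how far apart they are. The file names picasso.jpg and monet.jpg are placeholders for any two images you have locally, and the example assumes the Pillow library is installed; it's a toy feature extractor, not a serious style model.

    from PIL import Image

    def color_signature(path, bins=8):
        """Reduce an image to a crude color-distribution signature."""
        img = Image.open(path).convert("RGB").resize((128, 128))
        counts = [0] * (bins ** 3)
        step = 256 // bins
        for r, g, b in img.getdata():
            counts[(r // step) * bins * bins + (g // step) * bins + (b // step)] += 1
        total = sum(counts)
        return [c / total for c in counts]

    def distance(sig_a, sig_b):
        """Simple L1 distance between two color signatures."""
        return sum(abs(a - b) for a, b in zip(sig_a, sig_b))

    # Placeholder file names -- substitute any two paintings you have locally.
    picasso = color_signature("picasso.jpg")
    monet = color_signature("monet.jpg")
    print("Style distance:", distance(picasso, monet))

Signatures like these capture what is already on the canvas; deciding to paint something no one has painted before is the part the math can't supply.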
Humans constantly extend the envelope of what is real by making the unreal possible. At one time, almost no one thought that humans would fly in heavier-than-air machines. In fact, experiments tended to support the theory that even attempting to fly was foolish. Then came the Wright brothers (http://www.history.com/topics/inventions/wright-brothers). Their flight at Kitty Hawk changed the world. However, it's important to realize that the Wright brothers merely made the unreal thoughts of many people (including themselves) real. An AI would never produce an unreal output, much less turn it into reality. Only humans can do this.
Intuition is a direct perception of a truth, independent of any reasoning process. It’s the truth of illogic, making it incredibly hard to analyze. Humans are adept at intuition, and the most intuitive people usually have a significant advantage over those who aren’t intuitive. AI, which is based on logic and math, lacks intuition. Consequently, an AI usually has to plod through all the available logical solutions and eventually conclude that no solution to a problem exists, even when a human finds one with relative ease. Human intuition and insight often play a huge role in making some occupations work, as described in the following sections.
If you watch fictional crime dramas on television, you know that the investigator often finds one little fact that opens the entire case, making it solvable. Real-world crime-solving works differently. Human detectives rely on fully quantifiable knowledge to perform their task, and sometimes the criminals make the job all too easy as well. Procedures and policies, digging into the facts, and spending hours just looking at all the evidence play important roles in solving crime. However, sometimes a human will make that illogical leap that suddenly makes all the seemingly unrelated pieces fit together.
A detective's work involves dealing with a wide range of issues. In fact, some of those issues don't even involve illegal activities. For example, a detective may simply be looking for someone who seems to be missing. Perhaps the person even has a good reason for not wanting to be found. The point is that many of these investigations involve looking at the facts in ways that an AI would never think to look, because doing so requires a leap, an extension of intelligence that an AI doesn't have. The phrase thinking outside the box comes to mind.
An AI monitors situations by using previous data as the basis for future decisions. In other words, the AI uses patterns to make predictions. This approach works fine for most situations, which means that an AI can actually predict what will happen in a particular scenario with a high degree of accuracy. However, sometimes a situation occurs in which the pattern doesn't fit and the data doesn't seem to support the conclusion. Perhaps the situation currently lacks supporting data, which happens all the time. In these situations, human intuition is the only fallback. In an emergency, relying only on an AI to work through a scenario is a bad idea. Although the AI will try the tested solution, a human can think outside the box and come up with an alternative idea.
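To make the pattern-matching limitation concrete, here is a minimal Python sketch of this style of prediction. The sensor readings, outcomes, and the 3.0 gap threshold are all made-up values for illustration: the "AI" can only answer by matching a new reading against situations it has already seen, and the best it can do with something genuinely novel is hand the case to a human.

    # A minimal sketch of pattern-based prediction: the "AI" predicts by
    # matching new readings against situations it has already seen. When
    # a reading falls far outside every known pattern, the best it can do
    # is flag the case for a human. (All numbers are made-up examples.)
    history = {
        # sensor reading -> what happened next
        20.0: "normal operation",
        21.5: "normal operation",
        35.0: "minor overheating",
        36.2: "minor overheating",
    }

    def predict(reading, max_gap=3.0):
        nearest = min(history, key=lambda known: abs(known - reading))
        if abs(nearest - reading) > max_gap:
            return "no matching pattern -- refer to a human"
        return history[nearest]

    print(predict(21.0))   # matches a known pattern: "normal operation"
    print(predict(80.0))   # nothing like this was ever seen before

A human facing that last reading might suspect a jammed sensor, a fire, or something else entirely; the lookup table has no way to make that leap.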
An AI will never be intuitive. Intuition runs counter to every rule that is currently used to create an AI. Consequently, some people have decided to create Artificial Intuition (AN) (see http://www.artificial-intuition.com/ as an example). The first issue is that, in reading the materials that support AN, it quickly becomes obvious that some sort of magic is supposed to take place (that is, the inventors are engaged in wishful thinking) because the theory simply doesn't match the proposed implementation.
The second issue is that an AI, like every other computer program, essentially relies on math to perform tasks. The AI understands nothing. The "Considering the Chinese Room argument" section of Chapter 5 discusses just one of the huge problems with the whole idea of an AI's capacity for understanding. The point is that intuition is illogical, which means that humans don't even understand the basis for it. Without that understanding, humans can't create a system that mimics intuition in any meaningful way.