Chapter 5

Seeing AI Uses in Computer Applications

IN THIS CHAPTER

check Defining and using AI in applications

check Using AI for corrections and suggestions

check Understanding potential AI errors

You have likely used AI in some form in many of the computer applications you rely on for your work. For example, talking to your smartphone requires the use of a speech recognition AI. Likewise, an AI filters out all that junk mail that could arrive in your Inbox. The first part of this chapter discusses AI application types, many of which will surprise you, and the fields that commonly rely on AI to perform a significant number of tasks. You also discover a source of limitations for creating AI-based applications, which helps you understand why sentient robots may not ever happen — or not with the currently available technology, at least.

However, regardless of whether AI ever achieves sentience, the fact remains that AI does perform a significant number of useful tasks. The two essential ways in which AI currently contributes to human needs are through corrections and suggestions. You don't want to take the human view of these two terms. A correction isn't necessarily a response to a mistake, and a suggestion isn't necessarily a response to a query. For example, consider a driving-assisted car (one in which the AI assists rather than replaces the driver). As the car moves along, the AI can make small corrections that allow for driving and road conditions, pedestrians, and a wealth of other issues in advance of an actual mistake. The AI takes a proactive approach to an issue that may or may not occur. Likewise, the AI can suggest to the human driver the path that presents the greatest likelihood of success, only to change the suggestion later based on new conditions. The second part of the chapter considers corrections and suggestions separately.

The third main part of the chapter discusses potential AI errors. An error occurs whenever the result differs from the expected result. The result may be successful yet still unexpected. Of course, outright errors occur, too, when an AI fails to provide a successful result; perhaps the result even runs counter to the original goal (possibly causing damage). If you get the idea that AI applications provide gray, rather than black-or-white, results, you're well on the road to understanding how AI modifies typical computer applications, which do, in fact, provide either an absolutely correct or an absolutely incorrect result.

Introducing Common Application Types

Just as the imagination of the programmer is the only limit on the kinds of procedural computer applications, AI applications could appear in any venue, for just about any purpose, most of which no one has thought of yet. The flexibility that AI offers means that some AI applications may appear in places other than those for which the programmer originally defined them. In fact, someday AI software may well write its own next generation (see https://www.technologyreview.com/s/603381/ai-software-learns-to-make-ai-software/ for details). However, to get a better idea of just what makes AI useful in applications, it helps to review the most common uses for AI today (and the potential pitfalls associated with those uses), as described in the sections that follow.

Using AI in typical applications

You might find AI in places where it's hard to imagine using one. For example, the smart thermostat that controls your home temperature could contain an AI if the thermostat is complex enough (see https://www.popsci.com/gadgets/article/2011-12/artificially-intelligent-thermostats-learns-adapt-automatically for details). The use of AI, even in these seemingly humble applications, really does make sense when the AI handles things that AI does best, such as tracking preferred temperatures over time to automatically create a temperature schedule (the sketch below shows the core idea). Here are some of the more typical uses for AI that you'll find in many places:
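Here, as a purely illustrative sketch, is how such schedule learning might look in Python. The class and method names (LearningThermostat, log_setting, build_schedule) are invented for this example, and a real device would use far more robust modeling:

    from collections import defaultdict
    from statistics import median

    class LearningThermostat:
        """Toy model: learn a per-hour temperature schedule from manual settings."""

        def __init__(self):
            # Maps hour of day (0-23) to the temperatures the user chose then.
            self.history = defaultdict(list)

        def log_setting(self, hour, temperature):
            """Record a temperature the user set manually at a given hour."""
            self.history[hour].append(temperature)

        def build_schedule(self, default=20.0):
            """Summarize the history into one target temperature per hour."""
            return {hour: median(self.history.get(hour, [default]))
                    for hour in range(24)}

    # Usage: after a few days of manual adjustments, the schedule reflects habits.
    thermostat = LearningThermostat()
    for temp in (18.0, 18.5, 18.0):    # three mornings at 7 a.m.
        thermostat.log_setting(7, temp)
    print(thermostat.build_schedule()[7])    # -> 18.0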

Realizing AI's wide range of fields

Individual applications define specific uses for AI. You can also find AI used more generically in entire fields of expertise. The following list contains the fields in which AI most commonly makes an appearance:

Considering the Chinese Room argument

In 1980, John Searle wrote an article entitled “Minds, Brains, and Programs” that was published in Behavioral and Brain Sciences. The emphasis of this article is a refutation of the Turing test, in which a computer can fool a human into thinking that the computer is a human (rather than a computer) through a series of questions (see the article at https://www.abelard.org/turpap/turpap.php for details). The basic assumption is that functionalism, or the capability to simulate specific characteristics of the human mind, isn't the same as actually thinking.

The Chinese Room argument, as this thought experiment is called, relies on two tests. In the first test, someone creates an AI that can accept Chinese characters, use a set of rules to create a response from those characters, and then output the response using Chinese characters. The questions concern a story: the AI must interpret the questions put to it such that the answers reflect actual story content and not just some random response. The AI is so good that no one outside the room can tell that an AI is performing the required tasks. The Chinese speakers are completely fooled into thinking that the AI really can read and understand Chinese.

In the second test, a human who doesn't speak Chinese is given three items that mimic what the computer does. The first is a script containing a large number of Chinese characters, the second is a story in Chinese, and the third is a set of rules for correlating the first item to the second. Someone sends in a set of questions, written in Chinese, which the human answers by using the rules to find the location in the story containing the answer, based on an interpretation of the Chinese characters. The answer is the set of Chinese characters that the rules correlate to the question. The human becomes so good at this task that no one can detect the human's complete lack of understanding of Chinese.

The purpose of the two tests is to demonstrate that the capability to use formal rules to produce a result (syntax) is not the same as actually understanding what you're doing (semantics). Searle postulated that syntax doesn't suffice for semantics, yet sufficiency is precisely what some AI implementers claim when creating rule-based engines, such as the Script Applier Mechanism (SAM); see https://eric.ed.gov/?id=ED161024 for details.
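A tiny sketch makes the syntax-versus-semantics gap concrete. The following toy program (not how SAM actually worked; the rule table is invented for this example) answers questions about a story purely by matching symbols, and nothing in it understands the story:

    # A toy "Chinese Room": canned rules map question patterns to answers.
    # The program manipulates symbols correctly yet understands nothing.
    RULES = {
        "who visited the restaurant?": "John visited the restaurant.",
        "what did john order?": "John ordered a hamburger.",
        "did john eat the hamburger?": "The story implies that he did.",
    }

    def respond(question):
        """Return a scripted answer, or admit that no rule matches."""
        return RULES.get(question.strip().lower(), "No rule covers that question.")

    print(respond("What did John order?"))   # correct output, zero comprehension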

The underlying issue is the difference between a strong AI, one that actually understands what it's trying to do, and a weak AI, one that simply follows rules. All AI today is weak AI; it doesn't actually understand anything. What you see is clever programming that simulates thought by using rules (such as those implicit in algorithms). Of course, much controversy surrounds the idea that no matter how complex machines become, they won't actually develop brains, which means that they'll never understand. Searle's assertion is that AI will remain weak. You can see a discussion of this topic at http://www.iep.utm.edu/chineser/. The arguments and counterarguments are worth reading because they provide significant insights into what truly comes into play when creating an AI.

Seeing How AI Makes Applications Friendlier

You can view the question of application friendliness addressed by AI in a number of ways. At its most basic level, an AI can anticipate user input. For example, when the user has typed just a few letters of a particular word, the AI guesses the remaining characters (a minimal sketch of this technique appears below). By providing this service, the AI accomplishes several goals:
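As a minimal sketch of that first level of friendliness, here's how simple word completion can work. The word-frequency table is invented for illustration; real systems rely on much richer language models:

    # Toy autocomplete: suggest the most frequently used words with a prefix.
    WORD_COUNTS = {"the": 120, "their": 45, "there": 60, "then": 30, "thesis": 2}

    def complete(prefix, limit=3):
        """Return up to `limit` candidate completions, most frequent first."""
        matches = [w for w in WORD_COUNTS if w.startswith(prefix.lower())]
        return sorted(matches, key=WORD_COUNTS.get, reverse=True)[:limit]

    print(complete("the"))   # -> ['the', 'there', 'their']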

An AI can also learn from previous user input, reorganizing suggestions to match the user's method of performing tasks. This next level of interaction falls within the realm of suggestions described in the “Making Suggestions” section, later in this chapter. Suggestions can also include providing the user with ideas that the user might not have considered otherwise.

warning Even in the area of suggestions, humans may begin to think that the AI is thinking, but it isn’t. The AI is performing an advanced form of pattern matching as well as analysis to determine the probability of the need for a particular input. The “Considering the Chinese Room argument” section, earlier in this chapter, discusses the difference between weak AI, the kind found in every application today, and strong AI, something that applications may eventually achieve.

Using an AI also means that humans can now exercise other kinds of intelligent input. Voice is an almost overused example, but it remains one of the more common methods of intelligent input. However, even if an AI lacks the full range of senses described in Chapter 4, it can accept a wide variety of nonverbal intelligent inputs. An obvious choice is visual input, such as recognizing the face of its owner or a threat based on facial expression. The input could also come from a monitor, perhaps one checking the user's vital signs for potential problems. In fact, an AI could use an enormous number of intelligent inputs, most of which haven't even been invented yet.

Currently, applications generally consider just these first three levels of friendliness. As AI intelligence increases, however, it becomes essential for an AI to exhibit Friendly Artificial Intelligence (FAI) behaviors consistent with an Artificial General Intelligence (AGI) that has a positive effect on humanity. AI has goals, but those goals may not align with human ethics, and the potential for misalignment causes angst today. An FAI would include logic to ensure that the AI’s goals remain aligned with humanity’s goals, similar to the three laws found in Isaac Asimov’s books (https://www.auburn.edu/~vestmon/robotics.html), which you find discussed in more detail in Chapter 12. However, many say that the three laws are just a good starting point (http://theconversation.com/after-75-years-isaac-asimovs-three-laws-of-robotics-need-updating-74501) and that we need further safeguards.

tip Of course, all this discussion about laws and ethics could prove quite confusing and difficult to define. A simple example of FAI behavior would be that the FAI would refuse to disclose personal user information unless the recipient had a need to know. In fact, an FAI could go even further by pattern matching human input and locating potential personal information within it, notifying the user of the potential for harm before sending the information anywhere. The point is that an AI can significantly change how humans view applications and interact with them.
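As a hedged sketch of that idea, pattern matching for personal information can start with nothing more than a few regular expressions. The patterns below are simplistic placeholders; a production FAI would need far more than this:

    import re

    # Toy detector: flag strings that look like personal information
    # before the application sends them anywhere.
    PATTERNS = {
        "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "US phone number": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
        "SSN-like number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def find_personal_info(text):
        """Return a list of (label, match) pairs worth warning the user about."""
        return [(label, m.group()) for label, rx in PATTERNS.items()
                for m in rx.finditer(text)]

    print(find_personal_info("Reach me at jane@example.com or 555-123-4567."))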

Performing Corrections Automatically

Humans constantly correct everything. It isn't a matter of everything being wrong. Rather, it's a matter of making everything slightly better (or at least trying to). Even when humans manage to achieve just the right level of rightness at a particular moment, a new experience brings that level of rightness into question, because the person now has additional data by which to judge what constitutes right in a particular situation. To fully mimic human intelligence, AI must also have this capability to constantly correct the results it provides, even when those results are already positive. The following sections discuss the issue of correctness and examine how automated corrections sometimes fail.

Considering the kinds of corrections

When most people think about AI and correction, they think about the spell checker or grammar checker. A person makes a mistake (or at least the AI thinks so) and the AI corrects this mistake so that the typed document is as accurate as possible. Of course, humans make lots of mistakes, so having an AI to correct them is a good idea.

Corrections can take all sorts of forms and don't necessarily mean that an error has occurred or will occur in the future. For example, a car could assist a driver by making constant lane position corrections. The driver might be well within the limits of safe driving, but the AI could provide these micro corrections to help ensure that the driver remains safe (the sketch below shows the kind of correction involved).
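A minimal sketch of such a micro correction is a proportional controller that nudges the steering in opposition to the car's offset from the lane center. The gain and limit values here are invented for illustration, and a real driver-assist system fuses many sensor inputs:

    # Toy lane keeping: nudge steering in proportion to lane-center offset.
    def micro_correction(offset_m, gain=0.5, max_steer=0.05):
        """Return a small steering adjustment (radians) opposing the offset."""
        steer = -gain * offset_m
        return max(-max_steer, min(max_steer, steer))   # clamp to a gentle nudge

    for offset in (0.02, -0.10, 0.30):    # meters from lane center
        print(f"offset {offset:+.2f} m -> steer {micro_correction(offset):+.3f} rad")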

Taking the correction scenario further, suppose that the car in front of the AI-equipped car stops suddenly because of a deer in the road. The driver of the current car hasn't committed any sort of error. However, the AI can react faster than the driver and acts to stop the car as quickly and as safely as possible to address the now-stopped car in front of it.

Seeing the benefits of automatic corrections

When an AI sees a need for a correction, it can either ask the human for permission to make the correction or make the change automatically. For example, when someone uses speech recognition to type a document and makes an error in grammar, the AI should ask permission before making a change because the human may have actually meant the word or the AI may have misunderstood what the human meant.

However, sometimes it’s critical that the AI provide a robust enough decision-making process to perform corrections automatically. For example, when considering the braking scenario from the previous section, the AI doesn’t have time to ask permission; it must apply the brake immediately or the human could die from the crash. Automatic corrections have a definite place when working with an AI, assuming that the need for a decision is critical and the AI is robust.
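One way to picture this decision-making process is as a simple policy that weighs criticality against confidence. This sketch is only a conceptual illustration (the threshold and labels are invented), not how any production system decides:

    # Toy policy: apply a correction automatically only when the situation is
    # time critical and the AI is confident; otherwise, ask the human first.
    def correction_policy(confidence, time_critical, threshold=0.9):
        if time_critical and confidence >= threshold:
            return "apply automatically"
        if confidence >= threshold:
            return "suggest and ask permission"
        return "do nothing"

    print(correction_policy(0.99, time_critical=True))     # emergency braking
    print(correction_policy(0.95, time_critical=False))    # grammar fix
    print(correction_policy(0.40, time_critical=False))    # too unsure to act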

Understanding why automated corrections don’t work

As related in the “Considering the Chinese Room argument” section, earlier in this chapter, an AI can't actually understand anything. Without understanding, it has no capability to compensate for unforeseen circumstances. In this case, the unforeseen circumstance is an unscripted event, one that the AI can't solve by accumulating additional data or by relying on other mechanical means. A human can solve the problem because a human understands the basis of the problem and usually enough of the surrounding events to define a pattern that can help form a solution. In addition, human innovation and creativity provide solutions where none are obvious through other means. Given that an AI currently lacks both innovation and creativity, it is at a disadvantage in specific problem domains.

To put this issue into perspective, consider the case of a spelling checker. A human types a perfectly legitimate word that doesn't appear in the dictionary used by the AI for making corrections. The AI often substitutes a word that looks close to the specified word but is still incorrect. Even after the human checks the document, retypes the correct word, and then adds it to the dictionary, the AI is still apt to make a mistake. For example, the AI could treat the abbreviation CPU differently from cpu because the former is in uppercase and the latter appears in lowercase (the sketch below reproduces the problem). A human would see that the two abbreviations are the same and that, in the second case, the abbreviation is correct but may need to appear in uppercase instead.
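Here's a toy spelling checker that reproduces the CPU/cpu trap. The dictionary is invented for illustration; the point is only that a literal, case-sensitive comparison keeps flagging a word the user already approved:

    # Toy spell checker that treats case literally, reproducing the CPU/cpu trap.
    DICTIONARY = {"the", "processor", "overheats", "CPU"}   # user added "CPU"

    def flag_unknown(words):
        """Flag words missing from the dictionary, comparing case-sensitively."""
        return [w for w in words if w not in DICTIONARY]

    print(flag_unknown("the cpu overheats".split()))   # -> ['cpu'], still flagged
    # A human sees that cpu and CPU are the same abbreviation; this checker can't.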

Making Suggestions

A suggestion is different from a command. Even though some humans seem to miss the point entirely, a suggestion is simply an idea put forth as a potential solution to a problem. Making a suggestion implies that other solutions could exist and that accepting a suggestion doesn't mean automatically implementing it. In fact, a suggestion is only an idea; it may not even work. Of course, in a perfect world, all suggestions would be good ones, at least possible paths to a correct output, which is seldom the case in the real world. The following sections describe the nature of suggestions as they apply to an AI.

Getting suggestions based on past actions

The most common way for an AI to create a suggestion is to collect past actions as events and then use those actions as a dataset for making new suggestions. For example, someone purchases a Half-Baked Widget every month for three months. It makes sense to suggest buying another one at the beginning of the fourth month. In fact, a truly smart AI makes the suggestion at the right time of the month. If the user makes the purchase between the third and the fifth day of the month for the first three months, it pays to start making the suggestion on the third day of the month and then move on to something else after the fifth day (the sketch below shows the timing logic).
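A minimal sketch of that timing logic follows. The data and the should_suggest function are invented for this example; a real system would model far more than day-of-month:

    # Toy suggestion timing: learn the day-of-month window in which a customer
    # usually buys an item, then suggest it only inside that window.
    purchase_days = [3, 5, 4]   # day of month for the last three Widget purchases

    def should_suggest(today_day, history, slack=0):
        """Suggest only between the earliest and latest historical purchase day."""
        return min(history) - slack <= today_day <= max(history) + slack

    for day in (2, 3, 4, 6):
        print(day, should_suggest(day, purchase_days))   # True only on days 3-5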

Humans output an enormous number of clues while performing tasks. Unlike humans, an AI actually pays attention to every one of these clues and can record them in a consistent manner. That consistent collection of action data enables an AI to provide suggestions based on past actions with a high degree of accuracy in many cases.

Getting suggestions based on groups

Another common way to make suggestions relies on group membership. In this case, group membership need not be formal. A group could consist of a loose association of people who have some minor need or activity in common. For example, a lumberjack, a store owner, and a dietician could all buy mystery books. Even though they have nothing else in common, not even location, the fact that all three like mysteries makes them part of a group. An AI can easily spot patterns like this that might elude humans, so it can make good buying suggestions based on these rather loose group affiliations.

Groups can include ethereal connections that are temporary at best. For example, all the people who flew flight 1982 out of Houston on a certain day could form a group. Again, no connection whatsoever exists between these people except that they appeared on a specific flight. However, by knowing this information, an AI could perform additional filtering to locate people within the flight who like mysteries. The point is that an AI can provide good suggestions based on group affiliation even when the group is difficult (if not impossible) to identify from a human perspective.
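As a sketch of both ideas, the following fragment intersects a loose interest group with a temporary group to find suggestion targets. The group members are, of course, invented:

    # Toy group-based suggestion: intersect a temporary group (one flight's
    # passengers) with a loose interest group (mystery buyers), then suggest.
    mystery_buyers = {"lumberjack", "store_owner", "dietician", "pilot"}
    flight_1982_passengers = {"dietician", "pilot", "tourist"}

    # People in both groups are good targets for a mystery-novel suggestion.
    targets = mystery_buyers & flight_1982_passengers
    print(sorted(targets))   # -> ['dietician', 'pilot']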

Obtaining the wrong suggestions

Anyone who has spent time shopping online knows that websites often provide suggestions based on various criteria, such as previous purchases. Unfortunately, these suggestions are often wrong because the underlying AI lacks understanding. When someone makes a once-in-a-lifetime purchase of a Super-Wide Widget, a human would likely know that the purchase is indeed once in a lifetime because it’s extremely unlikely that anyone will need two. However, the AI doesn’t understand this fact. So, unless a programmer specifically creates a rule specifying that Super-Wide Widgets are a once-in-a-lifetime purchase, the AI may choose to keep recommending the product because sales are understandably small. In following a secondary rule about promoting products with slower sales, the AI behaves according to the characteristics that the developer provided for it, but the suggestions it makes are outright wrong.
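The following toy recommender reproduces that failure. The catalog, threshold, and rule are invented for illustration; note how nothing in the code connects the slow-sellers rule to the customer's purchase history:

    # Toy recommender: a "promote slow sellers" rule, with no rule that marks
    # Super-Wide Widgets as a once-in-a-lifetime purchase.
    catalog = {"Super-Wide Widget": {"weekly_sales": 2},
               "Half-Baked Widget": {"weekly_sales": 90}}
    already_bought = {"Super-Wide Widget"}

    def recommend(catalog, already_bought, slow_threshold=10):
        """Recommend slow-selling items, even ones the customer can't reuse."""
        return [item for item, stats in catalog.items()
                if stats["weekly_sales"] < slow_threshold]   # bug: ignores history

    print(recommend(catalog, already_bought))   # -> ['Super-Wide Widget'] again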

Besides rule-based and logic errors, suggestions can become corrupted through data issues. For example, a GPS could make a suggestion based on the best possible data for a particular trip, but road construction might make the suggested path untenable because the road is closed. Of course, many GPS applications do consider road construction, but they sometimes don't consider other issues, such as a sudden change in the speed limit or weather conditions that make a particular path treacherous. Humans can overcome gaps in the data through innovation, such as by taking a less traveled road or understanding the meaning of detour signs.

When an AI manages to get past the logic, rule, and data issues, it sometimes still makes bad suggestions because it doesn’t understand the correlation between certain datasets in the same way a human does. For example, the AI may not know to suggest paint after a human purchases a combination of pipe and drywall when making a plumbing repair. The need to paint the drywall and the surrounding area after the repair is obvious to a human because a human has a sense of aesthetics that the AI lacks. The human makes a correlation between various products that isn’t obvious to the AI.

Considering AI-based Errors

An outright error occurs when the result of a process, given specific inputs, isn't correct in any form. The answer doesn't provide a suitable response to a query. It isn't hard to find examples of AI-based errors. For example, a recent BBC News article describes how a difference of a single pixel in a picture fools a particular AI (see the article at http://www.bbc.com/news/technology-41845878). You can read more about the impact of adversarial attacks on AI at https://blog.openai.com/adversarial-example-research/. The Kaspersky Lab Daily article at https://www.kaspersky.com/blog/ai-fails/18318/ provides additional occurrences of situations in which an AI failed to provide the correct response. The point is that AI still has a high error rate in some circumstances, and the developers working with the AI are often unsure why the errors even occur.
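To make the idea of an adversarial example concrete, here's a deliberately trivial sketch. The "classifier" is a stand-in (a brightness threshold, not a neural network), but the search idea, perturbing one pixel at a time until the label flips, is the same in spirit:

    import random

    # Toy single-pixel attack on a deliberately simple stand-in "classifier".
    def classify(image):
        """Stand-in model: call an image 'bright' if its mean exceeds 0.5."""
        flat = [p for row in image for p in row]
        return "bright" if sum(flat) / len(flat) > 0.5 else "dark"

    def one_pixel_attack(image, tries=1000):
        """Randomly perturb single pixels, looking for one that flips the label."""
        original = classify(image)
        for _ in range(tries):
            candidate = [row[:] for row in image]
            r, c = random.randrange(len(image)), random.randrange(len(image[0]))
            candidate[r][c] = random.random()
            if classify(candidate) != original:
                return (r, c), candidate
        return None, image

    image = [[0.5, 0.52], [0.49, 0.51]]    # mean 0.505 -> 'bright'
    pixel, adversarial = one_pixel_attack(image)
    print("flipped by changing pixel:", pixel)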

The sources of errors in AI are many. However, as noted in Chapter 1, AI can’t even emulate all seven forms of human intelligence, so mistakes are not only possible but also unavoidable. Much of the material in Chapter 2 focuses on data and its impact on AI when the data is flawed in some way. In Chapter 3, you also find that even the algorithms that AI uses have limits. Chapter 4 points out that an AI doesn’t have access to the same number or types of human senses. As the TechCrunch article at https://techcrunch.com/2017/07/25/artificial-intelligence-is-not-as-smart-as-you-or-elon-musk-think/ points out, many of the seemingly impossible tasks that AI performs today are the result of using brute-force methods rather than anything even close to actual thinking.

A major problem that's becoming more and more evident is that corporations often gloss over or even ignore problems with AI. The emphasis is on using an AI to reduce costs and improve productivity, which may not be attainable. The Bloomberg article at https://www.bloomberg.com/news/articles/2017-06-13/the-limits-of-artificial-intelligence discusses this issue in some detail. One of the more interesting recent examples of a corporate entity going too far with an AI is Microsoft's Tay (see the article at https://www.geekwire.com/2016/microsoft-chatbot-tay-mit-technology-fails/), a chatbot that users taught to make racist, sexist, and pornographic remarks within hours of its public release.

remember The valuable nugget of truth to take from this section isn't that AI is unreliable or unusable. In fact, when coupled with a knowledgeable human, AI can make its human counterpart faster and more efficient. AI can enable humans to reduce common or repetitive errors. In some cases, AI mistakes can even provide a bit of humor in the day. However, AI doesn't think, and it can't replace humans in many dynamic situations today. AI works best when a human reviews its decisions or the environment is so static that the likelihood of good results is predictably high (well, as long as a human doesn't choose to confuse the AI).