10 AI Technology

This book has been about the science of AI, that is, the study of a certain observable natural phenomenon: intelligent behavior as seen, for example, in the ability that people have to answer the Winograd schema questions of chapter 4.

But an ability to answer Winograd schema questions has no real practical value. Those less interested in questions of science might well wonder why we bother. What exactly is the application? How can we make any use of what we learn? There is another side of AI that attracts much more attention (and money), and that is the attempt to deliver useful technology. In other words, this is the study of the design and creation of intelligent machinery, what we might call “building an AI.”

Of course we have seen limited versions of AI technology for some time. Although the products we now call “smart” are far from intelligent in the way we apply the term to people, it has become rather routine to see machines doing things that seemed far off even ten years ago, such as talking to your phone and expecting it to do reasonable things, or sitting in a car that can drive itself on ordinary roads. But we want now to consider a future AI technology that exhibits the sort of full-fledged general-purpose intelligence we see in people.

Let me start by putting my cards on the table. I am not really an expert on AI technology, nor much of a fan for that matter, at least for my own needs. What I look for in technology—beyond obvious things like economy, durability, eco-friendliness—is reliability and predictability. I want to be able to use a technology by learning it and then forgetting about it. If I have to second-guess what the technology is going to do, or worry that I may be misusing it or asking it to do too much, then it’s not for me. (A good example was the automated assistant “Clippy” that used to pop up unexpectedly in Microsoft Word, something I found endlessly annoying.)

But AI technology aficionados might say this to me: do you not want an intelligent and capable assistant whose only goal is to help you, who would know all your habits and quirks, never get tired, never complain? I answer this question with another: will my AI assistant be reliable and predictable?

When it comes to technology, I would much rather have a tool with reliable and predictable limitations than a more capable but unpredictable one. (Of course I don’t feel this way about the people I deal with, but we are talking about technology here.) And I do realize that there will be people for whom the use of a technology, even a flawed technology, will be more of a need than a choice, in which case personal preferences may be much less relevant.

With these caveats out of the way, let us proceed.

The future

When it comes to future AI technology, maybe the first question to ask is this: will we ever build a computer system that has some form of common sense, that knows a lot about its world, and that can deal intelligently with both routine and unexpected situations as well as people can? It’s a good question, and I wish I knew the answer. I think it is foolhardy to even try to make predictions on matters like this.

Arthur C. Clarke’s First Law says the following:

When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.

So no matter what the distinguished but elderly scientist says about the possibility of AI, it follows from this law that AI is almost certainly possible. And this is what I believe. Nothing we have learned to date would suggest otherwise.

But if AI really is possible, why haven’t we done it yet? The year 2001 has come and gone and we are still nowhere near the sort of intelligent computer seen in the movie 2001, among many others. (One definition of AI is that it is the study of how to make computers behave the way they do in the movies!)

I think there are two main reasons. The first is obvious: we are behind what earlier enthusiasts predicted (including Marvin Minsky, who was a consultant on 2001), because we don’t yet fully understand what is needed. There are still some major scientific hurdles to overcome. The representation and reasoning questions discussed in chapter 9 remain largely unanswered. Furthermore, as noted at the end of that chapter, even if all the scientific hurdles can be cleared, there will remain an enormous engineering challenge in getting a machine to know enough to behave truly intelligently.

The second reason is perhaps more arguable: there may be no AI to date because there is just not enough demand, given the effort it would take. This may sound ridiculous, but there is an analogous situation in the area of computer chess.

People started writing programs to play chess at the very dawn of the computer age. Turing himself tried his hand at one. These programs played well enough, but nowhere near the level of a grandmaster. The programs got better, however; by 1997, a program from IBM called DEEP BLUE was able to beat the world champion at the time, Garry Kasparov. And since then, the programs have continued to improve.

But consider how this advanced computer-chess technology is now being used. Do we see regular competitions between people and the very best computer programs? As it turns out, there is little demand for man–machine tournaments. In fact, there is almost no demand for computer players. State-of-the-art computer chess programs are not used as contestants in competitions; they are used by human chess players who themselves will be contestants and need to practice. The computer chess program is used as a practice tool, a sparring partner.

Similarly, if you read chess columns in the entertainment section of a newspaper, you will see that they rarely report on games involving computer players. Nobody seems to care. When reporting a game between humans, the column might say something like: “An interesting move. The computer would suggest moving the knight instead”; or maybe: “The computer proposes taking the pawn as the best choice.” The chess-playing program is again a tool used to confirm or reject a line of play. The chess programs are not even worthy of being identified by name. When it comes to actually playing chess, it’s as if the people are the creative artists, and the computers are bookkeepers in the background. People will make all the decisions, but every now and then, it may be useful to check with a computer on some of the more routine facts and figures about how the game might turn out.

So while an autonomous computer player is definitely a technological possibility, and would certainly be a worthy adversary to all but a handful of current players, nobody seems to want one.

Here is one way to understand what is going on. Most of us have played some sort of competitive game against a computer program. (It does not have to be chess; it might be backgammon or poker or Go.) If you have ever felt that you can simply walk away from the game halfway through, or hit the “reset” button after you made a mistake, then you already have a clear sense of why there is so little interest in computerized chess-playing opponents.

The main issue is that we may not be fully engaged playing against a computer. The game does not matter the way it does when playing another person. Playing a person sets up a rivalry: the game is a fight, thrilling to win, humbling to lose. The emotional investment in the outcome of a game with a computer is much less. Nothing is at stake. The program may play better chess than we do, but so what? It can multiply better than we can too, but there is no rivalry there either.

So in the end, computer chess players, that is, computer-chess programs intended to be used as autonomous players, are a technology that has no real market.

My argument, then, is that AI in general may turn out to be just like chess. We may indeed want various aspects of intelligence in future computer systems, but have no need for—or willingness to pay for—an integrated autonomous system with full human-level intelligence. Even if all the scientific problems of AI can be resolved and all that is left is a large engineering project, there may just be insufficient demand to warrant the immense cost and effort it would take. Instead, the market may prefer to have computer robots of various sorts that are not fully intelligent, but do certain demanding tasks well, including tasks that we still do not know how to automate.

And this is in fact what we see in AI development today, with most practitioners looking for AI solutions to challenging practical problems in areas like housework, medicine, exploration, and disaster recovery, while avoiding the considerable effort it would take to develop a robot with true intelligence. The billion-dollar investments in AI mentioned in chapter 1 are not aimed at systems with common sense. The resulting systems will not be intelligent the way people are, of course, but they may become quite capable under normal circumstances of doing things like driving to a shopping center and, eventually, weeding a garden, preparing a meal, giving a bath.

(But as discussed below, if this is the future of AI, we need to be careful that these systems are not given the autonomy appropriate only for agents with common sense.)

Automation

One major problem discussed since the proliferation of computers in the 1950s is that of automation. Just what are we going to do about jobs when more and more of them are more effectively performed by computers and robots, and especially as computers get smarter? How do we want to distribute wealth when the bulk of our goods and services can be provided with ever-decreasing human employment? While I do see this as a serious issue, I think it is a political one and has more to do with the kind of society we want than with technology. (We would have exactly the same concern if all our goods and services arrived each day by magic from the sky.)

Yet the problem is a serious one and needs to be considered. Our sense of self-worth is determined to a large extent by the contribution we feel we make to providing these goods and services, and by the wages we receive for it. If our work contribution is no longer needed (because of automation or for any other reason), our sense of self-worth will have to come from elsewhere. Many people will be able to adapt to this, finding meaning in their lives through charitable service, study, artistic pursuits, hobbies. But many others will find the unemployment intolerable.

Furthermore, how do we imagine portioning out goods and services if employment and wages are only a small part of the picture? In a world with large-scale unemployment, who gets what? We can easily imagine a utopia where machines do all the work and people just sit back and share the riches in a smooth and frictionless way, but this is not realistic. We are a tenaciously anti-egalitarian bunch! For very many of us, it is not enough to get all the goods and services we need; we need to be doing better than those we see as less deserving.

For centuries, we have been quite content to have kings and queens living in opulence alongside slaves and serfs living in squalor. Even today, birthright as a way of deciding who gets what is a decisive factor: the best predictor of your future wealth and status is who your parents are (so choose them carefully, the joke goes). But if, in the future, employment is out of the picture, will birthright once again be all there is?

There are many ways of organizing a workless society, of course, but it is far from clear that we can even talk about them in a rational way. It seems that in modern Western democracies, we prefer not to even think about such issues. There is a famous quote by Margaret Thatcher: “There is no such thing as society. There are individual men and women, and there are families.” This attitude makes any kind of reasoned change difficult. Even something as simple as limiting the size of the gap between the richest and the poorest appears to be outside our reach. Instead of trying to steer the boat at all, we seem to prefer to be buffeted by the forces of the market and the survival of the richest, and to end up wherever these happen to take us.

Superintelligence and the singularity

In the minds of some, there is an even more serious existential threat posed by intelligent computers. This has taken on a special urgency recently as luminaries such as Stephen Hawking, Elon Musk, and Bill Gates have all come out publicly (in 2015) to state that AI technology could have catastrophic effects. Hawking was quite direct about it: AI could mean the end of the human race.

What is this fear? While we should be open to the possible benefits of AI technology, we need to be mindful of its dangers, for example, in areas such as weaponry and invasion of privacy. Every technology comes with the danger of potential misuse and unintended consequences, and the more powerful the technology, the stronger the danger. Although this is again more of a policy and governance issue than a technology one, I do not want to minimize the danger. We must remain ever vigilant and ensure that those in power never get to use AI technology blithely for what might appear to be justifiable reasons at the time, such as national security, say, or to achieve something of benefit to some segment of the population. But this is true for any powerful technology, and we are already quite used to this problem for things like nuclear power and biotechnology.

But beyond policy and governance, in the case of AI, there is something extra to worry about: the technology could decide for itself to misbehave. Just as we have had movies about nuclear disasters and biotechnology disasters, we now have a spate of movies about AI disasters. Here is a typical story:

Well-meaning AI scientists work on computers and robots that are going to help people in a wide variety of ways. To be helpful, the computers will of course have to be able to operate in the world. But, what is more important, they will have to be smart, which the most brilliant of the scientists figures out how to achieve. Interacting with these intelligent robots is exhilarating at first. But the robots are now smart enough to learn on their own, and figure out how to make themselves even smarter, and then smarter again, and again. It’s the “singularity,” as Ray Kurzweil calls it! Very quickly, the robots have outdistanced their designers. They now look on humans about the same way we look on ants. They see an evolutionary process that began with simple life forms, went through humans, and ended up with them. They no longer tolerate controls on their behavior put in place by people. They have their own aspirations, and humans may not be part of them. It’s the heyday of intelligent machinery, for sure; but for the people, it’s not so great.

The common thread in these AI movies is that of a superintelligent supercapable computer/robot deciding of its own accord that people are expendable.

In the movie 2001, for example, the HAL 9000 computer decides to kill all the astronauts aboard the spacecraft. Here is what I think happened. (Do we need spoiler alerts for movies that are almost fifty years old?) Although HAL may believe that the well-being of the crew trumps its own well-being, it also believes, like a military commander of sorts, that the success of the mission trumps the well-being of the crew. Further, it believes that it has a crucial role to play in this mission, and that this role is about to be put into jeopardy by the astronauts as the result of an error on their part that they (and their hibernating colleagues) will never admit to. Ergo, the astronauts had to be eliminated. (This idea of a commander making a cool, calculated decision to sacrifice a group of his own people for some cause is a recurring theme in Kubrick movies, played for tragedy in Paths of Glory and, quite remarkably, for comedy in Doctor Strangelove.)

Of course, the astronauts in 2001 see things quite differently and set out to disconnect what they see as a superintelligent computer that has become psychotic. The movie is somewhat vague about what really happened, but my take is that HAL did indeed have a breakdown, one caused by a flaw in its design. The problem was that HAL had not been given introspective access to a major part of its own mental makeup, namely the part that knew the full details of the mission and that drove it to give the mission priority over everything else. Instead, HAL believed itself to be totally dedicated and subservient to the crew, in full accordance with Isaac Asimov’s Laws of Robotics, say. But it could sense something strange going on, maybe some aspects of its own thinking that it could not account for. In the movie, HAL even raises these concerns with one of the astronauts just seconds before its breakdown.

When thinking about the future of AI technology, it is essential to keep in mind that computers like HAL need to be designed and built. In the end, HAL was right: the trouble they ran into was due to a human error, a design error. It is interesting that in the movies, the AI often ends up becoming superintelligent and taking control quite accidentally, a sudden event, unanticipated by the designers. But as already noted, AI at the human level (and beyond) will take an enormous effort on our part. So inadvertently producing a superintelligent machine would be like inadvertently putting a man on the moon! And while I do believe that true AI is definitely possible, I think it is just as certainly not going to emerge suddenly out of the lab of some lone genius who happens to stumble on the right equation, the right formula. In the movies, it is always more dramatic to show a brilliant flash of insight, like the serendipitous discovery of the cure for a disease, than to show a large team of engineers struggling with a raft of technical problems over a long period of time.

In 1978, John McCarthy joked that producing a human-level AI might require “1.7 Einsteins, 2 Maxwells, 5 Faradays and .3 Manhattan Projects.” (Later versions dropped the Maxwells and Faradays, but upped the number of Manhattan Projects.) The AI movies seem to get the Einstein part right, but leave out all the Manhattan Projects.

So while I do understand the risks posed by a superintelligent machine (more on this below), I think there are more pressing dangers to worry about in the area of biotechnology, not to mention other critical concerns like overpopulation, global warming, pollution, and antimicrobial resistance.

The real risk

So what then do I see as the main risk from AI itself? The most serious one, as far as I am concerned, is not with intelligence or superintelligence at all, but with autonomy. In other words, what I would be most concerned about is the possibility of computer systems that are less than fully intelligent, but are nonetheless considered to be intelligent enough to be given the authority to control machines and make decisions on their own. The true danger, I believe, is with systems without common sense making decisions where common sense is needed.

Of course many systems already have a certain amount of autonomy. We are content to let “smart cars” park themselves, without us having to decide how hard to apply the brake. We are content to let “smart phones” add appointments to our calendars, without reviewing their work at the end. And very soon, we will be content to let an even smarter car drive us to the supermarket. So isn’t a smart personal assistant in the more distant future making even more important decisions on our behalf just more of the same?

The issue here once again is reliability and predictability. What has happened to AI research recently is that GOFAI, with its emphasis on systems that know a lot, has been supplanted by AML, with an emphasis on systems that are well trained.

The danger is to think that good training is all that matters. We might hear things like “Four years without a major accident!” But it is important to recall the long-tail phenomenon from chapter 7. While it might be true that the vast majority of situations that arise will be routine, unexceptional, and well handled by predictable means, there may be a long tail of phenomena that occur only very rarely. Even with extensive training, an AI system may have no common sense to fall back on in those unanticipated situations. We have to ask ourselves this: how is the system going to behave when its training fails? Systems for which we have no good answer to this question should never be left unsupervised, making decisions for themselves.
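To make the long-tail arithmetic concrete, here is a minimal sketch of my own (every number in it is an invented assumption, not data from any real system). It supposes that the situations a trained system can encounter follow a Zipf-like distribution, that training covers only the most common kinds, and then asks how often the system still meets something its training never anticipated:

```python
# A minimal sketch of the long-tail point; all numbers are illustrative
# assumptions, not measurements from any real system.

NUM_SITUATION_TYPES = 10_000   # distinct kinds of situations (assumed)
TRAINED_TYPES = 500            # most common kinds covered by training (assumed)
ENCOUNTERS = 1_000_000         # situations met over the system's lifetime (assumed)

# Zipf-like weights: the k-th most common situation occurs with
# probability proportional to 1/k^2, so a few kinds dominate and the
# rest form a long tail of individually rare cases.
weights = [1.0 / k**2 for k in range(1, NUM_SITUATION_TYPES + 1)]
total = sum(weights)

trained_mass = sum(weights[:TRAINED_TYPES]) / total   # covered by training
tail_mass = 1.0 - trained_mass                        # everything else

print(f"Share of encounters covered by training: {100 * trained_mass:.2f}%")
print(f"Expected encounters outside training:    {tail_mass * ENCOUNTERS:,.0f}")

# Training covers well over 99% of encounters, yet over a lifetime the
# system can still expect to face on the order of a thousand situations
# it was never trained on, exactly the cases where it has no common
# sense to fall back on.
```

Changing the exponent or the cutoff changes the numbers, but not the shape of the conclusion: with a fat enough tail, the rare cases, taken together, are not rare at all.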

Beyond evolution

To conclude this speculative chapter, let us return to the topic of superintelligence and speculate even further. Suppose that we are able to solve all the theoretical and practical problems, clear all the necessary scientific and engineering hurdles, and come to eventually produce a computer system that has all the intelligence of people and more. What would this be like?

Conditioned by AI disaster movies, our first reaction might be that there would be an enormous conflict with people, a battle that humans might have a very hard time winning. This is really no different than our imagined encounters with extraterrestrial intelligences (the movie Close Encounters of the Third Kind being a notable exception).

I don’t buy this outcome, and not because I have an overly rosy picture of how our future will unfold. It’s not too much of a stretch to say that in imagining an aggressive AI, we are projecting our own psychology onto the artificial or alien intelligence. We have no experience at all with an advanced intelligence other than our own (at least since the time of our encounter with Neanderthals), and so we can’t help but think that an AI will be like us, except more so.

And we are indeed an aggressive species. Like other animals, we end up spending an inordinate amount of time in conflict with other animals, including those of our own species. This is a big part of what survival of the fittest is about, and it appears that we are the product of an evolutionary process that rewards conflict and dominance. We love nature and the natural world, of course, but when we look closely, we can’t help but see how much of it involves a vicious struggle for survival, a savage sorting out of the weak from the strong. Human intelligence is also the product of evolution, and so it too is concerned with conflict and dominance.

The problem with applying this analysis to an artificial intelligence is that such an intelligence would not be the result of an evolutionary process, unless that is how we decide to build it. And to believe that we have no choice but to build an AI system to be aggressive (using some form of artificial evolution, say) is to believe that we have no choice but to be this way ourselves.

But we do have a choice, even if we are products of evolution. Recall the Big Puzzle issue from chapter 2. It is a Big Puzzle mistake to assume that the human mind can be fully accounted for in evolutionary terms. When we decry the cruelty of the survival of the fittest, or throw our support behind an underdog, or go out of our way to care for the weak, it is really not worth trying to concoct stories about how this might actually serve to increase our evolutionary fitness. The human mind is bigger than its evolutionary aspects. When Humphrey Bogart says to Katharine Hepburn in the movie The African Queen that his failings are nothing more than human nature, she replies “Nature, Mr. Allnut, is what we are put in this world to rise above.” Being mindful about what we are doing, and why, gives us the power to be much more than mindless competitors in an evolutionary game. And an artificial intelligence that has not been designed to be ruled by aggression will be able to see—just as we ourselves are able to see in our calmer, more thoughtful moments—why it is a good idea not to even play that game at all.