Preface

This is a book about the mind from the standpoint of artificial intelligence (AI). But even for someone interested in the workings of the human mind, this might seem a bit odd. What do we expect AI to tell us? AI is part of computer science after all, and so deals with computers, whereas the mind is something that people have. Sure, we may want to talk about “computer minds” someday, just as we sometimes talk about “animal minds.” But isn’t expecting AI to tell us about the human mind something of a category mismatch, like expecting astronomy to tell us about tooth decay?

The answer is that there is not really a mismatch because computer science is not really that much about computers. What computer science is mostly about is computation, a certain kind of process, such as sorting a list of numbers, compressing an audio file, or removing red-eye from a digital picture. The process is typically carried out by an electronic computer of course, but it might also be carried out by a person or by a mechanical device of some sort.

The hypothesis underlying AI—or at least one part of AI—is that ordinary thinking, the kind that people engage in every day, is also a computational process, and one that can be studied without too much regard for who or what is doing the thinking.

It is this hypothesis that is the subject matter of this book.

What makes the story controversial—and more interesting, perhaps—is that there is really not just one kind of AI these days, nor just one hypothesis under investigation. Parts of AI do indeed study thinking in a variety of forms, but other parts of AI are quite content to leave it out of the picture.

To get an idea of why, consider for a moment the act of balancing on one leg. How much do we expect thinking or planning or problem-solving to be involved in this? Does a person have to be knowledgeable to do it well? Do we suppose it would help to read up on the subject beforehand, like Balancing on One Leg for Dummies, say? An AI researcher might want to build a robot that is agile enough to be able to stand on one leg (for whatever reason) without feeling there is much to be learned from those parts of AI that deal with thinking.

In fact, there are many different kinds of AI research, and much of it is quite different from the original work on thinking (and planning and problem-solving) that began in the 1950s. Fundamental assumptions about the direction and goals of the field have shifted. This is especially evident in some of the recent work on machine learning. As we will see, from a pure technology point of view, this work has been incredibly successful, more so, perhaps, than any other part of AI. And while this machine learning work leans heavily on a certain kind of statistics, it is quite different in nature from the original work in AI.

One of the goals of this book is to go back and reconsider that original conception of AI, what is now sometimes called “good old-fashioned AI,” and explain why, even after sixty years, we still have a lot to learn from it—assuming, that is, that we are still interested in exploring the workings of the mind, and not just in building useful pieces of technology.

A book about AI can be many things. It can be a textbook for an AI course, or a survey of recent AI technology, or a history of AI, or even a review of how AI has been depicted in the movies. This book is none of these. It is a book about the ideas and assumptions behind AI, its intellectual underpinnings. It is about why AI looks at things the way it does and what it aspires to tell us about the mind and the intelligent behavior that a mind can produce.

It is somewhat disheartening to see how small this book has turned out to be. The entire text will end up taking about 150KB of storage. For comparison, just one second of the Vivaldi concert video on my laptop takes more than twice that. So in terms of sheer raw data, my laptop gives equal weight to this entire book and to a half-second of the video, barely enough time for the conductor to lift his baton.

This can be thought of as confirming the old adage that a picture—or a single frame of a video—is worth a thousand words. But I think there is another lesson here: compared to pictures, words are amazingly compact carriers of meaning. A few hundred words might be worth only a momentary flash of color in a video, but they can tell us how to prepare boeuf bourguignon. The human animal has evolved to make extremely good use of these ultra-compact carriers of meaning. We can whisper something in somebody’s ear and be confident that it can have an impact on their behavior days later.

How the mind is able to do this is precisely the question we now set out to explore. My hope is that the reader will enjoy my thoughts on the subject and get to feel some of the excitement that I still feel about these truly remarkable ideas.