In the mid-60s, I was an undergrad at Carnegie Tech—now CMU—which was and is a leader in artificial intelligence. I was curious about computers, so I signed up to learn how to program as soon as I could. In those days you had to type out punch cards and submit them, then wait a day to see how your program did. My programs kept coming back unsuccessful, bearing the remark “semicolon missing in line 25.” I was really annoyed. If it knew a semicolon was missing, why didn’t the computer just put it in? What a dumb machine!
That is how I got interested in AI. I just wanted smarter machines.
The AI gurus at CMU were Allen Newell and Herb Simon, who eventually became my friends, especially Newell. Their views were in the air. Finding out how minds work and getting computers to simulate them was the zeitgeist at CMU.
My fraternity brother was working with them on a program that could play hearts, where the key issue was passing cards at the beginning of the game. Three of us would play hearts every night, and then we would talk about how we chose what cards to pass.
One of the things that interested me was how people understand each other. How did language work? I started to think about that, and whenever I ran into other people’s work on the subject, I was appalled—it seemed to me to be totally off base. The linguists were obsessed with syntax, and the computer people were trying to parse sentences into grammar trees. Why? Don’t people worry about what a sentence means and what someone is trying to say? Why the obsession with syntax?
Getting computers to play chess was a big issue at CMU. Some people were studying how grand masters play chess, but others just wanted to win. Why? So they could brag about it. But there was nothing to be learned about the mind by counting faster than a human.
CMU researchers were also studying problem solving. “How do people do it?” was the key question for Newell, Simon, and me—whether the task was chess or problem solving.
I first met Steve when he came to do a postdoc with me at Yale. Most people who do postdocs sit around and talk for a year, but Steve came to actually work. He was so impressive that when I started my new AI company, Cognitive Systems, he was one of the first people I hired. He wanted to build things and he wanted to make money. He did not want to make outrageous claims about AI. In fact, Cognitive Systems was intended to be my answer to the AI hype of the time.
Then, as now, falsehoods about AI dominated the media. Back then, the hype was about “expert systems,” which sounds nice, but their so-called expertise was a bunch of if-then statements. In people, expertise is embodied in experience and learned by having new experiences. The computer was not capable of having experiences, much less processing them.
But the media had caught on to this next big thing. They love writing AI stories to scare the public.
AI researchers were also big contributors to the hype. When I worked at the Stanford AI lab in the late ’60s, there was a sign in the parking lot saying, “Caution robot vehicle.” The sign was nonsense then, and it would be nonsense now. Of course there was a vehicle of sorts run by a computer then, just as there are self-driving car prototypes today. But I wouldn’t ride in either.
And then there are the businesses working in AI.
Recently, IBM ran an ad for its Watson program, claiming that it can read 800 million pages per second and is able to identify key themes in Bob Dylan’s work, like “time passes” and “love fades.”
IBM said Watson’s abilities “outthink” human brains in areas where finding insights and connections can be difficult due to the abundance of data (e.g., cancer, risk, doubt, and competitors).
I am a child of the ’60s, and I remember Dylan’s songs well enough. Ask anyone from that era about Bob Dylan, and no one will tell you his main theme was “love fades.” He was a protest singer, and a singer about the hard knocks of life. He was part of the antiwar movement. “Love fades”? That would be a dumb computer counting words. How would Watson understand that many of Dylan’s songs were part of the antiwar movement? Does he say “antiwar” a lot? He probably never said it in a song.
For example, “The Times They Are A-Changin’” contains iconic Dylan statements that manage to transcend the times. However, he doesn’t mention Vietnam or civil rights in the lyrics to that song. So how would Watson know that the song had anything to do with those issues? It is possible to talk about something and have the words themselves not be very telling. Background knowledge matters a lot. I asked a twenty-something about Bob Dylan a few days ago, and he had never heard of him. He didn’t know much about the ’60s. Neither does Watson.
It is against this backdrop that Steve Shwartz has written Evil Robots, Killer Computers, and Other Myths. Since Steve and I went our separate ways, he has managed to continue to make money while doing worthwhile things. He has a sane perspective on what computers can and cannot do. In this book, he carefully goes over all the hype in what passes for AI these days and explains how it works—and why it doesn’t really work all that well.