I’m an optimist. Not by nature, but by U.S. government design.
After the Soviet Union humiliated the United States with the 1957 launch of Sputnik, the first artificial satellite, the government decided that science education should be a national priority. The Cold War was in full swing, and Senator John F. Kennedy made closing the “missile gap” a centerpiece of his presidential campaign. Ceding leadership in this critical emerging arena was unthinkable.
Young boys like me (but tragically, not many girls) were fed a steady diet of utopian imagery extolling technological innovation as the path to eternal peace and prosperity, not to mention a way to beat them clever Russkies. Dog-eared copies of Amazing Stories and Fantastic Adventures illustrated how spaceships and ray guns would help you save the world and get the girl.
When I moved to New York at the age of ten, the city seemed the Land of Oz to me, and the 1964 World’s Fair was the Emerald City. For less than the two dimes tucked into my Buster Brown penny loafers, I could catch the IRT at Grand Central Station to visit sparkling visions of the future like the Unisphere, the Monorail, and General Electric’s Progressland, where Disney’s animatronic robots would herald “a great big beautiful tomorrow” in cheerful harmony.
The world of science fiction seemed to grow up right alongside me. As I struggled with calculus and solid geometry, Star Trek offered solace and encouragement—surely Captain Kirk had aced his SATs. But 2001: A Space Odyssey took things to a new level with a mystical glimpse of the destiny of humankind. Mesmerized by the infinite red halo of the HAL 9000, I knew what I had to do.
Ten years later, after earning a B.A. in history and philosophy of science at the University of Chicago and a Ph.D. in computer science at the University of Pennsylvania, I accepted a research position in the Stanford Artificial Intelligence Lab.
I thought I had died and gone to heaven. Inhabited by disheveled geniuses and quirky wizards, the dilapidated lab sat atop an isolated rise in the gentle hills west of Stanford’s campus. Strange electronic music wafted through the halls at odd hours; robots occasionally moseyed aimlessly around the parking lot. Logicians debated with philosophers over whether machines could have minds. John McCarthy—founder of the lab, who coined the term artificial intelligence, or AI—haunted the halls stroking his pointed beard. A large clearing inside the semicircular structure seemed to await first contact with an advanced extraterrestrial civilization.
But even in paradise, the natives can grow restless. Silicon Valley made its siren call—a chance to change the world and get rich at the same time. We had been scrounging around for research funds to build our projects; now a new class of financiers—venture capitalists—came calling with their bulging bankrolls.
Several startup companies and thirty years later, I finally curbed my entrepreneurial enthusiasm and retired, only to find I wasn’t quite prepared to fade quietly into my dotage. A chance encounter opened a new door; I was invited to return to the Stanford AI Lab, but this time as a gray-haired patrician, knowledgeable in the ways of the big, bad commercial world.
To my surprise, the lab was completely different. The people were just as bright and enthusiastic, but the sense of common mission was gone. The field had fragmented into a number of subspecialties, making cross-disciplinary dialog more difficult. Most people were so focused on their next breakthrough that I felt they had lost sight of the broader picture. The original goal of the field—to discover the fundamental nature of intelligence and reproduce it in electronic form—had given way to elegant algorithms and clever demos.
In the hopes of rekindling the original spirit of the lab, I offered to teach a course on the history and philosophy of artificial intelligence. But as I dived into the subject matter, I became acutely aware of some serious issues looming on the horizon.
Having witnessed enough frames of the movie, I could see that a happy ending is anything but assured. Recent advances in the field are poised to make an astonishing impact on society, but whether we will make a graceful transition or emerge bruised and battered is uncertain.
The brilliant and dedicated people in the Stanford AI Lab—and their many colleagues in universities, research centers, and corporations around the world—are working on the twenty-first-century moral equivalent of the Manhattan Project. And, like the staff of that supersecret project to develop the atom bomb, only a few are cognizant of the breathtaking potential of their work to transform lives and livelihoods, right down to altering our concept of who we are and our proper place in the universe. It’s one thing to make a cute little robot that reads names and addresses, then tootles down the hall delivering intramural mail, but quite another when incrementally more capable versions of this technology operate our farms, manage our pension funds, hire and fire workers, select which news stories we read, scan all our communications for subversive ideas, and fight our wars.
Sure, but that’s science fiction. We’ve seen this kind of stuff in the movies for decades and nothing terrible has happened in real life. So what’s the big deal? Why all the fuss now?