Artificial Intelligence is a familiar enough idea today to have earned an acronym—AI—that provided the title of a Steven Spielberg film in 2001. As a phrase, however, it’s considerably older, having been coined in 1955 by the American computer scientist John McCarthy for a conference he and the cognitive scientist Marvin Minsky organized the following year at Dartmouth College.150
Interestingly, researchers during the first few decades of computing consistently overestimated the potential of artificial intelligence and the rate at which machines would catch up with human cognition. In a sense, it wasn’t until computer programmers tried to design machines capable of mimicking human intelligence that they discovered just how immensely complicated a phenomenon intelligence actually is.
Six decades later, AI has become less a radical claim about the future than a description of the way in which many everyday devices and applications are able to learn and adapt their behavior. What we mean by the word intelligence has, in parallel with other terms such as memory, expanded beyond the human—although few would yet claim that machine and human intelligence have much in common.
It’s a distinction embodied in the formal division of artificial intelligence into “strong” and “weak” AI, where “weak” denotes machine problem-solving within narrow, well-defined domains and “strong” denotes the as-yet-hypothetical matching of general human intelligence by machines.
Of course, the degree to which machines can appear human had been a subject of fascination since well before the era of computing; and it’s one that was given its definitive contemporary form even before AI entered the scene: the Turing Test.
In 1950, the pioneer of modern computing Alan Turing published a paper entitled “Computing Machinery and Intelligence.” In it he argued that actually determining whether or not a machine could “think” was almost impossible, given the difficulty of defining thinking in the first place. Therefore, he proposed a test to act as a proxy for this question: could a machine convince a person that it was, in fact, another human being?151
One of the most intriguing early developments in exploring Turing’s proposition arrived in 1966, thanks to an early computer “chatterbot” program called ELIZA (named after Eliza Doolittle in George Bernard Shaw’s play Pygmalion), which was designed to “talk” with users by a simple process of turning their words back into predetermined questions, in the style of a psychotherapist. The program’s creator, the MIT computer scientist Joseph Weizenbaum, was shocked to discover that this crude simulacrum of empathy and interest was enough to fool some users into thinking of ELIZA as human. This phenomenon came to be known as the “ELIZA effect,” a term used today to describe the tendency to ascribe human motivations to the behavior of computer programs.152
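To make that “turning back” concrete, here is a minimal modern sketch of the technique in Python rather than in Weizenbaum’s original implementation: a few hand-written patterns match fragments of the user’s input, swap the pronouns, and return them as canned questions. The rules, reflections, and function names below are invented purely for illustration and are far simpler than ELIZA’s actual script.

```python
import re

# Illustrative pattern/response pairs, invented for this sketch; Weizenbaum's
# DOCTOR script used a much richer set of decomposition and reassembly rules.
RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (mother|father|family)\b", re.IGNORECASE),
     "Tell me more about your {0}."),
]

# Pronoun swaps let the user's own words be "turned back" at them.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}


def reflect(fragment: str) -> str:
    """Swap first- and second-person words in a captured fragment."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())


def respond(statement: str) -> str:
    """Return the first matching canned question, or a neutral prompt."""
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(*(reflect(group) for group in match.groups()))
    return "Please, go on."


if __name__ == "__main__":
    print(respond("I need a holiday"))              # Why do you need a holiday?
    print(respond("I am worried about my mother"))  # How long have you been worried about your mother?
```

Even rules this crude produce exchanges that feel, for a moment, like attention being paid, which is precisely the effect that surprised Weizenbaum.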
Anthropomorphism is nothing new. Pleasingly, though, there’s also a term for the reverse of the ELIZA effect, born thanks to the founding in 1991 of the Loebner Prize for Artificial Intelligence—a now-annual enactment of the Turing Test under controlled conditions to try to find the world’s most convincing conversational computer program.
The Loebner Prize asks participants to spend five minutes conversing via a computer console with an unseen other, who might be either a human or a machine. While few programs, even today, can fool an experienced interlocutor for long, the organizers of the prize noticed that some real people were rated during the test as “almost certainly a machine.” The phenomenon has been dubbed “the confederate effect,” since it involves one of the human “confederates” failing to prove their humanity. It was observed during the very first Prize, in 1991, when one human participant’s detailed knowledge of Shakespeare was deemed too wide-ranging and precise to be anything other than mechanical.153