In the 1950s, the nascent field of artificial intelligence was split into rival factions. Two groups competed for academic mindshare: the top dogs were the “symbolists,” led by authorities such as Marvin Minsky and John McCarthy; the runners-up were the “connectionists,” led by the charismatic Frank Rosenblatt.
The two tribes had very different approaches. The symbolists believed in programming an intelligent machine from the ground up. Piece by piece, they planned to build computers that would eventually manipulate concepts faster and better than humans.
This idea might sound overly optimistic today, but it made perfect sense in the 1950s, when people were inventing the first high-level programming languages. Those languages seemed much closer to human thinking than plain old assembly language. Who knew how far that process could go? (In fact, John McCarthy invented one of the first high-level programming languages, LISP, in his quest to code intelligence.)
The opposite faction, the connectionists, chased another dream. Their idea could be summed up as: build a brain, and intelligence will come.
To simplify things, the brain is made of neurons connected through fibers. Each neuron has multiple input fibers, and one output fiber. If the inputs are active in a certain pattern (maybe because they get a signal from the sensory organs), then the output also activates. The connectionist leader Frank Rosenblatt built a machine inspired by that mechanism. By analogy with neurons, he named the machine “perceptron,” and its final processing step “activation function.”
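That mechanism can be sketched in a few lines of Python. This is a minimal illustration of the idea, not Rosenblatt's actual design: the function names, weights, and step-style activation threshold here are assumptions for the sake of the example.

```python
# A minimal sketch of a perceptron-style neuron (illustrative only;
# the names, weights, and threshold below are assumptions, not from the text).

def perceptron_output(inputs, weights, bias):
    """Sum the weighted input "fibers", then apply a step activation function."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if weighted_sum > 0 else 0  # the final "activation function" step

# Example: three input fibers firing in a pattern
print(perceptron_output([1, 0, 1], [0.5, -0.2, 0.8], bias=-1.0))
```

The key point is the last line: the neuron's output is all-or-nothing, firing only when the weighted inputs cross a threshold.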
The first perceptron was a far cry from our tiny Python program. The “Mark 1 perceptron” was a room-sized piece of hardware that looked a bit like a server rack covered by an impenetrable tangle of wires. It had a camera connected to 400 photocells—essentially, very lo-res pixels. The weights were implemented with potentiometers wired to the photocells. During the learning phase, the potentiometers were physically rotated by electric motors.
To overcome the perceptron’s limitations, the connectionists also studied multilayer perceptrons, which seemed able to tackle non-linearly separable data. Meanwhile, the symbolists were busy writing programs that solved algebra problems and stacked construction blocks with a robot arm.
To be fair, neither faction was making much progress toward intelligent machines. On the other hand, both factions were inclined to ballyhoo and extravagant promises. At one point, Rosenblatt declared that the perceptron was the first step toward machines that would not only be damn smart, but even self-conscious. The popular press bought into it hook, line, and sinker, making the symbolists jealous.
The feud went on for years, with the symbolists reaping the lion’s share of research funds, and the connectionists playing the part of the popular underdogs. Then, at some point, things really hit the fan.
From the 50s to the mid-60s, connectionists had been nibbling at AI research funds. The powerful symbolist leader Marvin Minsky thought that was a waste of money, and decided to set things straight once and for all.
Minsky’s plan was simple: he would study the connectionists’ ideas, with an eye toward showing their limitations. Together with the like-minded Seymour Papert, he published an entire book on the topic, called Perceptrons. The book was essentially Minsky’s way to damn perceptrons with faint praise. It focused a lot on what perceptrons could not learn—such as non-linearly separable data.
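The classic example of non-linearly separable data is the XOR function. The sketch below (my own illustration, not from Minsky and Papert's book) brute-forces a coarse grid of weights and biases and never finds a single linear threshold that classifies all four XOR points correctly. A grid search isn't a proof, but it shows the failure concretely.

```python
# Illustrative sketch: no single linear threshold separates XOR.
# The grid resolution and ranges below are arbitrary choices.
import itertools

def predict(w1, w2, b, x1, x2):
    # A single perceptron: linear combination followed by a step threshold.
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

# The four XOR points and their labels.
xor_points = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

grid = [i / 4 for i in range(-8, 9)]  # candidate values from -2.0 to 2.0

solvable = any(
    all(predict(w1, w2, b, x1, x2) == label for (x1, x2), label in xor_points)
    for w1, w2, b in itertools.product(grid, repeat=3)
)
print(solvable)  # every candidate fails on at least one XOR point
```

No matter how the line is tilted or shifted, at least one of the four points always ends up on the wrong side.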
To their credit, Minsky and Papert admitted that multilayer perceptrons could overcome the limitations of regular perceptrons. However, they hastened to add their gut feeling: multilayer perceptrons were probably impossible to train. In their opinion, the whole idea of building intelligence with perceptrons was little more than a pipe dream.
Bolstered by Minsky’s reputation, Perceptrons had more impact than the authors themselves intended. Where Minsky and Papert had been nuanced, the scientific community went for the “too long; didn’t read” version: “connectionism is a dead end.” Within a few months, funding for connectionist research dried up.
The impact of their book didn’t stop there. The public had been expecting a perceptron to stand up and ask for a cup of tea any day now. Now one of the major AI eggheads was scoffing at the whole thing. Popular opinion switched, and all of connectionism was filed under “unscientific bollocks.”
In the early 70s, Rosenblatt died in a sailboat accident. That seemed like the last nail in the perceptron’s coffin.
After connectionism was disgraced, it almost disappeared from academic research. Only a handful of researchers around the world, like medieval monks, kept the study of perceptrons alive.
Minsky’s parting questions loomed over them like a prophecy of doom. Were multilayer perceptrons really impossible to train? Was the entire idea a dead end? Fifteen years passed before they could answer those questions.
What they found is the subject of the next part of this book.