The debate over nature versus nurture has raged for millennia. Are babies comparable to a blank page, a clean slate, or an empty bottle that experience must fill? As early as 400 BCE, in The Republic, Plato rejected the idea that our brains enter the world devoid of any knowledge. From birth, he claimed, every soul possesses two sophisticated mechanisms: the power of knowledge and the organ by which we acquire instruction.
As we have just seen, two thousand years later, a remarkably similar conclusion arose from advances in machine learning. Learning is vastly more effective when the machine comes equipped with two features: a vast space of hypotheses (a set of mental models with myriad settings to choose from) and sophisticated algorithms that adjust those settings according to the data received from the outside world. As one of my friends once said, in the debate on nature versus nurture, we have underestimated both! Learning requires two structures: an immense set of potential models and an efficient algorithm to adjust them to reality.
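To make these two ingredients concrete, here is a minimal sketch in Python (my own illustration, with made-up toy data, not a description of any real system): the space of hypotheses is the family of all lines y = w·x + b, one hypothesis per setting of w and b, and the adjustment algorithm is gradient descent, which nudges those settings toward whatever the data dictate.

```python
# A minimal sketch of the two ingredients: a space of hypotheses (all lines
# y = w*x + b, one hypothesis per setting of w and b) and an algorithm
# (gradient descent) that adjusts those settings to data from outside.

def predict(w, b, x):
    return w * x + b

# Toy data from the "outside world": points lying on the line y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(10)]

w, b = 0.0, 0.0   # initial settings, chosen before seeing any data
lr = 0.01         # learning rate: how far each error nudges the settings

for _ in range(2000):
    # Gradients of the mean squared error with respect to w and b.
    grad_w = sum(2 * (predict(w, b, x) - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (predict(w, b, x) - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 3), round(b, 3))  # converges toward 2 and 1
```

The point of the toy is the division of labor: the family of lines is fixed in advance, like an innate repertoire, and only the settings w and b are learned from experience.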
Artificial neural networks do this in their own way, by entrusting the representation of mental models to millions of adjustable connections. However, these systems, while capturing the rapid and unconscious recognition of images or speech, are not yet able to represent more abstract hypotheses, such as grammar rules or the logic of mathematical operations.
The human brain seems to function in a different way: our knowledge grows through the combination of symbols. According to this view, we come into the world with a combinatorial system capable of generating a vast number of potential thoughts. This language of thought, endowed with abstract assumptions and rules of grammar, is already in place prior to learning. It generates a vast realm of hypotheses to be put to the test. And to do so, according to the Bayesian brain theory, our brain must act like a scientist, collecting statistical data and using them to select the best-fitting generative model.
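To see what "acting like a scientist" amounts to, here is an equally minimal sketch (again my own illustration, not the theory's official machinery): three candidate generative models of a coin, each assigned an a priori probability, re-weighted by Bayes' rule as the observations come in.

```python
# A minimal sketch of Bayesian model selection: three candidate generative
# models of a coin, each with a prior probability, re-weighted by Bayes'
# rule as data arrive from the outside world.

# Hypotheses: the probability that the coin lands heads under each model.
hypotheses = {"biased-tails": 0.2, "fair": 0.5, "biased-heads": 0.8}

# A priori probabilities, in place before any data are seen.
posterior = {name: 1 / 3 for name in hypotheses}

observations = ["H", "H", "T", "H", "H", "H"]  # the collected statistics

for obs in observations:
    # Likelihood of this observation under each hypothesis.
    likelihood = {name: (p if obs == "H" else 1 - p)
                  for name, p in hypotheses.items()}
    # Bayes' rule: posterior is proportional to prior times likelihood.
    unnorm = {name: posterior[name] * likelihood[name] for name in hypotheses}
    total = sum(unnorm.values())
    posterior = {name: v / total for name, v in unnorm.items()}

# The best-fitting generative model is the one with the highest posterior.
print(max(posterior, key=posterior.get), posterior)
```

After these six tosses, the "biased-heads" model dominates. Note what did the work: the menu of models and their priors existed before any data arrived; experience merely redistributed the probabilities among them.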
This view of learning may seem counterintuitive. It suggests that each human baby's brain potentially contains all the languages of the world, all the objects, all the faces, and all the tools that it will ever encounter, in addition to all the words, all the facts, and all the events that it will ever remember. The combinatorics of the brain is such that all these objects of thought are potentially already there, each with its a priori probability, together with the ability to update those probabilities whenever experience shows that they need to be revised. Is this how a baby learns?