At its core, intelligence can be viewed as a process that converts unstructured information into useful and actionable knowledge.

Demis Hassabis, co-founder of the AI company DeepMind (2017)

What is learning? In many Romance languages, learning has the same root as apprehending: apprendre in French, aprender in Spanish and Portuguese. . . . Indeed, learning is grasping a fragment of reality, catching it, and bringing it inside our brains. In cognitive science, we say that learning consists of forming an internal model of the world. Through learning, the raw data that strikes our senses turns into refined ideas, abstract enough to be reused in a new context—smaller-scale models of reality.

In the following pages, we will review what artificial intelligence and cognitive science have taught us about how such internal models emerge, in both brains and machines. How does the representation of information change when we learn? How can we understand it at a level common to any organism—human, animal, or machine? By reviewing the various tricks that engineers have devised to allow machines to learn, we will progressively conjure up a sharper picture of the amazing computations that infants must perform as they learn to see, speak, and write. In fact, as we shall see, the infant brain keeps the upper hand: despite their successes, current learning algorithms capture only a fraction of the abilities of the human brain. By pinpointing exactly where the machine-learning metaphor breaks down, and where even an infant's brain still surpasses the most powerful computer, we will delineate what "learning" really means.