2.1 The Origin of Intelligence
Mal looked at the water, then at each of the people in turn, and they waited.
“I have a picture.”
He freed a hand and put it flat on his head as if confining the images that flickered there.
“Mal is not tall but clinging to his mother's back. There is more water not only here but along the trail where we came. A man is wise. He makes men take a tree that has fallen and –”
His eyes deep in their hollows turned to the people imploring them to share a picture with him. He coughed again, softly. The old woman carefully lifted her burden.
At last Ha spoke.
“I do not see this picture.”
The old man sighed and took his hand away from his head.
“Find a tree that has fallen.”
The Inheritors, William Golding [1]
In this short excerpt from The Inheritors by the Nobel Prize-winning author William Golding, we encounter a Neanderthal family confronted with a serious problem: the fallen tree that served as their bridge across a river has been swept away. Mal, an old man, recollects a similar situation from the time when he was an infant, clinging to his mother’s back. He tries to convey this memory to his family, but is unsuccessful. He proposes that they find another fallen tree and drag it to the crossing. This is a major innovation: to the other family members, the original fallen tree, like the sun, the moon and the river, had always been there.
Golding provides us here with a glimpse of the possible origins of rational thought. Is this how it began, with the sharing of experiences and memories? Did intelligence evolve from this exchange of mental images?
Archaeological research and findings from the Human Genome Project suggest that modern cognition and behaviour had developed in sub-Saharan Africa by at least 50,000 BP.1 The last glacial period, or Ice Age, extended from 110,000 to 12,000 years ago. During this period, the environment changed markedly, with the sea level falling and rising by many metres. The changing environmental conditions during this period may have provided an evolutionary advantage to those primates with a higher intelligence, and influenced the evolution of modern humans [2]. The spread of Homo sapiens from Africa to other continents appears to have occurred in several waves during this period, and to have been facilitated by low sea levels arising from large masses of water being tied up in glaciers. Genetic evidence suggests that the first exodus occurred about 70,000 years ago, and followed the coastline of Asia to Australia, which was reached at least 50,000 years ago. Subsequent migrations led to the populations of Central Asia, Europe and the Americas [3, 4].
Today intelligence appears to be dependent on a multiplicity of genes, as well as the environment, so it is likely that it is a characteristic that evolved over time, rather than from a single spontaneous genetic mutation [5].
Before discussing intelligence, we need to be clear what exactly we are talking about. Howard Gardner, in his 1983 book Frames of Mind: The Theory of Multiple Intelligences, proposed a theory of multiple distinct intelligences, a list that has since grown to nine [6]. Some of these involve musical ability and hand–eye coordination, which some might regard as aptitudes rather than intelligence. In our book we are interested primarily in Gardner’s category of Logical-Mathematical Intelligence, which deals with logic, abstractions, reasoning, numbers and critical thinking.
As intelligence evolved, humans (or Homo sapiens, as Homo neanderthalensis had by now either lost their identity by interbreeding with their newly arrived cousins, or become extinct) turned their skills to survival. They learned how better to protect themselves and their families from the elements and wild beasts, and how to gather enough food on an almost regular basis. A higher level of intelligence improved their chances of survival.
If intelligence offered humanity an increased prospect of survival in a hostile world, as a by-product it provided them with an awareness of the world around them. They could not fail to notice and exploit the cycles of nature. Day changed into night, then back to day again. The seasons succeeded one another inexorably through the course of a year. The exception appeared to be humans themselves, who aged or died through misadventure or disease and were lost forever. How could this be? Perhaps death was not the end, but just a doorway into a different cycle. Perhaps we do not entirely die, and some part of our essence remains.

Cave paintings of aurochs, horses and deer from Lascaux caves (Montignac, Dordogne, France). Image, courtesy of Prof saxx (Creative Commons: https://commons.wikimedia.org/wiki/File:Lascaux_painting.jpg (accessed 2020/8/11))
The Chauvet-Pont-d'Arc Cave in southern France contains rock paintings dating to about 30,000 years ago of a variety of animals. Australian Aboriginal rock art has been found in Western Australia's Pilbara region and the Olary district of South Australia, and the oldest is about 35,000 years old. It depicts extinct megafauna, e.g. Genyornis newtoni, a flightless bird over two metres tall, and Thylacoleo, a formidable marsupial carnivore.
Since humans were alive, they interpreted everything around them as also being animated. Unable to understand natural bodies and their behaviour, they attributed to them a divine nature. Their description of the world was mythological, i.e. intertwined with symbols and legends. To answer the question of the world’s origin, some primordial form of cosmogony was assumed: the shapeless chaos of Greek mythology, the waters of the Bible and of Mesopotamian and Egyptian mythologies, the qi of the Chinese tradition. Gods were also conceived, like any other form of life, as the result of some inexplicable and sudden event.
In the dreamtime of the Australian aboriginals, the earth began as a bare plain, with no life or death. Then the eternal ancestors rose and found half-formed shapeless human beings with no limbs or features, created from animals and plants. The ancestors carved out heads, arms and feet, and finished the creation process. They then went back to sleep, leaving traces of their presence in what are today sacred sites. To the aboriginal peoples, the dreamtime is not in the past, it is eternal.
Both the Chinese and the Hindu traditions conceive the evolution of the universe as a cyclic process. The Hindu cosmology even quantifies the duration of each cycle in terms of billions of years. At the end of each cycle the universe is destroyed by fire, and then after an interlude, it is created again.
It is instructive to realize that human beings have long been awestruck by the mysteries of nature. One might even argue that it is this yearning for knowledge that defines the species, Homo sapiens. However, their intuitive approach was very remote from the formal mindset of modern science. In the next Section, we will consider the formal logic, or systematised way of thinking, that underpins modern science.
2.2 Logic, and Its Role in Science
Science, and the mathematics that underpins it, is derived from formal logic, which is a systematised way of valid reasoning. Formal logics were developed in China, India and Greece. The roots of logic probably began with the development of language. In modern humans both of these activities appear to occur largely in the left hemisphere of the brain. There is no consensus on when spoken language first appeared, with estimates varying widely. Because there was apparently no way to tackle this question in a meaningful, i.e. scientific, manner, the Linguistic Society of Paris in 1866 banned any existing or future debates on this subject. The prohibition remained influential for over a century [7].
It is far beyond the scope of this book to delve into the history and nature of formal logic. After all, it is a subject that has occupied philosophers for centuries. However, a few points are worth considering to set the background for the material we wish to cover later. In the West, Aristotle, and after him Euclid, Archimedes, Apollonius and the many others whom his work inspired, pursued a bottom-up approach to rational thought. They began from simple concepts that could be accepted readily (axioms), and deduced more complicated consequences by following strict rules of reasoning. The geometry of Euclid that we all learned in high school is probably the best-known example of this approach. A typical Aristotelian syllogism takes the form:
“If no Q is R and some Ps are Qs, then some Ps are not R.”
Here Q is shorthand for “desire”, R for “voluntary”, and P for “belief”. The same logic is applicable to countless other examples. For instance, if Q = “flyer”, R = “pig”, and P = “bird”, the above logical statement leads to: “If no flyers are pigs, and some birds are flyers, then some birds are not pigs.” It might be tempting to cry out: “but surely no birds are pigs.”2 While undoubtedly true, this stronger conclusion cannot be deduced from our two premises. Such is the uncompromising nature of formal logic.
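The uncompromising nature of such a deduction can even be checked mechanically. The sketch below (our own illustration in ordinary Python, using sets as a stand-in for Aristotle's categories) tests the syllogism against thousands of randomly generated finite "worlds": whenever both premises hold, the conclusion holds too.

```python
import random

def syllogism_holds(P, Q, R):
    """Test: if no Q is R and some Ps are Qs, then some Ps are not R."""
    premise1 = Q.isdisjoint(R)   # "no Q is R"
    premise2 = bool(P & Q)       # "some Ps are Qs"
    conclusion = bool(P - R)     # "some Ps are not R"
    # The inference is valid if the conclusion is true whenever both
    # premises are true; cases with a false premise say nothing.
    return conclusion if (premise1 and premise2) else True

rng = random.Random(0)
universe = range(20)
for _ in range(10_000):
    P, Q, R = (set(rng.sample(universe, rng.randint(0, 10))) for _ in range(3))
    assert syllogism_holds(P, Q, R)   # never fails: the syllogism is valid
```

Note that the tempting stronger conclusion "no Ps are R" really does not follow: with P = {1, 2}, Q = {1} and R = {2}, both premises hold, yet one member of P lies in R.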
So confident was Aristotle in the reliability of this method that, when he applied it to the physical world, he seldom bothered to check whether his inferences were in accord with actual reality. For instance, he asserted that a moving body in a vacuum came to rest immediately after the force that had induced the motion was removed. In air, he believed the motion persisted because the surrounding air sustained the force for a while, until it ultimately dissipated. This “fact” was accepted by scholars until finally refuted by Galileo, almost two millennia later. It is the combination of logic and observation, beginning with the work of Galileo, that is the characteristic of modern science.
This success of the combination of logic and observation raises some deep questions:
- (1) Why does our mathematical logical approach explain the observed physical world so well?
- (2) Does logic, which is a product of our brains, have any existence outside of us?
- (3) Assuming that other rational beings exist in the universe, would they necessarily develop the same system of logic that we have, or could they possibly find some other system equally capable of explaining their world?
Mathematics is often described as a priori, meaning that it exists independently of the outside world, in contradistinction to a posteriori knowledge, which is obtained by empirical observation. This implies that mathematics would exist even if the world around us vanished. The brains of humans, and presumably other intelligent beings, harness this universal resource in the development of science. The term Platonism is used to describe this philosophical tenet. The eccentric 20th Century mathematician Paul Erdős often referred to “The Book”, in which God keeps the most elegant proof of each mathematical theorem. He once said in a lecture: “You don’t have to believe in God, but you should believe in The Book” [8].
Other philosophers deny that mathematics is a priori at all, claiming that it arose in the search for the best description of experience, and in that sense is no different from the other sciences. This viewpoint is known as Empiricism, and has been propounded by 20th Century philosophers, Willard Van Orman Quine and Hilary Putnam. A criticism of an empirical view of mathematics is that if mathematics is just as empirical as the other sciences, then its results are just as fallible as theirs.
The empiricist explanation opens the way for the evolution of logic and mathematics in the brains of early humans as a survival aid in the process of Darwinian natural selection, and gives insight into why mathematical logic works so well at describing the physical world. Of course, once developed, logic can be applied to any other abstract field not connected with survival. This is nothing new: our eyes did not evolve to read computer screens, but serve that purpose just as well.
We will re-examine these two alternative views of the nature of logic (and mathematics) in Part 3, after a review in Part 2 of what we have learned from the last century of progress in physics.
2.3 Pattern Recognition
Leaving aside these questions, which will no doubt occupy philosophers for another few centuries, it is worth considering whether mathematics and logic are the only approaches to a rational understanding of the universe. Formal logic plays little role in the thought processes of many people, who rely on a more intuitive approach to their decision making. It has become customary to designate individuals as either “left-brained” or “right-brained”, depending on whether they are analytical or intuitive thinkers. The differentiation is based on the fact that logical reasoning, along with language, seems to take place in the left cerebral hemisphere, while more holistic activities—art, music, etc.—take place in the right hemisphere.
This was always a doubtful distinction, as many people combine artistic and scientific aptitudes, and intuition—the sudden blinding flash of inspiration coming apparently from nowhere—is a valuable component of the scientific creative process (see Appendix 2.1). The story of Archimedes jumping up and racing naked down the street shouting “Eureka”, after the idea for his famous Principle of Buoyancy occurred to him in his bathtub, is part of the folklore of science. Most research scientists have had their own Eureka moments, sometimes in a dream. For example, the German organic chemist Friedrich August Kekulé (1829–1896) recounted that he had discovered the ring (or ouroboros3) shape of the benzene molecule during a daydream.
Returning to our point above, we query whether the bottom-up approach of logic is the only worthwhile path to the understanding of the workings of nature. In everyday life we have no difficulty in recognising a photograph of a friend. However, an ordinary computer, which can perform arithmetical calculations at a speed millions of times faster than a human, has difficulty carrying out this simple operation, which even an infant can readily achieve.
The reason lies in the complexity of our brain, which comprises billions of cells interconnected in a vast “neural network”. Many processing operations take place in parallel, unlike the sequential processing of a normal computer. To reduce the facial recognition task into a series of steps for a computer to carry out sequentially is fraught with difficulties.
Most of the brain’s parallel processing appears to take place subliminally, without any direction from the consciousness of its owner. From time to time this “subconscious” network throws the results of its cogitations up into the conscious component of our mind, and so we have Archimedes jumping out of his bath and shouting “Eureka”, or a crossword enthusiast searching for yesterday’s newspaper to fill in the answer to a clue that she had spent hours on, before giving up in frustration.
Clearly logic is an important component of the brain’s activities, but it is certainly not the only component, nor probably the most important for our daily survival. Pattern recognition, which enables us to separate objects into particular classes, even if we have never seen them before, is a vital part of human learning. We all know a tree when we see one, and do not attempt to teach it to heel, in the belief that it is a dog. The extraction of patterns has played a vital role in our survival. It is a facility that is trained into the minds of infants by their earliest childhood experiences. Pattern Recognition is currently being incorporated into computers in research into “machine learning”, with the aim of further developing Artificial Intelligence.
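As a toy illustration of what "learning a pattern" can mean in machine learning, the sketch below (entirely our own construction, with invented measurements, not the method of any particular library) classifies a new object by comparing it with the average of previously seen examples of each class:

```python
def centroid(points):
    """Average of a list of feature vectors (the 'learned' pattern)."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(example, classes):
    """Assign `example` to the class whose centroid is nearest."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(classes, key=lambda name: dist2(example, classes[name]))

# Two "patterns" learned from examples: (height in m, trunk/body width in m)
classes = {
    "tree": centroid([(5.0, 0.4), (8.0, 0.6), (12.0, 0.9)]),
    "dog":  centroid([(0.5, 0.10), (0.7, 0.15), (0.4, 0.08)]),
}
print(classify((6.0, 0.5), classes))   # → tree
```

A new object never seen before is still assigned to the class it most resembles, which is exactly what a child does when recognising an unfamiliar tree as a tree and not as a dog.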
2.4 Complexity
The bottom-up logical approach has been the traditional modus operandi of physics. For instance, the laws of interaction of particles were proposed by Newton and others, and the result was the science of classical mechanics. The orbits of planets, and the paths of rockets, have been deduced from these laws by the use of mathematics. Different laws were formulated for the interaction of high-velocity bodies by Einstein, leading to relativistic mechanics, and by Heisenberg, Schrödinger and others for sub-atomic particles, leading to Quantum Mechanics. Deductions from these laws have led to the prediction of phenomena that have been observed experimentally.
Difficulties arise when attempts are made to apply these physical laws to scenarios with large numbers of interacting particles. It is not because anyone believes the laws do not work. Rather, it is that the mathematics of the problems becomes intractable. As an example, Newton’s Theory of Gravity yields an exact analytical solution only for the case of two interacting bodies. One might imagine the situation of the earth revolving about the sun. The orbit of the earth can only be predicted analytically if we disregard the presence of the earth’s moon, and of the other planets and their moons.
However, we know that during the Apollo missions, NASA predicted the paths of their spacecraft very precisely. How was this possible? Numerical methods have been developed which involve computing the effect on the rocket’s trajectory of one body (the sun or the earth), and then refining the estimates obtained by repeating the calculations, including more and more “perturbations” from hitherto neglected gravitational sources (i.e. other planets and moons). Such a procedure is time-consuming, and only possible because of the development of fast modern computers.
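The flavour of such a calculation can be conveyed by a drastically simplified sketch, not of course NASA's actual method: two dimensions, units of astronomical units and years (so that GM_sun = 4π²), and a Jupiter-mass perturber held fixed in space for simplicity. The orbit is stepped forward numerically, first with the sun alone and then with the perturbation included.

```python
import math

GM_SUN = 4 * math.pi ** 2   # AU^3/yr^2, so a 1-AU circular orbit has period 1 yr

def accel(x, y, sources):
    """Gravitational acceleration at (x, y) from (gm, sx, sy) sources."""
    ax = ay = 0.0
    for gm, sx, sy in sources:
        dx, dy = sx - x, sy - y
        r3 = (dx * dx + dy * dy) ** 1.5
        ax += gm * dx / r3
        ay += gm * dy / r3
    return ax, ay

def orbit(sources, steps=10_000, years=1.0):
    """Leapfrog integration of a body starting on a circular 1-AU orbit."""
    dt = years / steps
    x, y = 1.0, 0.0
    vx, vy = 0.0, 2 * math.pi        # circular-orbit speed at 1 AU
    ax, ay = accel(x, y, sources)
    for _ in range(steps):
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
        x += dt * vx;        y += dt * vy
        ax, ay = accel(x, y, sources)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
    return x, y

sun_only = [(GM_SUN, 0.0, 0.0)]
with_jupiter = sun_only + [(GM_SUN / 1047, 5.2, 0.0)]  # Jupiter ~ 1/1047 solar masses

x1, y1 = orbit(sun_only)       # after one year: back near (1, 0)
x2, y2 = orbit(with_jupiter)   # slightly displaced by Jupiter's perturbation
```

Adding each further gravitational source is just another entry in the list; the cost is more arithmetic per step, which is why such calculations had to await fast computers.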
Impressive as the space program undoubtedly was, the score or so of interacting bodies involved in these calculations is negligible compared with the approximately 10²² molecules4 in a jar of air. Numerical methods cannot provide a way to handle the interactions of this number of molecules, and physicists have been forced to resort to a statistical approach.
In “Statistical Mechanics”, the behaviour of molecules is only considered en masse. The bulk properties of matter are studied, and concepts such as temperature and pressure introduced, which arise from the average behaviour of large numbers of molecules. Temperature is related to the average energy of motion of the molecules, and pressure to their impact on the walls of the vessel containing them. New laws of physics are formulated which connect these bulk properties. For instance, increasing the pressure of a gas confined in a flask results in a proportional increase in the temperature of the gas.5 This relationship was first discovered by Joseph Louis Gay-Lussac in 1809.
There is a difference between these “statistical” laws and the more fundamental laws describing the interaction of particles. The former may in principle be derivable from the latter, and as such may not be considered to be basic laws of physics at all. Although this may be true in some cases, in most scenarios such derivations are not possible because of the complexity of the interactions and the huge numbers of particles involved.
The science of Thermodynamics was developed in the 19th Century, motivated by a desire to increase the power and efficiency of steam engines. Its laws do not relate to interactions between individual particles, but rather involve higher-level concepts, such as heat, temperature and entropy. The term “entropy” was coined as a measure of the disorder of a system. The universe is analogous to a child's playroom, where the toys start out in the morning neatly arranged on shelves and in boxes, but by the end of the day have become strewn randomly over every horizontal surface. Left to itself, nature tends to the state of maximum disorder. This tendency is expressed in what is known as “the Second Law of Thermodynamics”, i.e. “entropy tends to a maximum.” (A popular skit on this topic is discussed in Appendix 2.2.)
The Second Law is often stated in the alternative form: “heat cannot spontaneously flow from a colder location to a hotter location.” If we consider two flasks of gas, one at a higher temperature than the other, and connect them together with a tube, heat is gradually transferred from the hot flask to the cold one, stopping when the gas in both flasks is at the same temperature. We never have a situation where heat flows in the other direction, thereby increasing the temperature differential between the gases in the two flasks.
If we consider this situation from a microscopic point of view, molecules in the hot flask are travelling on the average faster than those in the cold flask. The interconnecting tube enables the molecules to mix and collide with each other, with the result that on average the molecules of hot gas lose energy and those of cold gas gain energy, until the temperature difference between the two flasks vanishes. The Second Law states this principle formally, i.e. that we never expect the temperature difference between the flasks to increase. Such an action would be equivalent to replacing the disorder of completely mixed up gases with a situation where there are more hot molecules in one flask than in the other. The latter situation is less disordered than the first, a violation of the first statement of the Second Law.
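A crude simulation conveys this microscopic picture. In the toy model below (our own construction: each "collision" simply shares the energy of one randomly chosen molecule from each flask equally between the two, which conserves the total energy), the average energies of the two flasks converge:

```python
import random

rng = random.Random(42)
hot  = [rng.uniform(2.0, 3.0) for _ in range(200)]  # molecular energies, arbitrary units
cold = [rng.uniform(0.0, 1.0) for _ in range(200)]

def avg(xs):
    return sum(xs) / len(xs)

gap_before = avg(hot) - avg(cold)   # roughly 2 energy units
for _ in range(20_000):
    i, j = rng.randrange(len(hot)), rng.randrange(len(cold))
    hot[i] = cold[j] = (hot[i] + cold[j]) / 2   # share energy equally; total conserved
gap_after = avg(hot) - avg(cold)
# the "temperature difference" (gap in average energy) shrinks towards zero
```

Nothing in the collision rule prefers one direction of energy flow; the equalisation emerges purely from the statistics of mixing.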
Imagine now that we have a minimal amount of gas (very few molecules) distributed between the two interconnected flasks so that the temperatures in the two flasks are the same. It is quite possible in this case that random collisions might produce a situation whereby, for a while, more hot gas molecules are in one flask than the other, and we would have a temporary violation of the Second Law.
We can draw an analogy with the tossing of a coin. On average we expect as many heads as tails when we toss an unbiased coin. If we toss the coin a million times, our expectation is that the number of heads will be within about 0.1% of the number of tails. However, if we only have three throws, the likelihood of all three producing the same result is reasonably high (1 in 4). The Second Law is similar, in that it is a statistical one; i.e., it applies very accurately when we have large numbers of molecules taking part in the collision processes. In a situation where there are approximately 10²² molecules in the two flasks, the law is essentially exact.
Essentially exact, but not quite. We shall discuss the importance of this difference in the next Chapter, where we seek further insight into the nature of physical truth.
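The coin-tossing argument is easy to verify numerically. The sketch below (plain standard-library Python) compares the relative head/tail imbalance of a handful of throws with that of a million:

```python
import random

rng = random.Random(1)

def relative_imbalance(tosses):
    """|heads - tails| as a fraction of the total number of tosses."""
    heads = sum(rng.random() < 0.5 for _ in range(tosses))
    tails = tosses - heads
    return abs(heads - tails) / tosses

few  = relative_imbalance(10)          # can easily be 0.2 or more
many = relative_imbalance(1_000_000)   # almost certainly below 0.01

# and the chance that three throws all agree is exactly 2/8 = 1/4
p_all_same = 2 / 2 ** 3
```

The imbalance shrinks roughly as one over the square root of the number of tosses, which is why a law that is only statistical becomes, for 10²² molecules, indistinguishable from an exact one.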
Let us return now to the main topic of this Section. Complexity is a relatively new field of study, arising from a recognition that there are areas of science that are too complex to be tackled by the conventional bottom-up methodology. A statistical approach is the only available way for some problems. An everyday example occurs in meteorology, where it is quite common to read a weather forecast along the lines that the chance of rain tomorrow is 60%, with a 30% chance of an afternoon thunderstorm. This may be frustrating if one is planning a picnic and would like more certainty about what to expect from the heavens. However, that is the best prediction possible at the present stage of development of meteorology.

The veins in a leaf, showing a fractal-like pattern. Image courtesy of Curran Kelleher (https://www.flickr.com/photos/10604632@N02/922705627 under CC licence https://creativecommons.org/licenses/by/2.0/ (accessed 2020/5/10))

Example of a Mandelbrot Set. The Mandelbrot Set is generated by successive repetitive applications of a simple mathematical formula. Image courtesy of Wolfgang Beyer (https://commons.wikimedia.org/wiki/File:Mandel_zoom_00_mandelbrot_set.jpg (accessed 2020/5/10))
Not only can similar behaviours be found at different levels within a single complex system, but also some common patterns can be observed over and over again in completely unrelated contexts. Examples are the growth curves associated with various types of tumours, crystals, prices (e.g. in the stock market), defects and infiltrations in materials, etc.

A plot of the dimensionless mass vs. dimensionless time for a wide variety of species. Figure reprinted from ref [9]. by permission of Springer Nature, Copyright (2001)
Dimensionless units are used in physics, and in other sciences, to avoid dependence on arbitrary units, such as the metre and the kilogram, which have been specified by humans and have no particular physical significance. For instance, a person’s mass may be specified as 77 kg, or 170 lb. In this case, his or her mass is being compared with that of a lump of metal lying in a Paris vault.6 By comparing the mass of an animal with some other more relevant measurement, perhaps the mass of the same animal at birth (or hatching), one can obtain a dimensionless value for the mass at any time after the animal’s birth. Time may also be expressed in dimensionless units, by comparing the elapsed time with some other relevant unit of time (e.g. the average lifetime of individuals of that species). (For the dimensionless units of time and mass actually used in Fig. 2.4, the reader is referred to Ref. [9].)
What is striking in Fig. 2.4 is that when the appropriate choice is made, the growth curves of many species collapse onto a single graph. If we infer that this pattern is applicable to all animals, we have in Fig. 2.4 a law expressing the growth of any animal’s mass as a function of time. This is a very far-reaching and powerful statement.
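The idea behind such a collapse can be demonstrated with a toy growth law (our own illustration, with invented species parameters; the actual rescaling used in ref. [9] is more sophisticated): once mass is divided by the adult mass and time by a species-specific growth timescale, very different species trace out the same curve.

```python
import math

def mass(t, adult_mass, tau):
    """Toy growth law: exponential approach to the adult mass."""
    return adult_mass * (1.0 - math.exp(-t / tau))

# hypothetical species: (adult mass in kg, growth timescale in years)
species = {
    "hen":   (2.0, 0.3),
    "cow":   (500.0, 1.5),
    "shrew": (0.01, 0.1),
}

# In raw units the three growth curves look nothing alike, but after
# rescaling, mass/adult_mass at dimensionless time t = 2*tau is the
# same number, 1 - e^(-2), for every species: the curves collapse
# onto a single graph.
collapsed = {name: mass(2 * tau, M, tau) / M for name, (M, tau) in species.items()}
```

The collapse here is built in by construction; the striking empirical claim of Fig. 2.4 is that real organisms, with no such law assumed in advance, behave the same way.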
2.5 Understanding
As we have seen in Chap. 1, at various times in history, physicists have felt that everything of note in physics had already been discovered. Although research work was still zealously proceeding, a Theory of Everything (TOE) was postulated and the common wisdom was that it was only a matter of time before all the details would be wrapped up as well. However, the demise of physics, like that of Mark Twain, turned out to be “greatly exaggerated”: a TOE is still sought in vain, and physicists continue to seek explanations for new phenomena that are currently being discovered.
At the start of this chapter, we explored the origins of rational thought, the nature of logic and its relationship with physics. We discussed two approaches that are currently in use in the quest for understanding: a bottom-up methodology, where we begin with physical laws and deduce the consequences, and a top-down approach, which looks for common patterns in a range of very different phenomena.
Before proceeding further, we must clarify what we mean by “understanding”, a term that we have introduced glibly and even included in the title of the Chapter. The various dictionaries are of little help, providing us with a wealth of definitions with different shades of meaning. Richard Feynman has added his contribution to the confusion by famously declaring that no one understands Quantum Mechanics.
For our purposes in this book, let us try to firm up our interpretation of “understanding” by comparing it with “knowledge”. In the course of our lives we gain knowledge of many facts, which have been accrued from our observations, readings, and other sources of information. In general, these facts are collected one by one, as if they were isolated among themselves. However, when we have garnered enough of them, we may start to notice relationships between some of them. This is the beginning of what we mean by understanding: true understanding occurs when these relationships enable us to make predictions. In fact, predictability based on “understanding” is the essence of science.
Some would argue that true understanding comes only when physical laws are stated, from which mathematics can be used to deduce new facts, which are in turn verified by further observations. This is the bottom-up approach that we have discussed above, and whose power is demonstrated by the success of Newtonian Mechanics. However, as we shall see in Chap. 4, even this rigorous methodology may contain a fatal flaw embedded in its heart.
In Sect. 2.4, we saw that the less formal approach of pattern matching may provide us with a different kind of understanding. If Fig. 2.4 does indeed represent a true growth “law” for the animal kingdom, we can expect it to be valid also for animals (e.g. dogs, cats, lions, etc.) not included in Fig. 2.4. Otherwise Fig. 2.4 is little more than a collection of curious coincidences. As is always the case in science, further observation remains the final arbiter.
So, while it is true that the bottom-up approach lies at the core of physics, and will always remain the bedrock on which physical understanding is built, in some circumstances the complexity of nature forces physicists (and other scientists) to countenance a less formal, top-down, methodology. In the next chapter, we will explore some of these ideas further.