Mervyn Peake’s Gormenghast Castle is one of the strangest settings for a work of fiction – vast, misshapen, ancient, crumbling and architecturally idiosyncratic. Peake’s visual imagination was wonderful – he was an artist and illustrator as much as a writer – and his sharp and striking descriptions create a sense of a world of solidity, richness and detail. As you read his novels Titus Groan and Gormenghast, Gormenghast Castle begins to inhabit your imagination. Over the years, some particularly committed, and perhaps slightly obsessive, readers have tried to piece together the geography of the castle from its scattered descriptions. Yet this appears to be an impossible task: the attempt to draw a map, or build a model, of Gormenghast Castle leads to inconsistency and confusion – the descriptions of great hallways and battlements, libraries and kitchens, networks of passages and vast, almost deserted wings can’t be reconciled. They are as tangled and self-contradictory as the inhabitants of the castle itself.
Peake’s verbal magic aside, this should not be surprising. Creating a fictional place is a bit like setting a crossword. Each description provides another ‘clue’ to the layout of the castle, city or country being imagined. But as the number of clues increases, knitting them together successfully soon becomes extraordinarily difficult – indeed, it rapidly becomes impossible, both for Gormenghast’s readers and for Peake himself.
Troubles with the coherence of fictional worlds go far beyond mere geography, of course. Stories have to make sense in so many ways: through consistency of plot, character and a myriad of details. Some authors go to inordinate pains to minimize such mishaps: J. R. R. Tolkien set The Hobbit and The Lord of the Rings in a world – Middle-earth – with a detailed history, mythology and geography, complete with maps, to say nothing of invented ‘Elvish’ languages with extensive vocabulary and grammar. At the other extreme, Richmal Crompton, author of the stories of the charmingly roguish schoolboy William Brown, sketched in the details of her stories with considerable abandon, and cheerfully admitted flagrant inconsistencies (the hero’s mother is sometimes Mary, sometimes Margaret; his best friend is either Ginger Flowerdew or Ginger Merridew).
So inconsistencies are one thing that make fiction different from fact: the actual world may seem puzzling, paradoxical and downright contrary, but it cannot actually be self-contradictory; and while a description of a castle or a country can turn out to make no sense, an actual castle or country is, by its very existence, perfectly consistent – all the facts, distances, photographs, theodolite measurements, satellite imagery and geological soundings must yield a coherent picture – because there is just one unique, actual world. But with fictional worlds, avoiding inconsistency requires incredible vigilance. Despite the painstaking efforts of a brilliant and remarkably retentive mind, Tolkien’s Middle-earth has yielded a haul of apparent inconsistencies, when scoured by its huge fanbase.
Fictional ‘worlds’, even the immensely detailed worlds of Peake and Tolkien, are notable too for their sheer sparseness. In real life, everyone has a specific birthday, fingerprint and an exact number of teeth. In fictional worlds, most characters have none of these properties, or any of a million others, whether significant (having a recessive gene for haemophilia) or trivial (the precise family relation to Elvis1).
But, of course, the scarcity of information in fiction is much more profound than this. Consider again Anna Karenina, whose public persona, relationships and, perhaps, sense of her own identity, depend on her beauty. Yet what did she look like? Artist and celebrated book-cover designer Peter Mendelsund points out that Tolstoy says astonishingly little – that she has thick lashes, a thin down of hair on her upper lip and scarcely more.2 Is she tall or short? Blonde, redhead or brunette? Blue-eyed or brown-eyed? The astonishing thing is not that Tolstoy tells us so little, but that we don’t notice and, still less, care. We can read the book with the subjective feeling that it is the story of a flesh-and-blood, three-dimensional woman, rather than a blurry stick figure, but Tolstoy tells us almost nothing about which flesh-and-blood, three-dimensional woman.
One might retort, of course, that literary fiction is not about the physical appearance of characters, but about the inner life of the mind. Yet the truth is that Anna’s mind is just as vaguely sketched as her body: what sort of person is Anna, exactly? How would it feel to have a conversation with her? How does she view the Russian state and its vast inequalities? Is she both defiant of, and crushed by, the opprobrium she receives in pursuing her affair with Vronsky? The wonder of Tolstoy’s novel is that these questions are not answered, but are tantalizingly and fascinatingly open: we can ‘read’ Anna in a variety of ways: as heroic, obsessive, romantic, defiant, wild, oppressed, loving, or cold, in various degrees and combinations. But this very openness implies, of course, that Anna’s characteristics, whether physical or mental, are not pinned down by the text of the novel.
Consider, now, the ‘real Anna’ we imagined in the Prologue. Suppose, indeed, that the novel were a novelized biography, rather than pure fiction. Then all those missing facts about Anna (her precise physical appearance, her genome, her relationship to Elvis) would be completely well defined. We might be able to figure some of them out, through concerted research (e.g. careful genealogical analysis might reveal a common ancestor with Elvis in, perhaps, seventeenth-century Kiev); other facts (e.g. her height on her eleventh birthday) might be neither known nor knowable, given the surviving traces from her life. But would there be a true ‘reading’ of Anna’s life, a precise delineation of her personality traits, motives and beliefs, if we only knew more?
Recall the two characteristics of fiction we mentioned earlier – inconsistency and sparseness. If Anna were to explain her own inner life, she would surely be as jumbled, incoherent and self-contradictory as Mervyn Peake’s Gormenghast Castle. Her explanations would be inherently sparse – she might have little idea of her views on many aspects of Russian society, the people around her, her own goals and aspirations, and many other topics she has scarcely considered. While the real Anna really would have a family relationship to Elvis, she would surely not have precisely defined views on the merits of different modes of Russian agricultural reform or the future of the Tsar. She could, of course, create and articulate opinions, on demand. But these opinions would themselves be both vague and liable to fall into self-contradiction. The real Anna’s mind would be just as much a work of fiction as the fictional Anna’s mind; our own minds are no more ‘real’. Whereas the fictional Anna is a sketchy and contradictory character created by Tolstoy’s brain, a real Anna would be an equally sketchy and contradictory character, created by her own brain.
The external world is quite the opposite, of course. It is specified in complete detail, whether we know the details or not. My coffee mug was bought on a particular day of the week and fired at a particular temperature, in a specific kiln; it has a particular weight and distance from the equator. And the real world is relentlessly consistent: for facts to hold good in the same world, they cannot be contradictory.
By contrast, our beliefs, values, emotions and other mental traits are, I suggest, as tangled, self-contradictory and incompletely spelled out as the labyrinths of Gormenghast Castle. It is in this very concrete sense that characters are all fictional, including our own. Inconsistency and sparseness are not just characteristics of fiction. They are also the hallmarks of mental life.
It is hardly controversial that our thoughts seem fragmentary and contradictory. But can’t the gaps be filled in and the contradictions somehow resolved? The world and mind of Anna Karenina are defined by Tolstoy’s text – there is no ‘ground truth’ that can fill out the details. But perhaps with real people, there might be such a ground truth, if only we search hard enough. Perhaps, somewhere within us, lies a complete specification of our beliefs, motives, desires, values, plans and more. Perhaps we have rich mental depths – a complete and coherent inner realm from which our thoughts and actions consistently follow. Perhaps we can uncover the contents of such inner depths, if we only search hard enough: we can consult our ‘inner oracle’ by asking ourselves to outline and explain our knowledge as clearly as we can. If so, we could then try to piece together the ‘wisdom’ of the inner oracle, from a careful study of its utterances; through weeding out its inconsistencies and filling in the gaps.
Might this work? The only way to find out is to try. And try we have. Two thousand years of philosophy have been devoted to the problem of ‘clarifying’ many of our common-sense ideas: causality, the good, space, time, knowledge, mind, and many more. Science and mathematics began with our common-sense ideas, but ended up having to distort them so drastically – whether ‘heat’, ‘weight’, ‘force’ or ‘energy’ – that they were refashioned into entirely new, sophisticated concepts, with often counter-intuitive consequences. We don’t intuitively distinguish between heat and temperature; common sense doesn’t distinguish weight, mass and momentum; we imagine (as did Aristotle) that if no force is acting on a body, it comes to rest – whereas in reality it keeps moving at a constant velocity; we have no intuitive idea that heat is a kind of energy; or that energy can be stored by moving objects uphill, carrying out chemical reactions, stretching elastic bands, and so on.
The laws of motion, thermodynamics and more that govern the physical world are strange and counter-intuitive. Indeed, this is one reason that ‘real’ physics took centuries to discover, and presents a fresh challenge to each generation of students. Whatever our inner oracle, hidden somewhere in our mental depths, might know, it can’t be anything like physics.3
Now, of course, no one seriously proposes that each of us has an inner Newton, Darwin and Einstein – or rather inner representations of their astonishing theoretical achievements – generating our common-sense explanations of the physical world. But maybe our inner oracle has something different: a simple, intuitive, approximate physics, biology or psychology. Perhaps our thoughts are guided by common-sense theories which, while nothing like the theories painstakingly created by science, might be theories none the less.
This is a seductive idea. Indeed, starting in the 1950s, decades of intellectual effort were poured into a particularly sophisticated and concerted attempt to crystallize some of our common-sense theories. The goal was to systematize and organize human thought in order to replicate it and to create machines that think like people. This was the guiding idea in the early years of one of the great technological challenges of our time: the goal of creating artificial intelligence.
The pioneers of artificial intelligence in the 1950s, 1960s and beyond, and their collaborators in cognitive psychology, philosophy and linguistics, took the idea of mental depth very seriously. Indeed, they took it for granted that the thoughts that we consciously experience and can put into words are drawn from a vast sea, or web, or database of similar, pre-formed thoughts, which we are not currently consciously experiencing. Behind each expressed thought lies, supposedly, a thousand others beneath the surface. And all this hidden knowledge is, it was assumed, organized into theories, rather than being a hopeless jumble. So to emulate human intelligence, the starting strategy is:
Step 1. To excavate our mental depths, and to bring to the surface as much of this supposed inner storehouse of beliefs as we can.
Step 2. To organize and systematize this knowledge to recover our hidden ‘common-sense theory’. To encode this knowledge in a computer database involves expressing it in a tidy and precise formal language that the computer can work with, rather than merely noting it down in ‘plain English’.
Step 3. To devise computational methods to reason over this database, in order to use this common-sense knowledge to make sense of new experiences, use language, solve problems, make choices, plans and conversation, and generally to engage in intelligent behaviour.
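To make these three steps concrete, here is a deliberately toy sketch in Python – every fact and rule below is invented purely for illustration, and no real system was remotely this simple – of the kind of architecture the pioneers envisaged: a handful of hand-encoded common-sense assertions standing in for Steps 1 and 2, and a forward-chaining reasoner, standing in for Step 3, deriving new conclusions from them.

```python
# Toy sketch of the classic 'knowledge base plus reasoner' architecture.
# All facts and rules are invented for illustration.

# Steps 1-2: hand-encoded common-sense fragments, expressed in a tidy
# formal language (here, (property, thing) tuples standing in for logic).
facts = {("liquid", "coffee"), ("dropped", "coffee"), ("granular", "sugar")}

# Rules: if every premise holds of a thing, conclude the consequent of it.
rules = [
    ({"liquid", "dropped"}, "splashes"),
    ({"granular", "dropped"}, "forms_a_heap"),
]

# Step 3: forward chaining - apply the rules until no new facts appear.
def forward_chain(facts, rules):
    facts = set(facts)
    while True:
        things = {thing for (_, thing) in facts}
        new = {(conclusion, t)
               for premises, conclusion in rules
               for t in things
               if all((p, t) in facts for p in premises)}
        if new <= facts:          # nothing further can be derived
            return facts
        facts |= new

print(forward_chain(facts, rules))
# Derives ('splashes', 'coffee'), but not ('forms_a_heap', 'sugar'),
# since the sugar has not been asserted to be dropped.
```

Notice where the difficulty lies: the reasoning machinery of Step 3 is almost trivial to build; everything hinges on whether Steps 1 and 2 can actually deliver a coherent stock of facts and rules.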
Early attempts to create artificially intelligent computer programs to replicate human intelligence took precisely this approach. There were plenty of sceptical philosophers and psychologists who felt that this method was doomed from the outset – people who suspected, in our terms, that mental depth might be an illusion. But the researchers were undeterred. If there was a chance that the approach might succeed, it was surely well worth attempting – the achievement of creating genuinely intelligent machines by capturing and recreating our own understanding of the world would be so spectacular.
And hopes were high. Over successive decades, leading researchers forecast that human-level intelligence would be achieved within twenty to thirty years. Yet progress seemed slower, and the challenges far greater, than had been imagined. By the 1970s, serious doubts began to set in; by the 1980s, the programme of mining and systematizing knowledge started to grind to a halt. Indeed, the project of modelling human intelligence has since been quietly abandoned, in favour of specialist projects in computer vision, speech-processing, machine translation, game-playing, robotics and self-driving vehicles. Artificial intelligence since the 1980s has been astonishingly successful in tackling these specialized problems. This success has come, though, from completely bypassing the extraction of human knowledge into common-sense theories.
Instead, over recent decades, AI researchers have made advances by building machines that learn not from people but from direct confrontation with the problem to be solved: much of AI has mutated into a distinct but related field: machine learning. Machine learning works by extracting information not from people, but from huge quantities of data: images, speech waves, linguistic corpora, chess games, and so on. And this has been possible because of advances on a number of fronts: computers have become faster, data sets larger and learning methods cleverer. But at no stage have human beliefs been mined or common-sense theories reconstructed.
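For contrast, here is an equally toy sketch of the machine-learning alternative – again, all data are invented for illustration – in which no rules are written down at all: the system’s behaviour is extracted directly from labelled examples, using the simplest of all learning methods, nearest-neighbour classification.

```python
# Toy sketch of learning from data rather than from encoded knowledge.
# Features and labels are invented for illustration.

def nearest_neighbour(train, query):
    """Return the label of the training example closest to the query."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda example: sq_dist(example[0], query))[1]

# Hypothetical features: (mass in grams, hardness on a 0-1 scale).
train = [
    ((0.05, 0.0), "splashes"),   # a drop of coffee
    ((0.02, 0.3), "scatters"),   # a grain of sugar
    ((5.00, 0.9), "bounces"),    # a ball-bearing
]

print(nearest_neighbour(train, (4.0, 0.8)))  # -> 'bounces'
```

The point of the contrast is that nothing resembling a common-sense theory is ever written down or recovered: the ‘knowledge’ lives entirely in the stored examples and the measure of similarity between them.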
The project of creating artificial intelligence by extracting, systematizing and reasoning with human thoughts – trying to coax out the ‘theories’ of our inner oracle – failed in a particularly instructive way. The very first step, drawing out the knowledge, beliefs, motives, and so on, that underpinned people’s behaviour, turned out to be hopelessly difficult. People can fluently generate verbal explanations and justifications of their thoughts and actions; and, whenever parts of those explanations are queried, out will tumble further verbal explanation or justification. But analysis of these streams of verbal description, however long they continue, shows that they are little more than a series of loosely connected fragments. Chess grandmasters, it turns out, can’t really explain how they play chess; doctors can’t explain how they diagnose patients; and none of us can remotely explain how we understand the everyday world of people and objects. What we say sounds like explanation – but really it is a terrible jumble that we are making up as we go along.
This became all too obvious when artificial intelligence researchers attempted to carry out Step 2: arranging and organizing the fragments into a coherent and reasonably complete form to create the database for the artificial intelligence system. This proved a hopeless task: the fragments of knowledge that people generate are both woefully under-specified and fatally self-contradictory. So it was scarcely possible even to get started on Step 3: getting computers to reason with the extracted human knowledge.
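To illustrate the sort of obstacle Step 2 immediately meets, consider one more toy sketch – the ‘extracted’ fragments below are invented for illustration. The moment verbal explanations are formalized as assertions, even a trivial consistency check turns up direct contradictions; and a classical logical reasoner handed both P and not-P can derive anything whatsoever.

```python
# Toy sketch: fragments of common-sense physics (invented here),
# formalized as (subject, property, truth-value) assertions.
fragments = {
    ("coffee", "holds_together", True),   # 'water likes to stick together'
    ("coffee", "holds_together", False),  # 'coffee breaks into droplets'
    ("sugar", "finds_a_level", True),
}

def contradictions(frags):
    """Find properties asserted both true and false of the same subject."""
    return [(s, p) for (s, p, v) in frags if v and (s, p, False) in frags]

print(contradictions(fragments))  # -> [('coffee', 'holds_together')]
```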
It turned out, indeed, that even the simplest aspects of knowledge, concerning the most basic properties of the everyday world, proved completely intractable. For example, artificial intelligence researchers had hoped to extract the common-sense physics that was presumed to govern our everyday interactions with the physical world. In the 1960s and 1970s, this seemed a good place to start the project of capturing human knowledge.4 Yet half a century later, we are still at square one.
To understand why, let us focus for a moment on a familiar aspect of common-sense physics: the knowledge that we apparently all share about the behaviour of everyday objects and substances. Specifically, let us think about what we know about the behaviour of coffee, ball-bearings and sugar when dropped onto the kitchen floor. As we all know, the coffee would splash and settle in variously sized puddles and blobs; the sugar might form a shallow heap or spread more evenly across the floor; and the ball-bearings would scatter in all directions, disappearing under units and appliances.
So we know roughly how everyday things behave. But it is surprisingly difficult to explain convincingly why any of these things is true. We can certainly generate lengthy explanations. The coffee spreads out, we might say, because it is trying to ‘find a level’. But quite why some coffee stays on the floor and some breaks free into droplets and splashes is not explained by the ‘find a level’ intuition. Perhaps one clue is that water, the main ingredient of liquid coffee, likes to stick together, which would explain why coffee moves through the air in streams and droplets, and why it holds together in rounded puddles and patches. This story could, as you can imagine, be extended indefinitely.
Now what about sugar? This doesn’t splash like coffee for some reason; it also doesn’t seem to ‘stick’ together. It spreads out a little on impact with the floor but not much – this must be something to do with its roughness and with some consequent friction when it tries to move. Would this be different if the sugar were super-fine or super-coarse? Is coffee behaving a little like frictionless, or nearly frictionless, sugar? Presumably the sugar wants to find a level, like coffee, but much less so – though if it is blown by a draught or a fan it can gradually create a fairly even covering over the floor. Ball-bearings are different again, being smooth, hard and having no tendency to stick together – when one ball-bearing lands on another with a glancing blow, both can shoot off in any direction. Quite how this works is not too clear. Somehow the bounciness of ball-bearings is important – balls of putty would behave very differently. And this is odd, as ball-bearings do not seem very elastic (unlike rubber balls).
Now suppose that coffee, sugar or ball-bearings were being dropped into an empty plastic bucket – or a bucket full of water – or any of a number of variations. You can generate verbal explanations for this too. One point to note is that each explanation seems to be new, different and typically incompatible with the last one, rather than following from a single set of underlying principles – the explanations of every new scenario just seem to run off in all directions, apparently without limit. Moreover, each step in each explanation can itself be queried. Why does water tend to ‘find a level’? Why do ball-bearings bounce off each other? Why does sugar change consistency as it enters the bucket of water? And so on.5
We have run, predictably enough, into the twin problems of sparseness and incoherence. Our explanations have holes everywhere and inconsistencies abound. Indeed, psychologists have a phrase, ‘the illusion of explanatory depth’, for the bizarre contrast between our feeling of understanding and our inability to produce cogent explanations.6 Whether explaining how a fridge works, how to steer a bicycle, or the origin of the tides, we have a feeling of understanding which seems wildly out of balance with the mangled and self-contradictory explanations we actually come up with.
Perhaps the single most important discovery from the first decades of artificial intelligence is just how profound and irremediable this problem is. The starting assumption was that our intuitive verbal explanations just needed to be fleshed out and patched up – that there must be a common-sense theory in there ‘deep down’ if we only looked hard enough. Our assumptions needed only to be firmed up, and our concepts knocked into shape. The hope was that, with a bit of order and organization, verbal descriptions could be distilled into clear, comprehensive theories that could be coded up by computer programmers.
But the opposite proved to be the case. Armies of artificial intelligence researchers, with an impressive combination of raw ingenuity, mathematical firepower and sheer tenacity, struggled to squash verbal knowledge into a usable form. And they have consistently failed. Our verbal explanations of the physical world – and equally of the social and economic worlds, or of our moral and aesthetic judgements – turn out not to be a confused description of inner clarity, but a confused description of inner confusion.
Our verbal explanations and justifications are not reports of stable, pre-formed building blocks of knowledge, coherent theories over which we reason, deep in an inner mental world. They are ad hoc, provisional and invented on the spot. We have consulted the inner oracle of common-sense physics, psychology, ethics and much more hoping to uncover its hidden wisdom. But the oracle turns out to be a fraud, a fantasist, a master of confabulation.
We have vastly underestimated our powers of invention. Our ‘inner oracle’ is such a good storyteller – so fluent and convincing – that it fools us completely. But the mental depths our mind conjures up are no more real than the worlds of Gormenghast or Middle-earth. The mind is flat: our mental ‘surface’, the momentary thoughts, explanations and sensory experiences that make up our stream of consciousness is all there is to mental life.
The illusion of mental depth is much more pervasive than it appears at first sight. Two and a half millennia of philosophy have tried to systematize our intuitions and verbal explanations about core concepts, from ‘the good’ to the nature of objects and events, mind and body, knowledge, belief or causality. This only makes sense if there is a coherent way of fitting together our intuitions and explanations using these categories. Yet no such coherent theory has ever emerged.
In the late nineteenth and twentieth centuries, philosophers in what was to become the analytic tradition began to explore what became a hugely influential approach to wrestling the chaos of common sense into shape. Gottlob Frege, Bertrand Russell, the early Wittgenstein and many others attempted to regiment common sense through understanding the language in which it was expressed and, specifically, to focus on clarifying the ‘logical structure’ of language by exploring and systematizing intuitions about meaning. Getting language and meaning ‘straight’ was seen as a crucial stepping stone for launching an indirect attack on the big philosophical questions. The idea was that many of the confusions in our thoughts would disappear if only we could clarify how those thoughts are captured in language. Yet it turned out that our intuitions about language and meaning are also hopelessly full of gaps and contradictions. Intuitions are either absent, or conflict horribly, over such elementary questions as the meaning of names (e.g. there is deep puzzlement over tricky cases such as Homer – more ‘oral tradition’ than author? – or Sherlock Holmes or any fictional character, people with noms de plume, multiple people with the same name, and so on). Again, the very idea that there is some inner coherent theory of meaning that can be drawn out by intuition and reflection is misguided – our use of, and thoughts about, the meaning of our language are a chaotic, incoherent jumble.
After careful reflection on our contradictory intuitions about meaning, truth, knowledge, value, mind, causality, or whatever common-sense notion is under analysis, philosophers are able to fill in a gap here and iron out a contradiction there. But new gaps and fresh contradictions continually appear. If the mind is a confabulator, not a theorist, no such theory of our common-sense intuitions about anything can be constructed, any more than enthusiastic fans would ever be able to draw a map of Gormenghast Castle.7
In a parallel development, linguists began to pursue the project of systematizing the structure of language, following Noam Chomsky’s project of generative grammar: the goal was to systematize our intuitions about which sentences are acceptable into a mathematically rigorous theory, which was assumed to capture the nature of each person’s knowledge of the language. Yet this programme too has foundered: it turns out that even the structural patterns observed in language – not just its meaning – are a jumble of inconsistent regularities, sub-regularities and outright exceptions.8

The same story applies in economics. Economists worked on the assumption that consumers and companies would have a complete and consistent theory of the ‘world’ (or the economically relevant parts anyway), including a complete understanding of their own preferences. The behaviour of markets could be seen as ‘emerging’ from the interaction of these ‘super-rational’ agents. This programme, for all its mathematical elegance, has also foundered. For one thing, countless experiments in psychology and behavioural economics have shown just how spectacularly ill-defined and self-contradictory our beliefs and preferences are. For another, the confusion of individual decision-makers (their exuberant hopes, desperate panics, their tendency to blindly follow or wildly over-react) can generate unexpected turbulence at the level of markets or of entire economies.
The idea that people have complete and consistent theories of the world, and preferences as to what they want, is also widely presupposed in business and policy. Market researchers try to work out what goods or services we want. Decision analysts attempt to distil the beliefs and preferences of the many stakeholders in complex projects such as airports or power stations. Health economists try to put stable monetary valuations on disease, disability and life itself. All of these projects are bedevilled by the same problem: the inconsistent and partial nature of our intuitions. People routinely supply wildly different answers to exactly the same question (even within a few minutes), and their answers to different questions are often inconsistent; there is the same variation in their actual choices (people can express a high valuation of their own life, but still engage in dangerous behaviour). And often we are expressing views about matters (for example nuclear power, climate change, or whether a new cancer drug should be funded by the government) where our explanations are shallow indeed – most of us understand these matters no better than we understand the operation of our fridge. We may or may not have strong opinions, but these opinions don’t – and could not possibly – spring from coherent and fully spelt-out common-sense theories. There are, as the artificial intelligence ‘experiment’ showed, no such theories to be extracted. The problem, in short, is that our intuitions about everyday physics, psychology, morality, meaning, or what we want, are no more coherent than Peake’s description of Gormenghast Castle.
Yet we are often seduced by a very different picture – that the confusions and contradictions in our thoughts and lives must represent a clash between multiple, and conflicting, selves. Perhaps, for example, we believe that we are the product of a conflict between a ‘conscious self’ and also a hidden, perhaps dark, atavistic ‘unconscious self’. But the incoherent nature of the ‘self’, and the thoughts, motives and beliefs it is supposed to contain, is not explained by adding extra selves, any more than the incoherence of Peake’s description of Gormenghast Castle is resolved by postulating multiple castles.9
I’ve spent decades being pulled towards, and at the same time desperately resisting, the conclusions in this book, as I mentioned in the Prologue. This may strike you as puzzling. For many people, the idea that we are, at bottom, story-spinning improvisers, interpreting and reinterpreting the world in the moment, is immediately appealing for a variety of reasons.
For one thing, the fact that our thoughts don’t cohere in a way that can be replicated in a computer may seem to provide a welcome defence against the idea that human freedom, creativity and ingenuity can be reduced to mere calculation.
Moreover, the ‘mind is flat’ perspective can seem entirely natural – perhaps even old news – from the perspective of the arts, literature and humanities, where there is a long tradition of seeing people and their actions as the subjects of conflicting, fragmented and endlessly re-created interpretation. Indeed, many scholars would go further and argue that the right conclusion to draw from our reflections is that human nature cannot, and should not, be understood from a scientific point of view at all. Perhaps we should simply embrace our intuitive interpretations of ourselves and each other, with all their gaps and contradictions, as all there is to say about human behaviour.
According to this perspective, psychology should be aligned with the arts and the humanities rather than the sciences; perhaps understanding ourselves is inevitably just a matter of eliciting, reflecting on, analysing, challenging and reconceptualizing our interpretations of thought and behaviour; and interpretations of other people’s interpretations; and so on, indefinitely. If so, then perhaps we should create a psychology in which everyone has a valid perspective on themselves and everyone else, in which any view can be re-analysed, contested, overturned or revived, which sees the understanding of mind and behaviour as an open-ended discussion, where there are no ‘right answers’ and never could be.
For many people, this is a thrilling and ennobling vision. I am not one of those people. I despair at the prospect that our understanding of ourselves will take us no further than our hopelessly inadequate intuitive explanations, reflected and distorted in an endless hall of mirrors of equally flawed and baseless intuitions. For me, this is not liberation but nihilism; not a freeing of psychology from the bonds of the sciences, but the total abandonment of the project of understanding ourselves through the application of science.
Viewing psychology as part of the arts and humanities is to react to the illusion of mental depth in, I think, precisely the wrong way: it embraces the ad hoc, the improvised, the partial and contradictory style of everyday verbal explanation of our thoughts and behaviours, and adds endless layers of further, ever more convoluted, verbal speculation, whether about dreams, associations, complexes, multiple selves, metaphors, archetypes, phenomenology, and more. Turning a wild, creative but entirely untrustworthy imagination upon itself could scarcely be expected to lead to reliable results. This would be like explaining the origin of fairy tales by means of yet another fairy tale.
A science of the mind requires the opposite approach: understanding how the ‘engine of improvisation’ that is the core of human intelligence can be constructed out of the machinery of the human brain. The brain is, after all, ultimately a biological machine – specifically, a machine constructed from a network of about a hundred billion brain cells, densely wired together. It is a biological machine that creates, improvises, dreams and imagines. One of the deepest challenges in science is to figure out how this can possibly work – to understand how electrical and chemical activity in our neural circuits can somehow generate our stream of thought and actions.
Early artificial intelligence explored one tack – and initially a very appealing one – which makes perfect sense if we assume that the human brain operates according to roughly the same principles as the type of computer on which the researchers were writing their computer programs – and, indeed, on which I am writing these words. The symbolic explanations we generate in everyday language don’t seem so far from the symbolic representations used in computer languages and databases. We just need to knock our intuitive verbal explanations into shape a bit – fill in the gaps and iron out the inconsistencies – and, perhaps, we can turn them into the contents of our inner database, over which symbolic calculations can occur. So researchers took the stories, justifications, intuitions and explanations that we spin at face value, and tried to systematize and organize them into theories over which a machine might reason. Yet, as we have seen, this approach has never worked, and can never work: our stories are irreparably flimsy, inconsistent inventions, made up on the spot, rather than clues to deep inner theories.
But there has long been an alternative viewpoint: that biological computation is very different from the symbolic computation of conventional computers. In Part Two, we’ll explore the so-called ‘cooperative’ style of computation used by the brain to tell a new story about how the mind works – how our continually inventive stream of thoughts arises from the machinery of the brain. But first we need to explore, and undermine, our existing intuitions about how our minds work more thoroughly. We need to clear the ground before we can build anew. It turns out that the illusion of mental depth is far more insidious and all-pervasive than we have seen so far. Unravelling our intuitions and seeing our minds afresh is the topic of the rest of Part One.