If we are conscious of one thing at a time, and the brain is a network of 100 billion neurons communicating by streams of electrochemical pulses, we must necessarily be unconscious of almost everything our brain does. This should not surprise us. As we have seen, we are only ever conscious of the results of our brain’s attempts to make sense of the world – or rather, to make sense of some small part of it. Yet these results arise from a hugely complex cooperative computation, the cycle of thought, involving a substantial fraction of those 100 billion neurons and drawing on vast amounts of information from our senses and our memories.
Consciousness is, then, analogous to the ‘read-out’ of a pocket calculator, a search engine, or an ‘intelligent’ computer database. When we ‘feed in’ a sum (43 + 456), a search term (‘Fife fishing villages’), or a query (‘What is the capital of France?’), the ‘read-out’ gives us an answer, but it gives us absolutely no explanation or justification of where this answer came from. We have not the faintest idea of the algorithms for binary arithmetic lurking deep within the calculator, the vastness of the web that our search engine is exploring, or the clever inferences and huge ‘knowledge-base’ embodied in our intelligent database. When we attend to an image, word or memory, we are, in essence, asking ‘What sense can I make of that?’ The conscious read-out springs to mind – an interpretation of what we are seeing or thinking about. Yet behind that read-out is a welter of electrical signals sparking across rich and complex networks of neurons, responding to current sensory input and past memory traces. This is the real nature of the unconscious: the vastly complex patterns of nervous activity that create and support our slow, conscious experience.
The neural processes within each cycle of thought are, crucially, not the kind of thing that could be conscious. They are, after all, hugely complex patterns of cooperative neural activity, searching for possible meanings in the current sensory input by reference to our capacious memories of past experience. But we are only ever conscious of particular interpretations of current sensory input. We could no more be conscious of our mental processes than a pocket calculator could ‘read out’ the design and operation of its own computer chip. Similarly, we could no more be conscious of the flow of cooperative neural activity by which we make sense of the world than we could be conscious of the biochemistry of the liver.
We are conscious of, and could only ever be conscious of, the meanings, patterns and interpretations that are the output of this cooperative computation. Consciousness is limited to awareness of our interpretation of the sensory world; and these interpretations are the result of each cycle of thought, not its inner workings.
To reinforce the conclusion that conscious experience reports the brain’s interpretation, rather than providing direct access to the input to our senses or the processes by which the interpretation is created, consider the lovely image on the left-hand side of Figure 36, created by the Japanese vision scientist Masanori Idesawa. The vividness of the smooth white billiard ball, radiating black conical spines, is quite astonishing: the white spherical surface is bright, smooth and shiny, and it seems to float a little above the white background of the page, a slightly brighter white. Look closely at the boundary between the sphere and the white background, and you may get a sense of a discernible curved edge, marking where sphere ends and background begins. Some of the black spines loom somewhat ominously towards us; others point away from us. Yet this entire construction is pure interpretation – a product of your imagination. The figure is no more than a few flat black geometric shapes on a white background; the shapes look innocently two-dimensional when randomly rearranged, as in the right-hand panel. Yet we ‘see’ the spiky sphere, not the flat patches of which it is composed. Our conscious experience is determined by what the brain thinks is present – the output of the cycle of thought, not its input.
What, then, are the underlying (and unconscious) calculations that our brain networks are carrying out in order to generate the conscious experience of a spiky white sphere? Introspection is, of course, of no avail here. But we can get some sense of the nature and complexity of the calculations by considering how we would write a computer program to mimic our brain’s ability to ‘create’ Idesawa’s 3D spiky sphere from a scatter of 2D shapes.
What principles might be required to do this? For a start, the computer program will need to calculate how 3D figures project into 2D: how, for example, a black cone, pointing towards us, may project a ‘triangle’, with its shortest side bulging slightly outwards. And it will need to calculate that a solid white sphere would block our view of the spines pointing away from us – so that, for example, some might project smaller, chopped-off triangles, with the shorter side curving slightly inwards, following the line of the sphere. Our program will somehow have to capture the fact that a spike pointing towards us projects as shorter and stubbier than one perpendicular to our line of sight; and to work out the 3D location and orientation of the join between each black cone and the white sphere, from observing its 2D position in our image. Moreover, the program would need to be able to piece together these different 3D locations, realizing that they are all consistent with a single curved surface – namely the surface of an (invisible) white sphere. In short, the calculations involved are nothing less than an exquisitely subtle and complex web of geometric inferences.2
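To make one small part of this concrete, here is a minimal sketch, in Python, of the projection geometry just described, assuming a simple pinhole camera; the focal length, coordinates and function names are illustrative assumptions, not part of Idesawa’s figure or of any model discussed in this book.

```python
# A toy illustration (assumed pinhole-camera geometry) of how a 3D spike
# projects into the 2D image, and why a spike pointing towards the viewer
# projects as shorter than an identical spike lying across the line of sight.
import numpy as np

FOCAL_LENGTH = 1.0  # assumed focal length of the pinhole camera


def project(point_3d):
    """Perspective projection of a 3D point (x, y, z) onto the image plane."""
    x, y, z = point_3d
    return np.array([FOCAL_LENGTH * x / z, FOCAL_LENGTH * y / z])


def projected_length(base_3d, tip_3d):
    """Length, in the 2D image, of a spike running from base_3d to tip_3d."""
    return float(np.linalg.norm(project(tip_3d) - project(base_3d)))


# Two spikes of identical 3D length (1 unit), attached to a surface roughly
# 5 units away: one points towards the viewer, one lies across the view.
spike_towards_viewer = (np.array([0.5, 0.0, 5.0]), np.array([0.5, 0.0, 4.0]))
spike_across_view = (np.array([1.0, 0.0, 5.0]), np.array([2.0, 0.0, 5.0]))

print(projected_length(*spike_towards_viewer))  # 0.025: heavily foreshortened
print(projected_length(*spike_across_view))     # 0.2: far longer in the image
```

The program sketched in the text would have to run this kind of geometry in reverse: working back from the lengths, shapes and curved edges in the flat image to the 3D layout of spikes and sphere that best explains them.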
When trying to work out the interpretation that best fits the web of inferences, the ideal approach is to consider all the constraints simultaneously – and to continue to ‘jiggle’ the interpretation until it fits this full set of constraints as well as possible. If the brain instead focused on just a few constraints, satisfied them as well as possible, and only then looked at the remaining constraints, there would be a real danger of heading up a cul-de-sac: the next constraints might not fit the tentative interpretation at all, and it would then have to be abandoned. The task of simultaneously matching a huge number of clues and constraints is just what the brain’s cooperative style of computation is wonderfully good at. And these are the calculations that our imagined computer vision program would have to carry out – and which, we can conjecture, the brain must carry out in order to create Idesawa’s spiky sphere.
It turns out, in fact, that the brain may be particularly well adapted to solving problems in which large numbers of constraints must be satisfied simultaneously. One influential account suggests that different aspects of the sensory input (and their possible interpretations) are associated with different brain cells, and that the constraints between sensory fragments and interpretations are captured by a network of connections between these brain cells. These neurons then cooperate, by exchanging electrical signals, to find the ‘best’ interpretation of the sensory data (or, at least, the brain settles on the best interpretation it can find).3 The details of the process are complex, and only partially understood. But it seems clear that the network-like structure of the brain is perfectly designed for the kinds of cooperative calculation needed to weave together the many clues provided by our senses into coherent objects.
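To give a flavour of this style of computation, here is a toy relaxation network, loosely in the spirit of Hopfield-type constraint-satisfaction networks rather than a model taken from the account cited above. Each unit stands for one fragment of a candidate interpretation of Idesawa’s figure; the weights, invented purely for illustration, encode which fragments support or contradict one another; and the network repeatedly adjusts one unit at a time until the constraints are satisfied as well as it can manage.

```python
# A toy constraint-satisfaction ('relaxation') network: units are candidate
# pieces of an interpretation; symmetric weights encode the constraints
# between them (positive = mutually supporting, negative = incompatible).
# All units, weights and 'evidence' values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

units = ["white sphere present", "patches are merely flat",
         "spike A cut off by sphere", "spike B cut off by sphere"]

W = np.array([
    [ 0.0, -2.0,  1.0,  1.0],   # a sphere explains the chopped-off spikes
    [-2.0,  0.0, -1.0, -1.0],   # 'merely flat' conflicts with all of that
    [ 1.0, -1.0,  0.0,  0.5],
    [ 1.0, -1.0,  0.5,  0.0],
])

# Weak external evidence from the image (e.g. the curved inner edges of the
# truncated triangles hint at an occluding curved surface).
evidence = np.array([0.2, 0.0, 0.3, 0.3])

state = rng.choice([0.0, 1.0], size=len(units))  # start from a random guess

# Asynchronous relaxation: keep 'jiggling' one unit at a time to better
# satisfy its constraints, until nothing changes any more.
changed = True
while changed:
    changed = False
    for i in rng.permutation(len(units)):
        new_value = 1.0 if W[i] @ state + evidence[i] > 0 else 0.0
        if new_value != state[i]:
            state[i], changed = new_value, True

for unit, value in zip(units, state):
    print(f"{unit}: {'accepted' if value else 'rejected'}")
```

With these made-up numbers the network always settles on the ‘sphere’ coalition: the flat-patches reading is rejected, while the sphere and the two occluded spikes are accepted together. A caricature, certainly, but it shows how a network of simple, locally interacting units can arrive at a globally coherent interpretation with no central overseer.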
So the calculations the brain needs to carry out are clearly going to be rather complex. It is tempting to imagine that the brain must have found some clever shortcut to avoid all these complex calculations. But the current consensus in artificial intelligence, machine vision and perceptual psychology is that no such shortcut exists. Computer vision systems, whether recognizing faces, scenes or even handwriting, typically work using roughly the ‘web of inference’ approach I’ve just outlined.4 It is even more tempting to think that no calculation of any kind is required – that we just ‘see’ what is there.5 This apparent immediacy of perception has even led some psychologists to suggest that perception is, in some not entirely clear sense, direct, rather than emerging from incredibly complex and subtle hidden inferences.
However, this idea of direct contact between our conscious experience and the ‘real world’ can’t be right – because we see the white spiky sphere, even when there is no white spiky sphere. There are just lines and shapes: the circles and spheres are constructions – perceptual conjectures to make sense of the 2D patterns projected onto our retina.
This line of reasoning implies that, when our brain pieces together a ‘puzzle’ from sensory fragments (e.g. a pattern of flat black shapes), the glue which sticks the different fragments together is inference: in this case, geometric reasoning about how spikes and the invisible surface will interact to generate a part of our 2D sensory input. And the puzzle will be ‘solved’, and a coherent interpretation will arise in our conscious experience, when the network of inferences has a solution: here, the spiky sphere elegantly explains the entire layout, size and shape of the black geometric shapes.
Perception, then, is a process of incredibly rich and subtle inference – the brain is carefully piecing together the best story it can about how the world might be, to explain the agitations of its sense organs. Indeed, attempts to interpret sensory input, language or our own memories typically involve inference of great subtlety to figure out which ‘story’ weaves together the data most compellingly. This viewpoint has a long history: it was discussed by the brilliant German physician, physicist and philosopher Hermann von Helmholtz as far back as 1867, before psychology had even resolved itself into a distinct field of study.6 Helmholtz realized that our experience of the world is not merely a copy of the light flowing into our eyes, or the sound waves flowing into our ears – he came to understand that perception requires puzzling out the significance of a set of clues, each of which has little significance when considered in isolation. Helmholtz was ahead of his time by an entire century – the inferential nature of vision has only come to dominate thinking in psychology, neuroscience and artificial intelligence since people began to build computer models of vision.
Moreover, perception is not merely inference – it is, of course, unconscious inference. The subtle patterns of reasoning that our perceptual processes go through are invariably opaque to us, whether in ‘constructing’ Idesawa’s ‘invisible’ sphere (Figure 36), suddenly ‘seeing’ a Dalmatian or cow (Figure 34), or ‘reading’ the emotional expression of Ivan Mozzhukhin, the Russian silent film star who demonstrated the Kuleshov effect (Figure 22). We can conjecture the kind of reasoning the perceptual system might go through, but we can’t report it, as it were, ‘from the inside’. All we know is the interpretation that is the result of perceptual inference, not the clues and chains of reasoning that our brain has used to reach that interpretation.
Yet, with regard to conscious awareness, perception is no different from any other type of thought, whether composing a tune, diagnosing a patient, choosing a holiday, having a daydream, losing oneself in a novel, formulating a mathematical proof or solving an anagram. In each of these cases, the cycle of thought can take us forward, step by step, and create meaning, but we are conscious only of the results of each step. Or consider, when deep in a novel, how our flow of experience is taken over by the story – while we have no awareness at all of the mysterious process by which the brain transforms sequences of printed letters into images and emotions. Or, more prosaically, when struggling with the rather baffling anagram ‘ncososcueisns’, I may eventually find myself wondering, perhaps after many failed attempts – could it be consciousness? I’m aware of the various possible words that pop into my mind; I have no awareness of the sources of these possibilities – how the different letters, and for that matter the various things I have recently been reading or thinking about, somehow trigger those words rather than others. This is because mental processes are always unconscious – consciousness reports answers, but not their origins.
According to the cycle of thought account, then, we are only ever conscious of the results of the brain’s interpretations – not the ‘raw’ information it makes sense of, or the intervening inferences. So there is nothing especially unconscious about perception. In perception, as in any other aspect of thought, the result is conscious; the process by which the result was achieved is not.
The very intuition that we experience a continuously flowing stream of consciousness must, according to the cycle of thought viewpoint, itself be an illusion. Rather, our conscious experience is a sequence of steps, of irregular length, in which the cycle of thought continually attends to, and makes sense of, fresh material.
But if so, shouldn’t we have some sense of discontinuity between one thought and the next – some subtle hint of the turning of our mental engine? As so often, eye movements are a crucial clue: scanning a perceptual scene or reading a text, our eyes jump, on average, three to four times a second. During a typical eye movement, the eye will be in motion for between about 20 and 200 milliseconds, depending on the angle through which the eyes ‘jump’. During this period, we are in effect almost completely blind. And each time our eyes ‘land on’ and stick to a new spot, the image projected onto our retina, and hence onwards to our brain, is a fresh snapshot, abruptly discontinuous with what we have seen before. So our visual input is a sequence of distinct ‘snaps’ of the scene or page, rather than a continuous flow – and when the eye lands on its target, a new cycle of thought begins, locking onto elements of the snapshot and making sense of it (recognizing an object, reading a facial expression, identifying a word).
It is rather astonishing that we are, from the point of view of conscious experience, entirely oblivious to the highly discontinuous process by which our eyes gather information. Look at your surroundings for a moment, and ask yourself, as you explore the world around you, how often you are moving your eyes. When you flick your attention dramatically from one side of the room to the other, you can reasonably infer that your eyes must have shifted. Most of the time, though, it is incredibly difficult to tell whether one’s eyes are moving at all, let alone whether they are moving in discrete jumps or roving smoothly across the image (in case you’re wondering, your eyes never rove smoothly, except when you’re tracking a moving object such as a passing car; aside from these specialized ‘smooth pursuit’ eye movements, your eyes always move in discontinuous jumps). Indeed, it is surprisingly difficult even to tell exactly where one is looking at any given moment – such is the power of the grand illusion that the entire visual field is simultaneously present in all its richness.
In particular, notice that, when viewed from outside (with the eye-tracker), the process of picking up visual information is clearly discontinuous: we lock onto a piece of the scene and impose meaning upon it; we shift our eyes to another piece of the scene and impose meaning on that, and so on, following the cycle of thought. But from the inside, the flow of thought feels entirely seamless. So we cannot trust introspection to reveal the step-by-step, cyclical nature of thought: even in vision, where the discontinuities can be read off directly in our eye movements, we are entirely unaware of them – the cracks between one thought and the next are, as it were, smoothly papered over.
But why does thought feel smooth, if it is actually ‘lumpy’? The explanation is the same as that for the grand illusion more generally. The brain’s goal is to inform us about the world around us, not about the workings of its own mechanisms. Were we consciously aware of the continually flickering snapshots as our eyes jump from place to place, we would be all too aware of how our eyes are moving, but completely unable to figure out whether the world itself is like the changing set of images in a slideshow or a single unitary scene.
It is, of course, only the stable world that matters – not our wildly unstable view as our eyes dash here and there. In order to decide how to act, we need to know what the world is like; our brains don’t care about the complex process of gathering and knitting together our stable world. We’re like military officers reading a message in a cipher that has been broken: in order to decide our next move in battle, all we care about is what the message says; the process by which the code was broken, with whatever teams of brilliant analysts and banks of computers, is entirely invisible and irrelevant.
So, in short, our sense that seeing and hearing are continuous arises because the brain is informing us that the visual and auditory worlds are continuous; and subjective experience reflects the world around us, not the operations of our own minds. And it follows, of course, that the cycle of thought will, more generally, be imperceptible – we cannot ‘beat time’ to the irregular pulse of the cycle of thought. Conscious awareness tells us of the state of the world (including, of course, our own bodies), not the process by which we perceive it: as ever, our internal narrator wants us to focus on the ‘story’ and to remain as unobtrusive as possible.
According to the cycle of thought viewpoint, our conscious experience is of the meaningful organization of sensory information. If this is right, then talk of being conscious of one’s self is incoherent nonsense – ‘selves’, after all, aren’t part of the sensory world. And all ‘higher’ forms of consciousness (being conscious of being self-conscious; or being conscious of being conscious of being self-conscious), though beloved of some philosophers and psychologists, are nonsense on stilts.
We can be conscious of the sound of the words expressing such thoughts running through our mind, or conscious of the words as they appear on the screen or on the page in front of us as we type. But we are not conscious of the thoughts behind the words, only the words themselves. Still less are we conscious of the ‘mind’ behind the words. The great eighteenth-century Scottish philosopher David Hume put the point with his characteristic elegance: ‘For my part, when I enter most intimately into what I call myself, I always stumble on some particular perception or other, of heat or cold, light or shade, love or hatred, pain or pleasure. I never can catch myself at any time without a perception, and never can observe any thing but the perception.’7
Think about your conscious awareness of the number seven – clearly something, as a mathematical abstraction, that can’t be picked up by our senses. Possibly you have a shadowy mental image of the number, or of arrangements of patterns of seven dots, or perhaps you hear yourself ‘saying’ the word seven. Various properties of the number may come to mind. You may say to yourself ‘it’s my lucky number’, ‘it’s an odd number’, ‘it’s a prime’, and so on. But here what we are conscious of is not the number itself, but sensory impressions pertaining indirectly to the number, such as sensory impressions of the sound of things we might say about the number. The more we reflect, the more peculiar the very idea of being conscious of the number itself seems. We know lots of facts about seven, of course; surely what we are conscious of are not the facts themselves, but sensory impressions – most notably of rather abbreviated snippets of English, running through our minds. Our consciousness of ‘seven’ is really rather fake and second-hand. But none the less, we know lots about ‘seven’: we can count to seven, decide whether there are more or fewer than seven people in a room, run through our seven-times table, and so on.
The same is true, I suggest, of supposed higher-order consciousness. I can hear myself ‘saying’ internally: ‘I know I’m conscious.’ And, for that matter, ‘I must be conscious of being conscious’; and I may perhaps have a vague visual image floating through my mind. But it is these sensory impressions, and their meaningful organization into images and snippets of language, that I am conscious of – and no more.
We can conclude that consciousness is not at all directly connected to belief, knowledge or similar notions. I know that Paris is the capital of France, but I’m not conscious of this fact or any other fact – except in the ersatz sense that I may be conscious of intoning the words ‘Paris is the capital of France’ in my imagination. But then I’m conscious of the words, not the fact – my conscious experience will be different if I formulate the same fact in a different language. Or consider what, if anything, it means to be conscious of the fact that Inspector Lestrade was jealous of Sherlock Holmes – I can be conscious of my inner speech, articulating these words; but I can’t possibly be, in any sense whatever, conscious of the actual people, let alone one’s putative jealousy of the other, because they are not actual people at all, but fictional characters. Similarly, my conscious experience of perceiving a really convincing hologram of an apple might be exactly the same as my conscious experience when confronting a real apple. The sensory input, and my brain’s organization of that sensory input to interpret it as an apple, are identical in both cases. But there is nothing in my conscious experience that signals my real contact with the actual apple, rather than the entirely non-existent, holographic image of an apple. Consciousness is, in this sense, necessarily superficial: it is defined by the interpretations through which we organize sensory experience.
So, except in a rather uninteresting sense, we aren’t really conscious of numbers, apples, people, or anything else – we’re conscious of our interpretations of sensory experience (including inner speech) and nothing more.8
In this light, the tower of levels of consciousness, each built on the last, collapses. It is one more trick played on us by the brain. So we have another sense in which the mind is ‘flatter’ than one might expect: our conscious experience consists of organizations of the surface of our sensory experience, whether conjured up through perception, imagination or memory. We have no subjective experience of ‘deep’ concepts of mathematics, the inner workings of our minds, or, indeed, consciousness itself. We can talk and write about these things; we can express them in symbols and sketches. But we are conscious only of the perceptual properties of these words, symbols and pictures, not of the supposedly shadowy abstract realms themselves. In short, we consciously experience the sensory information, broadly construed (including images generated by our own minds; sensations from inside our bodies, such as pain, feelings of exhaustion or hunger; and, crucially, inner speech). But there is nothing more.
It is tempting to imagine that thoughts can be divided in two as the waterline splits an iceberg: the visible conscious tip and the submerged bulk of the unconscious, vast, hidden and dangerous. Freud and later psychoanalysts saw the unconscious as the hidden force behind the frail and self-deluded conscious mind. Psychologists, psychotherapists and psychiatrists have often suspected there may be two (or more) different types of mental system fighting for control of our behaviour: one or perhaps many unconscious mental systems that are fast, reflexive and automatic; and a deliberative system that is conscious, reflective and slow.9 Neuroscientists have suggested there may be multiple decision-making systems in the brain, at most one of which operates consciously, and which can generate conflicting recommendations concerning how we should think and act.10
But the vision of the iceberg, with its vast dark mass hidden below the water, conceals an important but entirely flawed assumption. In an iceberg, the material that is above and below the waterline is precisely the same – ice is ice, whether deep beneath the waves or sparkling in the sunlight. And, for this reason, it seems only natural that what is hidden can be made visible and what is visible can be made hidden – it is still the same ice whether we lift it from the waters or plunge it into the depths. The metaphor suggests that the very same thought could be either conscious or unconscious – and could jump between the two states. Accordingly, a thought that was previously unconscious might be brought into the light of consciousness (whether through casual introspection, intense soul-searching or years of psychoanalysis). And a thought that was once conscious could sink into our unconscious (through sheer forgetfulness, or perhaps some mysterious psychic process of active repression). To continue the iceberg metaphor, the same story applies not just to individual thoughts but to our thought processes as a whole. Our conscious trains of thought are presumed to be paralleled by shadowy unconscious musings, torments and symbolic interpretations. This unconscious mental activity is supposed to be the same ‘stuff’ as conscious thought – the only difference being that it is submerged below the level of conscious awareness.
From the point of view of the cycle of thought, the iceberg metaphor could scarcely be more misleading. Remember that we have already concluded that we are always conscious of the results of our interpretation of sensory information, and we are never conscious of the process by which these interpretations are created. The division between the conscious and the unconscious does not distinguish between different types of thought. Instead, it is a division within individual thoughts themselves: between the conscious result of our thinking and the unconscious processes that create it.
There are no conscious thoughts and unconscious thoughts; and there are certainly no thoughts slipping in and out of consciousness. There is just one type of thought, and each such thought has two aspects: a conscious read-out, and unconscious processes generating the read-out. And we can have no more conscious access to these brain processes than we can have conscious awareness of the chemistry of digestion or the biophysics of our muscles.
Unconscious thought is a seductive and powerful myth. But the very possibility of unconscious thought clashes with the basic operating principles of the brain: the cooperative computation across billions of neurons, harnessed only to the challenge of the moment.
Prior to Freud, such a conclusion would have seemed natural enough – and the very idea of unconscious thought would have seemed rather paradoxical, because the very idea of thought was tied up with conscious experience. Since Freud, though, we have become so familiar with the idea of the ‘Unconscious’ that we feel an undue attachment to it – any and every unexpected, paradoxical, insightful or self-defeating aspect of thought and behaviour can be attributed to mysterious subterranean forces intruding upon our frail and perhaps slightly foolish conscious selves. If the present argument is correct, there can’t be another mind, system or mode of thought operating under the radar of conscious mental processing – the brain (or, at least, a given network of neurons) can only do one thing at a time.
As we’ve seen, the operation of this cycle of thought is by no means transparent to us: we are only ever conscious of its outputs – the meaningful organizations of sensory information. The flow of conscious experience is a sequence of ‘meanings’, but the processes generating those meanings (and the sensory data and the memories upon which they work) are never directly available to us. And this is not, perhaps, surprising: we can’t introspect how our lungs or stomachs work – why should it be any different for the brain? So rather than imagining that there may be two systems of thought vying for control of our thoughts and actions, we see that there is just one system, striving cycle by cycle to impose meaning on sensory input. The meaningful interpretations are conscious – yielding a world of patterns, objects, colours, voices, words, letters, faces and more; the brain processes through which these interpretations are achieved are no more conscious than any other physiological process.11 Novelists exploit the chatter and imagery running through our heads – but notice that the stream of consciousness of Virginia Woolf’s To the Lighthouse or James Joyce’s Ulysses is hardly an exploration of the innermost workings of the mind. Quite the reverse: the technique displays, at best, a sequence of partial results, workings, intermediate steps – the outputs of successful cycles of thought. These partial steps may sometimes offer useful clues – indeed, the Nobel Prize-winning psychologist, computer scientist, economist and social scientist Herbert Simon (1916–2001) put great stress on the value of analysing ‘think aloud’ data, obtained while people were reasoning or solving problems.12 But they are no more than clues: the process by which the cycle of thought generates ideas, crossword solutions and chess moves – and hence the question of why some ideas ‘pop’ into our minds and others don’t – remains entirely outside the realm of consciousness. After all, we only see the results of our perceptual processes – that is, we see objects, colours and movement, but we have no insight whatever into the calculations the brain went through to present the world to us in this particular way.
It seems, then, that we can at least say something about what flits across our own ‘stream of consciousness’, moment by moment – we engage in introspection to a limited degree, not about the operation of the cycle of thought, but about its successive outputs. Yet reporting even these conscious states can be a hazardous business. The philosopher John Stuart Mill (1806–73) famously remarked: ‘Ask yourself whether you are happy, and you cease to be so.’13 And there is a parallel danger for introspection: ‘Ask yourself what you are thinking, and you cease to think it.’
Our brain is perpetually struggling to organize and make sense of the sensory information to which we are currently attending. We find meaning in Idesawa’s wonderful demonstration (Figure 36) by creating a ‘spiky sphere’ to explain the arrangement of black and white patterns. And that very same drive to find meaning applies, of course, when we are trying to make sense of snippets of conversation, paragraphs of text, or entire plays and novels. Of course, our attempt to make sense of a movie or symphony will proceed in many sequential steps, following the flow of dramatic or musical events as they occur, but also stepping back to ponder their interrelationships and significance.
It is interesting to ponder what is happening when we are in this type of reflective mode, exploring and critically analysing. Looking back on a film, for example, we try to make sense of the plot, and point out real or apparent flaws (‘if she had the key then, why did she need to break in the first time?’); we try to get a grip on the thoughts and motivations of the characters (‘Romeo and Juliet can’t have had more than a romantic infatuation – they hardly knew each other!’); we may connect the setting and action of the film with other films or books (e.g. ‘that scene was straight out of Casablanca’) or real life (‘that’s a total violation of police procedure!’ or ‘a wonderful evocation of 1950s Spain’).
We can draw back further, arguing about whether a particular analysis or critique itself makes sense (‘that’s totally unrealistic’, ‘it’s supposed to be a whodunnit, not a police training film’); and the chatter of analysis and re-analysis, evaluation and re-evaluation, can continue indefinitely. After all, the Mahabharata, and the works of Homer, Dante and Shakespeare, are probably subject to as much critical analysis today as at any time in history. And, in the broadest sense, such discussion concerns what works of literature and art mean: their internal structure, their relation to other pieces of literature and art, to history and society, and to us, living in the twenty-first century.
I suspect that the way we project meaning onto works of literature and art has a lot in common with how we understand events, stories and relationships in our daily lives. As our lives unfold, we continually attempt to make sense of what is happening to us: why we, and the people around us, act as we do; we compare our lives with other lives and, indeed, with lives in art, literature and the movies. And, from time to time, we step back and try to make sense of how the different pieces of our lives fit together (or don’t); and we do the same for other people’s lives, our relationships, the groups we are part of, the projects we are engaged in, and so on. We can debate endlessly and reconsider not just our own lives, but our analysis and evaluations of our lives.
As with art and literature, such evaluations are about meaning: how best to make sense of our lives, and how to make our lives more meaningful in the future. Meaning, in this broad sense, is about fitting together, finding patterns, seeking coherence. We do not merely live our lives, but frequently step back to comment on what happens and why; and we also wonder about the validity of our commentary, and so on, without any definite limit. But at each of these moments, the cycle of thought has a single task: to lock onto sensory (including, crucially, linguistic) information and organize and interpret that information as far as possible.
Each process of interpretation is local and piecemeal – we can’t ‘zoom out’ to consider the meaning of an entire literary work, a whole symphony or an entire relationship. To imagine that we can would, of course, be to fall for yet another variation of the grand illusion: thinking that we can load up, simultaneously, a complex whole in its entirety, when in reality our minds dash from one fragment of experience, commentary or argument to the next. We can, of course, debate art, literature and life endlessly – and each turn of the cycle of thought attempts to impose meaning on those fragments that have gone before.
Our ability to create ‘meaning’ from nothing is beautifully exemplified in games and sports. How could kicking a ball towards one rectangular frame and away from another possibly be a ‘meaningful’ activity? Or knocking a small white ball into a hole in the ground, with as few blows as possible from specially designed sticks? Or propelling a bouncy greenish-yellow ball backwards and forwards over a horizontal net, using a stringed bat? Yet football, golf, tennis and many more games and sports are, perhaps, among the most meaningful ways in which millions of people choose to spend their time. The actions, tasks and challenges involved achieve no higher purpose, but they fit together delightfully when things are going well (everything just ‘works’; each shot sets up the next), and horribly badly when we are playing poorly (each action foils the next).
It is tempting to think, though, that meaning-as-coherence is not enough: that our lives should be guided by some ultimate purpose – something beyond our everyday understanding, or perhaps deep inside our innermost core. Or we may conclude that no such transcendental meaning exists and that human life is no more than a brief, purposeless biochemical agitation at one edge of a vast and lifeless cosmos. I think that this temptation, and its tendency to lead both to hope and to despair, is based on a misunderstanding.
The search for meaning is the object of each cycle of thought; and meaning is about organizing, arranging, creating patterns in and making sense of thoughts, actions, stories, works of art, games and sports. In short, finding meaning is about finding coherence. And coherence is created step by step, one thought at a time; it is never complete, but is continually open to challenge and debate. And this is how it should be: surely no novel, poem or painting, however profound, can be as rich, complex, challenging and as endlessly open to re-evaluation and reinterpretation as an individual human life.