1 D. Dennett, Consciousness Explained (London: Penguin, 1993), p. 68.
2 There are many different versions of this metaphor of introspection as perception of an inner world. We examine our consciences; find (or lose) ourselves; try to learn who we really are, what we really believe or stand for.
3 Those sceptical of common-sense explanations of the mind who have particularly influenced my thinking include Daniel Dennett, Paul Churchland, Patricia Churchland, Gilbert Ryle, Hugo Mercier, James A. Russell and Dan Sperber, among many others. A particularly influential study that cast doubt on the psychological coherence of common-sense explanations of all kinds is: L. Rozenblit and F. Keil (2002), ‘The misunderstood limits of folk science: An illusion of explanatory depth’, Cognitive Science, 26(5): 521–62.
4 Experimental methods relying on introspection, for example those in which people describe their experiences of different perceptual stimuli, were a focus of the very first psychological laboratory, set up in Leipzig by Wilhelm Wundt in 1879. Philosophy and psychology continue to contain strands of phenomenology – where the goal is to try to understand and explore our minds and experience ‘from the inside’. These methods have been, in my view, notably unproductive – phenomenology draws us into the illusion of mental depth, rather than uncovering its existence.
5 Sceptics include behaviourists such as Gilbert Ryle and B. F. Skinner, theorists of direct perception such as J. J. Gibson and Michael Turvey, and philosophers influenced by phenomenology (Hubert Dreyfus). Paul and Patricia Churchland have long argued that everyday ‘folk’ psychology is no more scientifically viable than ‘folk’ physics or biology. Over the years I have argued both in favour of this view (see N. Chater and M. Oaksford (1996), ‘The falsity of folk theories: Implications for psychology and philosophy’, in W. O’Donohue and R. F. Kitchener (eds), The Philosophy of Psychology (London: Sage), pp. 244–56) and (wrongly, I now feel) against it (see, for example, N. Chater (2000), ‘Contrary views: A review of “On the contrary” by Paul and Patricia Churchland’, Studies in History and Philosophy of Biological and Biomedical Sciences, 31: 615–27). The ideas in this book owe a lot to the philosopher Daniel Dennett and his discussion of an ‘instrumentalist’ view of everyday psychological explanation and the nature of conscious experience (D. C. Dennett, The Intentional Stance (Cambridge, MA: MIT Press, 1989) and D. C. Dennett, Consciousness Explained (London: Penguin, 1993)).
6 Some of the ideas in Part Two of this book have close links to joint work with my close friend and colleague Morten Christiansen of Cornell University on how we use and learn language (e.g. Morten H. Christiansen and N. Chater, Creating Language: Integrating Evolution, Acquisition, and Processing (Cambridge, MA: MIT Press, 2016); Morten H. Christiansen and N. Chater (2016), ‘The now-or-never bottleneck: A fundamental constraint on language’, Behavioral and Brain Sciences, 39: e62).
1 After all, given that humans have a common ancestor, we are all Elvis’s nth cousins m times removed, for some numbers n and m. As all life has a common ancestor, we are also rather more distant cousins of pond algae.
2 Mendelsund gives this and many other compelling examples of the astonishing sketchiness of fiction and the vagueness of the imagery that we conjure up when reading. We can, none the less, have the subjective sense of being immersed in another ‘world’ full of sensory richness. P. Mendelsund, What We See When We Read (New York: Vintage Books, 2014).
3 We could, of course, make a parallel, and equally strong, argument for any scientific or mathematical topic, from chemistry, biology, economics and psychology to mathematics and logic.
4 Two particularly sophisticated and influential papers were: J. McCarthy and P. J. Hayes (1969), ‘Some philosophical problems from the standpoint of artificial intelligence’, in B. Meltzer and D. Michie (eds), Machine Intelligence 4 (Edinburgh: Edinburgh University Press, 1969); and P. J. Hayes, ‘The naive physics manifesto’, in D. Michie (ed.), Expert Systems in the Micro-Electronic Age (Edinburgh: Edinburgh University Press, 1979). It is important to stress that artificial intelligence has proceeded primarily not by solving the deep problems of understanding human knowledge, but by strategically skirting around them. The challenges raised by early artificial intelligence remain both hugely important and largely unresolved.
5 My friend and colleague Mike Oaksford and I have called this the fractal nature of common-sense knowledge – each step in a chain of reasoning seems to be just as complex as the whole chain. M. Oaksford and N. Chater, Rationality in an Uncertain World: Essays on the Cognitive Science of Human Reasoning (Abingdon: Psychology Press/Erlbaum (UK), Taylor & Francis, 1998).
6 L. Rozenblit and F. Keil (2002), ‘The misunderstood limits of folk science: An illusion of explanatory depth’, Cognitive Science, 26(5): 521–62. We have the same shallow understanding of complex political issues. Perhaps not entirely surprisingly, people with extreme political views appear to have a particularly shallow understanding: P. M. Fernbach, T. Rogers, C. R. Fox and S. A. Sloman (2013), ‘Political extremism is supported by an illusion of understanding’, Psychological Science, 24(6): 939–46.
7 An aside: where philosophy has mutated into theory – including psychology, probability, logic, decision theory, game theory, and so on – it becomes, like physics, drastically disconnected from its intuitive foundations. The theory will have all sorts of implications that are wildly counter-intuitive, but this is inevitable, because our intuitions are inconsistent. To my mind, one of the spectacular successes of philosophy has been its propensity to ‘spin out’ theories that ultimately transcend mere ‘intuition-matching’ and which, like physics, come to have a life of their own.
8 The project of generative grammar still struggles on. But the prospect of anyone writing down a generative grammar of, say, English, seems ever more remote – and indeed, Chomsky and his followers have drifted ever further from practical engagement with the project, and have resorted to abstract theory and philosophical speculation. In the last couple of decades, a new movement in linguistics – construction grammar (A. E. Goldberg, Constructions at Work (New York: Oxford University Press, 2006); P. W. Culicover and R. Jackendoff, Simpler Syntax (New York: Oxford University Press, 2005)) – has abandoned the ‘grammar-as-theory’ point of view and embraced the piecemeal nature of language wholeheartedly. This viewpoint also fits well with the fact that language is learned, and languages change over time, incrementally ‘piece-by-piece’ rather than undergoing system-wide reorganizations (M. H. Christiansen and N. Chater (2016), ‘The now-or-never bottleneck: A fundamental constraint on language’, Behavioral and Brain Sciences, 39: e62; M. H. Christiansen and N. Chater, Creating Language (Cambridge, MA: MIT Press, 2016)).
9 Multiple systems views have been prevalent, from early psychoanalysis (e.g. Sigmund Freud, Das Ich und das Es (Leipzig, Vienna and Zurich: Internationaler Psycho-analytischer Verlag, 1923); English translation, The Ego and the Id, Joan Riviere (trans.) (London: Hogarth Press and Institute of Psycho-analysis, 1927)) to modern cognitive science (e.g. S. A. Sloman (1996), ‘The empirical case for two systems of reasoning’, Psychological Bulletin, 119: 3–22; J. S. B. Evans (2003), ‘In two minds: Dual-process accounts of reasoning’, Trends in Cognitive Sciences, 7(10): 454–9).
1 A close variant of the triangle on the left-hand side of Figure 1 was later independently discovered by the father and son team of Lionel and Roger Penrose (L. S. Penrose and R. Penrose (1958), ‘Impossible objects: A special type of visual illusion’, British Journal of Psychology, 49(1): 31–3) and their very elegant version is known as the Penrose triangle. Reutersvärd worked entirely intuitively and had no background in geometry, discovering his famous triangle while still at school. The Penroses were both distinguished academics; indeed, Roger Penrose went on to apply geometry with spectacular results in mathematical physics. It strikes me as remarkable that the same astonishing figure could independently be created from such different starting points.
2 The philosopher Richard Rorty famously argued that the ‘mirror of nature’ metaphor marks a fundamental wrong turn in Western thought (R. Rorty, Philosophy and the Mirror of Nature (Princeton, NJ: Princeton University Press, 1979)). Whether or not this is right, viewing the mind as a mirror of nature, creating an internal copy of the outer world, is certainly a wrong turn in understanding perception.
3 Strictly speaking, there are 3D interpretations of the 2D patterns we view as ‘impossible’ objects, but they are bizarre geometric arrangements, which are incompatible with the natural interpretations of parts of the image.
4 http://www.webexhibits.org/causesofcolor/1G.html.
5 http://www.scholarpedia.org/article/File:Resolution.jpg.
6 http://www.bbc.co.uk/news/science-environment-37337778.
7 J. Ninio and K. A. Stevens (2000), ‘Variations on the Hermann grid: an extinction illusion’, Perception, 29(10): 1209–17.
8 K. Rayner and J. H. Bertera (1979), ‘Reading without a fovea’, Science, 206: 468–9; K. Rayner, A. W. Inhoff, R. E. Morrison, M. L. Slowiaczek and J. H. Bertera (1981), ‘Masking of foveal and parafoveal vision during eye fixations in reading’, Journal of Experimental Psychology: Human Perception and Performance, 7(1): 167–79.
9 A. Pollatsek, S. Bolozky, A. D. Well and K. Rayner (1981), ‘Asymmetries in the perceptual span for Israeli readers’, Brain and Language, 14(1): 174–80.
10 G. W. McConkie and K. Rayner (1975), ‘The span of the effective stimulus during a fixation in reading’, Perception & Psychophysics, 17(6), 578–86.
11 E. R. Schotter, B. Angele and K. Rayner (2012), ‘Parafoveal processing in reading’, Attention, Perception, & Psychophysics, 74(1): 5–35; A. Pollatsek, G. E. Raney, L. LaGasse and K. Rayner (1993), ‘The use of information below fixation in reading and visual search’, Canadian Journal of Experimental Psychology, 47(2): 179–200.
12 E. D. Reichle, K. Rayner and A. Pollatsek (2003), ‘The E–Z Reader model of eye-movement control in reading: Comparisons to other models’, Behavioral and Brain Sciences, 26(4): 445–76.
13 By stabilizing the retinal image, so that the eye can no longer scan from place to place, we are drastically reducing our ability to make sense of different parts of the image. However, we can, to a limited degree, shift our attention, even without moving our eyes, so that retinal stabilization dramatically reduces, but doesn’t entirely eliminate, our ability to change which pieces of visual information we lock onto.
14 R. M. Pritchard (1961), ‘Stabilized images on the retina’, Scientific American, 204: 72–8.
15 Here, I’m picking out some highlights from research on stabilized images, and not, of course, attempting to be comprehensive. One still controversial issue is whether the image necessarily fades completely and irretrievably, if it is perfectly stabilized – it is difficult to completely eliminate any ‘wobble’ which might be sufficient for the eye to register change (H. B. Barlow (1963), ‘Slippage of contact lenses and other artefacts in relation to fading and regeneration of supposedly stable retinal images’, Quarterly Journal of Experimental Psychology, 15(1): 36–51; L. E. Arend and G. T. Timberlake (1986), ‘What is psychophysically perfect image stabilization? Do perfectly stabilized images always disappear?’, Journal of the Optical Society of America A, 3(2): 235–41).
16 A. Noë (2002), ‘Is the visual world a grand illusion?’, Journal of Consciousness Studies, 9(5–6): 1–12; D. C. Dennett, ‘“Filling in” versus finding out: A ubiquitous confusion in cognitive science’, in H. L. Pick, Jr, P. van den Broek and D. C. Knill (eds), Cognition: Conceptual and Methodological Issues (Washington DC: American Psychological Association, 1992); D. C. Dennett, Consciousness Explained (London: Penguin Books, 1993).
1 Image (a) from A. L. Yarbus (1967), Eye Movements and Vision (New York: Plenum Press), reprinted by permission; image (b) from Keith Rayner and Monica Castelhano (2007), ‘Eye movements’, Scholarpedia, 2(10): 3649, http://www.scholarpedia.org/article/Eye_movements.
2 J. K. O’Regan and A. Noë (2001), ‘A sensorimotor account of vision and visual consciousness’, Behavioral and Brain Sciences, 24(5): 939–73; R. A. Rensink (2000), ‘Seeing, sensing, and scrutinizing’, Vision Research, 40(10): 1469–87.
3 Reprinted from Brian A. Wandell, Foundations of Vision (Stanford University): https://foundationsofvision.stanford.edu.
4 L. Huang and H. Pashler (2007), ‘A Boolean map theory of visual attention’, Psychological Review, 114(3): 599, Figure 8.
5 If so, then we might expect some interesting effects if colour grids of this kind were stabilized on the retina, e.g. that patterns corresponding to individual colours might be seen, with the rest of the grid entirely invisible. This has not, to my knowledge, been attempted, but it would be a fascinating experiment.
6 Patterns can also be ‘shrink-wrapped’ by sharing properties other than colour – for example, being lines with the same slant, or items which are all moving in synchrony (like a flock of birds).
7 J. Duncan (1980), ‘The locus of interference in the perception of simultaneous stimuli’, Psychological Review, 87(3): 272–300.
8 Huang and Pashler (2007), ‘A Boolean map theory of visual attention’, Figure 10.
9 Note, though, that the perception of the colour of each patch will be influenced by neighbouring patches; indeed, the perceived colour of any individual patch on the image is determined by the comparison of that specific patch with neighbouring patches in a very complex and subtle way (for an early and influential theory, see E. H. Land and J. J. McCann (1971), ‘Lightness and retinex theory’, Journal of the Optical Society of America, 61(1): 1–11). The key, and remarkable, point is that, none the less, the output of this interactive process is sequential: we can only see one colour at a time.
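To give a rough feel for what ‘comparison with neighbouring patches’ could mean computationally, here is a toy sketch in Python – my own illustrative simplification, emphatically not Land and McCann’s actual retinex algorithm: a patch’s apparent lightness is judged relative to the average of its immediate neighbours, so the same luminance can come out ‘light’ in a dark surround and ‘dark’ in a bright one.

```python
# Toy illustration only (not the retinex algorithm itself): judge a patch's
# lightness by its luminance relative to the mean of its immediate neighbours.

def relative_lightness(patches, i):
    """Luminance of patch i divided by the mean luminance of its neighbours."""
    neighbours = [patches[j] for j in (i - 1, i + 1) if 0 <= j < len(patches)]
    return patches[i] / (sum(neighbours) / len(neighbours))

# The same central luminance (50) in a dark versus a bright surround:
dark_surround = [10, 50, 10]
bright_surround = [90, 50, 90]
print(relative_lightness(dark_surround, 1))    # 5.0   -> appears light
print(relative_lightness(bright_surround, 1))  # ~0.56 -> appears dark
```

Real models carry out such comparisons at many scales and propagate them across the whole image; but even this caricature shows why no patch’s colour can be read off in isolation.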
10 D. G. Watson, E. A. Maylor and L. A. Bruce (2005), ‘The efficiency of feature-based subitization and counting’, Journal of Experimental Psychology: Human Perception and Performance, 31(6): 1449.
11 Masud Husain (2008), ‘Hemineglect’, Scholarpedia, 3(2): 3681, http://www.scholarpedia.org/article/Hemineglect.
12 Nigel J. T. Thomas, ‘Mental Imagery’, in the Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.): http://plato.stanford.edu/archives/fall2014/entries/mental-imagery/.
13 The remarkable video of this interaction can be found online at <https://www.youtube.com/watch?v=4odhSq46vtU>.
1 This is the so-called Cathode Ray Tube theory of imagery (see S. M. Kosslyn, Image and Mind (Cambridge, MA: Harvard University Press, 1980)).
2 The illusion that the mind is the stage of an inner theatre is explored by philosopher Daniel Dennett in his book Consciousness Explained. My thinking has been heavily influenced by Zenon Pylyshyn’s long-standing critique of pictorial theories of imagery (Z. W. Pylyshyn (1981), ‘The imagery debate: Analogue media versus tacit knowledge’, Psychological Review, 88(1): 16).
3 G. Hinton (1979), ‘Some demonstrations of the effects of structural descriptions in mental imagery’, Cognitive Science, 3(3): 231–50.
4 J. Wolpe and S. Rachman (1960), ‘Psychoanalytic “evidence”: A critique based on Freud’s case of little Hans’, Journal of Nervous and Mental Disease, 131(2): 135–48.
5 Wolpe and Rachman (1960), ‘Psychoanalytic “evidence”: A critique based on Freud’s case of little Hans’.
6 S. Freud, ‘Analysis of a phobia in a five-year-old boy (“Little Hans”)’ (1909), Case Histories I, Vol. 8, Penguin Freud Library (London: Penguin Books, 1977).
7 Wolpe and Rachman (1960), ‘Psychoanalytic “evidence”: A critique based on Freud’s case of little Hans’, quoting Freud.
1 http://www.elementsofcinema.com/editing/kuleshov-effect.html.
2 See online at <http://www.imdb.com/name/nm0474487/bio>.
3 See online at <https://www.youtube.com/watch?v=DGA6rCOyTh4>.
4 L. F. Barrett, K. A. Lindquist and M. Gendron (2007), ‘Language as context for the perception of emotion’, Trends in Cognitive Sciences, 11(8): 327–32. Reprinted by permission; original photo Doug Mills/New York Times/Redux.
5 http://plato.stanford.edu/entries/relativism/supplement1.html.
6 W. James, The Principles of Psychology (1890), 2 vols (New York: Dover Publications, 1950).
7 J. A. Russell (2003), ‘Core affect and the psychological construction of emotion’, Psychological Review, 110(1): 145; J. A. Russell (1980), ‘A circumplex model of affect’, Journal of Personality and Social Psychology, 39(6): 1161.
8 P. Briñol and R. E. Petty (2003), ‘Overt head movements and persuasion: A self-validation analysis’, Journal of Personality and Social Psychology, 84(6): 1123–39.
9 Briñol and Petty (2003) explain their results using a different account, which they call self-validation theory. They interpret the nodding as ‘validating’ one’s own thoughts (i.e. one’s internal monologue of ‘this is nonsense, total nonsense!’ when given the unpersuasive message), rather than affirming the message itself. Experimentally splitting apart these approaches is an interesting challenge.
10 D. G. Dutton and A. P. Aron (1974), ‘Some evidence for heightened sexual attraction under conditions of high anxiety’, Journal of Personality and Social Psychology, 30(4): 510.
11 B. Russell, The Autobiography of Bertrand Russell (Boston, MA: Little, Brown & Co., 1951), p. 222.
1 Wikipedia: http://upload.wikimedia.org/wikipedia/commons/6/60/Corpus_callosum.png.
2 M. S. Gazzaniga (2000), ‘Cerebral specialization and interhemispheric communication: Does the corpus callosum enable the human condition?’, Brain, 123(7): 1293–326.
3 L. Hall, T. Strandberg, P. Pärnamets, A. Lind, B. Tärning and P. Johansson (2013), ‘How the polls can be both spot on and dead wrong: Using choice blindness to shift political attitudes and voter intentions’, PLoS ONE 8(4): e60554. doi:10.1371/journal.pone.0060554.
4 P. Johansson, L. Hall, S. Sikström and A. Olsson (2005), ‘Failure to detect mismatches between intention and outcome in a simple decision task’, Science, 310(5745): 116–19. Reprinted by permission.
5 P. Johansson, L. Hall, B. Tärning, S. Sikström and N. Chater (2013), ‘Choice blindness and preference change: You will like this paper better if you (believe you) chose to read it!’, Journal of Behavioral Decision Making, 27(3): 281–9.
6 T. J. Carter, M. J. Ferguson and R. R. Hassin (2011), ‘A single exposure to the American flag shifts support toward Republicanism up to 8 months later’, Psychological Science, 22(8): 1011–18.
7 E. Shafir (1993), ‘Choosing versus rejecting: Why some options are both better and worse than others’, Memory & Cognition, 21(4): 546–56; E. Shafir, I. Simonson and A. Tversky (1993), ‘Reason-based choice’, Cognition, 49(1): 11–36.
8 K. Tsetsos, N. Chater and M. Usher (2012), ‘Salience driven value integration explains decision biases and preference reversal’, Proceedings of the National Academy of Sciences, 109(24): 9659–64.
9 Tsetsos, Chater and Usher (2012), ‘Salience driven value integration explains decision biases and preference reversal’.
10 The literature is vast. Some classic references include: D. Kahneman and A. Tversky, Choices, Values, and Frames (Cambridge, UK: Cambridge University Press, 2000); C. F. Camerer, G. Loewenstein and M. Rabin (eds), Advances in Behavioral Economics (Princeton, NJ: Princeton University Press, 2011); Z. Kunda, Social Cognition: Making Sense of People (Cambridge, MA: MIT Press, 1999).
11 P. J. Schoemaker (1990), ‘Are risk-attitudes related across domains and response modes?’, Management Science, 36(12): 1451–63; I. Vlaev, N. Chater and N. Stewart (2009), ‘Dimensionality of risk perception: Factors affecting consumer understanding and evaluation of financial risk’, Journal of Behavioral Finance, 10(3): 158–81.
12 E. U. Weber, A. R. Blais and N. E. Betz (2002), ‘A domain-specific risk-attitude scale: Measuring risk perceptions and risk behaviors’, Journal of Behavioral Decision Making, 15(4): 263–90.
13 This ‘constructive’ view of preferences (as created in the moment of questioning) has been persuasively advocated for several decades (P. Slovic (1995), ‘The construction of preference’, American Psychologist, 50(5): 364). Many economists and psychologists have not, though, taken on the full implications of this viewpoint, imagining that there is still some ‘deep’ and stable underlying preference that is merely distorted by the particular measurement method.
1 A classic discussion is J. A. Feldman and D. H. Ballard (1982), ‘Connectionist models and their properties’, Cognitive Science, 6(3): 205–54.
2 This connectionist or ‘neural network’ model of computation has been a rival to conventional ‘digital’ computers since the 1940s (see W. S. McCulloch and W. Pitts (1943), ‘A logical calculus of the ideas immanent in nervous activity’, Bulletin of Mathematical Biophysics, 5(4): 115–33) and exploded into psychology and cognitive science with books including G. E. Hinton and J. A. Anderson, Parallel Models of Associative Memory (Hillsdale, NJ: Erlbaum, 1981) and J. L. McClelland, D. E. Rumelhart and the PDP Research Group, Parallel Distributed Processing, 2 vols (Cambridge, MA: MIT Press, 1986). State-of-the-art machine-learning now extensively uses neural networks – although, ironically, implemented in conventional digital computers for reasons of practical convenience. Building brain-like hardware is currently just too difficult, and the resulting machines too inflexible.
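To make the contrast with ‘digital’ computation concrete, here is a minimal Python sketch of the kind of unit McCulloch and Pitts described – a ‘neuron’ that fires when the weighted sum of its inputs reaches a threshold. The particular weights and thresholds are illustrative choices of mine, not taken from the 1943 paper:

```python
# A toy McCulloch-Pitts-style unit: it fires (outputs 1) when the weighted
# sum of its binary inputs reaches a threshold. The weights and thresholds
# below are illustrative choices, not reconstructions of the 1943 paper.

def unit(inputs, weights, threshold):
    """Return 1 if the weighted sum of the inputs reaches the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def AND(a, b):
    return unit([a, b], weights=[1, 1], threshold=2)

def OR(a, b):
    return unit([a, b], weights=[1, 1], threshold=1)

def NOT(a):
    # An inhibitory connection: a negative weight with a zero threshold.
    return unit([a], weights=[-1], threshold=0)

# Logical circuits can be wired up from such units, e.g. exclusive-or:
def XOR(a, b):
    return AND(OR(a, b), NOT(AND(a, b)))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", XOR(a, b))
```

McCulloch and Pitts’s point was that networks of such threshold units can compute any logical function; modern neural networks replace the hard threshold with a smooth function and learn their weights from data rather than having them wired by hand.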
3 While the brain is interconnected into something close to a single network, this isn’t quite the whole story. As with a PC, the brain seems to have some specialized hardware for particular problems, such as the ‘low-level’ processing of images and sounds and other sensory inputs, and for basic movement control. And perhaps there are somewhat independent networks specialized for other tasks too (e.g. processing faces, words and speech sounds). The questions of which ‘special-purpose’ machinery the brain develops, whether such machinery is built in or learned and, crucially, the degree to which such networks are ‘sealed off’ from interference from the rest of the brain, are all of great importance.
4 For a recent review, see C. Koch, M. Massimini, M. Boly and G. Tononi (2016), ‘Neural correlates of consciousness: progress and problems’, Nature Reviews Neuroscience, 17(5): 307–21.
5 W. Penfield and H. H. Jasper, Epilepsy and the Functional Anatomy of the Human Brain (Boston, MA: Little, Brown, 1954).
6 Reprinted by permission from B. Merker (2007), ‘Consciousness without a cerebral cortex: A challenge for neuroscience and medicine’, Behavioral and Brain Sciences, 30(1): 63–81; redrawn from figures VI-2, XIII-2 and XVIII-7 in Penfield and Jasper, Epilepsy and the Functional Anatomy of the Human Brain.
7 B. Merker (2007), ‘Consciousness without a cerebral cortex: A challenge for neuroscience and medicine’, Behavioral and Brain Sciences, 30: 63–134.
8 G. Moruzzi and H. W. Magoun (1949), ‘Brain stem reticular formation and activation of the EEG’, Electroencephalography and Clinical Neurophysiology, 1(4): 455–73.
9 Psychologists and neuroscientists will recognize these ideas as drawing on a range of prior ideas, from the emphasis on organization in Gestalt psychology and Bartlett’s ‘effort after meaning’ in human memory, to Ulric Neisser’s perceptual cycle, the vast range of experiments on the limits of attention, to O’Regan and Noë’s theory of consciousness, to the astonishing results from Wilder Penfield’s early experiments in brain surgery and Björn Merker’s theorizing about the central role of ‘deep’ (sub-cortical) brain structures in conscious experience. My own attempt to lock onto, and organize, these and other findings and ideas into a cohesive pattern probably doesn’t correspond precisely to any previous theory, though it has strong resemblances to many.
10 Indeed, precisely because we see only the stable, meaningful world, and have no awareness whatever of the vastly complex calculations our brain is engaged in, newcomers to psychology and neuroscience are often surprised that the brain even needs to make such calculations. It is easy to imagine that the world merely presents itself, fully interpreted, to the eye and ear. Yet the opposite is the case: about half of the brain is dedicated, full-time, to what is fairly uncontroversially agreed to be perceptual analysis. But, as we shall see, the reach of perception may be greater still.
11 The question of whether we have so-called imageless thoughts was hugely controversial early in the history of psychology. Otto Külpe (1862–1915) and his students at the University of Würzburg famously reported that they experienced ineffable and indescribable states of awareness when thinking about abstract concepts. These mysterious experiences, supposedly lacking any sensory qualities, were viewed as of great theoretical significance by Külpe. Other early psychologists, including the British psychologist Edward Titchener (1867–1927), who had studied in Germany and set up a laboratory at Cornell University in upstate New York, reported that they had no such experiences. Perhaps remarkably, the resulting transatlantic controversy shook the psychological world. I, for one, have no idea what it would be like if I did have an impalpable non-sensory experience, any more than I know what it would be like to see a square triangle.
1 The role of alarm systems in conscious experience has been particularly highlighted by Kevin O’Regan’s concept of the ‘grabbiness’ of perception – that is, if something changes in the image, it grabs your attention. J. K. O’Regan, Why Red Doesn’t Sound Like a Bell: Understanding the Feel of Consciousness (Oxford: Oxford University Press, 2011).
2 Redrawn with permission from A. Mack and I. Rock (1999), ‘Inattentional blindness’, Psyche, 5(3): Figure 2.
3 J. S. Macdonald and N. Lavie (2011), ‘Visual perceptual load induces inattentional deafness’, Attention, Perception, & Psychophysics, 73(6): 1780–89.
4 Redrawn with permission from Mack and Rock (1999), ‘Inattentional blindness’, Figure 3.
5 Inattentional blindness and deafness require going ‘under the radar’ of the alarm system – a bright flash or a loud bang would surely be detected, however carefully we are focusing on the central cross, because the alerting mechanisms will drag our focus from the cross to the unexpected, rather shocking, stimulus. But this is not a case of locking onto two sets of information – the shock of the flash (or, equally, a loud bang) would disengage our existing visual analysis of the arms of the cross and, we would presume, dramatically reduce the accuracy of our judgements concerning which arm is longer.
6 Reprinted with permission from R. F. Haines (1991), ‘A breakdown in simultaneous information processing’, in G. Obrecht and L. W. Stark (eds), Presbyopia Research (Boston, MA: Springer), pp. 171–5.
7 U. Neisser, ‘The control of information pickup in selective looking’, in A. D. Pick (ed.), Perception and its Development: A Tribute to Eleanor J. Gibson (Hillsdale, NJ: Lawrence Erlbaum Associates, 1979), pp. 201–19.
8 A wonderful update of this study, where the woman with the umbrella is replaced by a person in a gorilla suit, became something of a YouTube hit. D. J. Simons and C. F. Chabris (1999), ‘Gorillas in our midst: Sustained inattentional blindness for dynamic events’, Perception, 28(9): 1059–74.
9 The possibility that many objects, faces and words are analysed at a ‘deep’ level, but only one or so is then selected by attentional resources is the ‘late-selection’ theory of attention (J. Deutsch and D. Deutsch (1963), ‘Attention: Some theoretical considerations’, Psychological Review, 70(1): 80).
10 This does not mean that the brain processes only pieces of information relevant to the object, word, face or pattern that is the current focus of attention. Indeed, some amount of processing of irrelevant information is inevitable, because the brain can’t always know which new pieces of information are part of the current ‘jigsaw’. This point is demonstrated elegantly in experiments in which people listen to different voices ‘speaking’ into left and right headphones. Instructed to listen to, and immediately repeat, the voice in the left ear, people have almost no idea what the other voice is saying (D. E. Broadbent, Perception and Communication (Oxford: Oxford University Press, 1958); N. P. Moray (1959), ‘Attention in dichotic listening: Affective cues and the influence of instructions’, Quarterly Journal of Experimental Psychology, 11: 56–60). For example, they can fail to notice that the unattended voice is speaking in a foreign language or repeating a single word. But suppose the messages abruptly switch ears – so that the natural continuation of the sentence heard in the left ear now continues in the right ear (A. Treisman (1960), ‘Contextual cues in selective listening’, Quarterly Journal of Experimental Psychology, 12: 242–8). In this case people frequently ‘follow’ the switched message to the other ear. As the brain is continually searching for new ‘data’ that matches as well as possible with its existing ‘jigsaw’, when new ‘jigsaw pieces’ appear to fit unexpectedly well with the current jigsaw, the brain ‘grabs’ hold of them. Yet the cycle of thought is rigidly sequential: we can only fit new information into one mental jigsaw at a time.
11 Of course, the brain has to figure out which pieces of information are meaningfully grouped together. Even if we are solving one jigsaw at a time, we may need to make some sense of other irrelevant jigsaw pieces in order to reject them – for example, if we are working on a jigsaw containing a rural scene, spotting that a jigsaw piece or pieces make up a fragment of an aircraft engine might lead us to put them aside. In the same way, the brain imposes just enough meaning on information irrelevant to the pattern it is constructing to reject it as irrelevant.
12 Indeed, the most popular model of how eye movements and reading work, the E–Z Reader model, assumes that attention shifts completely sequentially, from one word to the next, with no overlaps – even though there would seem to be huge advantages to being able to read many words simultaneously. Attention locks on and makes sense of one word after the next, exemplifying the cycle of thought viewpoint (see, for example, E. D. Reichle, K. Rayner and A. Pollatsek (2003), ‘The E–Z Reader model of eye-movement control in reading: Comparisons to other models’, Behavioral and Brain Sciences, 26(4): 445–76).
13 G. Rees, C. Russell, C. D. Frith and J. Driver (1999), ‘Inattentional blindness versus inattentional amnesia for fixated but ignored words’, Science, 286(5449): 2504–7.
14 Some primitive aspects of the perceptual world may, though, be grasped without the need for attention. Indeed, such processing seems to be a prerequisite for attentional processes to be able to select and lock onto specific aspects of the visual input or stream of sounds. We shall not consider here the vexed question of what information the brain can extract without engaging the cycle of thought – but note that it will not include describing the world as consisting of ‘meaningful’ items such as words, faces or objects, but rather will be closely tied to features of the sensory input itself (e.g. detecting bright patches, textures, or edges – although none of these is uncontroversially pre-attentive). See, for example, L. G. Appelbaum and A. M. Norcia (2009), ‘Attentive and pre-attentive aspects of figural processing’, Journal of Vision, 9(11): 1–12; Z. Li (2000), ‘Pre-attentive segmentation in the primary visual cortex’, Spatial Vision, 13(1): 25–50.
15 D. A. Allport, B. Antonis and P. Reynolds (1972), ‘On the division of attention: A disproof of the single channel hypothesis’, Quarterly Journal of Experimental Psychology, 24(2): 225–35.
16 L. H. Shaffer (1972), ‘Limits of human attention’, New Scientist, 9 November: 340–41; L. H. Shaffer, ‘Multiple attention in continuous verbal tasks’, in P. M. A. Rabbitt and S. Dornic (eds), Attention and Performance V (London: Academic Press, 1975).
1 H. Poincaré, ‘Mathematical creation’, in H. Poincaré, The Foundations of Science (New York: Science Press, 1913).
2 Paul Hindemith, A Composer’s World: Horizons and Limitations (Cambridge, MA: Harvard University Press, 1953), p. 50; online at <http://www.alejandrocasales.com/teoria/sound/composers_world.pdf>.
3 I challenge the reader to listen to a short piano piece such as Hindemith’s fascinating Piano Sonata No. 3 (Fugue) and to believe that its astonishing intricacies could have been conceived, except in the vaguest and most general terms, in any sudden flash of insight. Indeed, it seems mysterious how Hindemith could have convinced himself that this dazzling web of notes, extending over several minutes, arose fully formed in his consciousness in a single moment. We shall see later that Hindemith did not intend to be taken entirely literally.
4 Left image: R. L. Gregory (2005), The Medawar Lecture 2001: ‘Knowledge for vision: vision for knowledge’, Philosophical Transactions of the Royal Society of London B, 360: 1231–51; the right image is by psychologist Karl Dallenbach.
5 U. N. Sio and T. C. Ormerod (2009), ‘Does incubation enhance problem solving? A meta-analytic review’, Psychological Bulletin, 135(1): 94.
6 But might unconscious mental work occur when we are asleep, when the brain is otherwise unoccupied? This is very unlikely: the coherent, flowing brain waves that overtake our brains through most of the night are utterly unlike the brainwaves indicative of intensive mental activity – the brain is, after all, resting. And the short bursts of dream sleep, though much more similar to waking brain activity, are taken up with other things: namely, creating the strange and jumbled images and stories of our dreams.
7 Hindemith, A Composer’s World: Horizons and Limitations, p. 51.
8 J. Levy, H. Pashler and E. Boer (2006), ‘Central interference in driving: Is there any stopping the psychological refractory period?’, Psychological Science, 17(3): 228–35.
9 Psychologists typically use ‘detection’ for tasks which require determining whether a ‘signal’ (a flash, a beep, or an aircraft on a radar screen) is present or not. This task is marginally more complex, requiring categorization into one of two categories (one event or two).
10 J. Levy and H. Pashler (2008), ‘Task prioritization in multitasking during driving: Opportunity to abort a concurrent task does not insulate braking responses from dual-task slowing’, Applied Cognitive Psychology, 22: 507–25.
11 E. A. Maylor, N. Chater and G. V. Jones (2001), ‘Searching for two things at once: Evidence of exclusivity in semantic and autobiographical memory retrieval’, Memory & Cognition, 29(8): 1185–95.
1 Reprinted with permission from M. Idesawa (1991), ‘Perception of 3-D illusory surface with binocular viewing’, Japanese Journal of Applied Physics, 30(4B), L751.
2 We will see later that the brain may operate by extrapolating from vast batteries of examples, rather than working with general principles, whether geometric or not. However, this point, while crucially important, does not affect the present argument.
3 Beautiful theoretical work has analysed how this process of finding the best interpretation of the available data might work, and there are many elegant proposals for ‘idealized’ versions of the nervous system (and some of these proposals can be shown to carry out powerful computations). But the details of how the brain solves the problem are by no means resolved (see J. J. Hopfield (1982), ‘Neural networks and physical systems with emergent collective computational abilities’, Proceedings of the National Academy of Sciences of the United States of America, 79(8): 2554–8). Importantly, there are powerful theoretical ideas concerning how such networks learn the constraints that govern the external world from experience (e.g. Y. LeCun, Y. Bengio and G. Hinton (2015), ‘Deep learning’, Nature, 521(7553): 436–44).
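As a rough illustration of this style of ‘cooperative computation’, here is a minimal Hopfield-style sketch in Python (the stored pattern and the amount of corruption are invented for the demonstration): each unit repeatedly updates to agree with the weighted ‘votes’ of the others, and the network relaxes into the stored interpretation that best satisfies the constraints.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store one 'interpretation' (a +/-1 pattern invented for the demo) using
# Hopfield's Hebbian prescription: the weight between two units is the
# product of their values in the stored pattern.
pattern = rng.choice([-1, 1], size=25)
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)  # no self-connections

# Corrupt the pattern: the 'sensory data' only partly satisfies the constraints.
state = pattern.copy()
flipped = rng.choice(len(state), size=7, replace=False)
state[flipped] *= -1

# Asynchronous updates: each unit flips to agree with the weighted 'votes'
# of the other units. Every update lowers (or preserves) the network's
# energy, so the state settles into the best-fitting stored interpretation.
for _ in range(10):
    for i in rng.permutation(len(state)):
        state[i] = 1 if W[i] @ state >= 0 else -1

print("recovered the stored pattern:", np.array_equal(state, pattern))
```

With a single stored pattern, recovery from modest corruption is guaranteed; Hopfield observed that a network of N such units can reliably store only around 0.15N patterns – one indication of how far these idealized proposals fall short of explaining how the brain solves the problem.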
4 In a digital computer, though, cooperative computation across the entire web of constraints is not so straightforward – more sequential methods of searching the web of constraints are often used instead.
5 The idea of ‘direct’ perception, which has been much discussed in psychology, is appealing, I think, precisely because we are only ever aware of the output of the cycle of thought: we are oblivious to the calculations involved, and the cycle of thought operates so quickly that it generates the illusion that our conscious experience must be in immediate contact with reality.
6 H. von Helmholtz, Handbuch der physiologischen Optik, vol. 3 (Leipzig: Voss, 1867). Quotations are from the English translation, Treatise on Physiological Optics (1910) (Washington DC: The Optical Society of America, 1924–5).
7 D. Hume (1738–40), A Treatise of Human Nature: Book I. Of the understanding, Part IV. Of the sceptical and other systems of philosophy, Section VI. Of personal identity.
8 From this point of view, the question of what we are thinking about needs to be kept strictly separate from the issue of consciousness. Two people might both hear an identical snippet of conversation, but in one case, the speakers are talking about a real couple who, by sheer coincidence, are called Cathy and Heathcliff; in another, the speakers are members of a book group, discussing Wuthering Heights. What might be an identical conscious experience of thinking: ‘Poor Cathy!’ is a thought about a real person in the first case (though the hearer has no clue who this person is); in the second case, it is a thought about a fictional character (though the hearer may have no clue which fictional character, or even that she is a fictional character). The nature of consciousness and of meaning are both fascinating and profound puzzles, but they are very distinct puzzles.
9 For example, dual process theories of reasoning, decision-making and social cognition take this viewpoint (see, for example, J. S. B. Evans and K. E. Frankish, In Two Minds: Dual Processes and Beyond (Oxford: Oxford University Press, 2009); S. A. Sloman (1996), ‘The empirical case for two systems of reasoning’, Psychological Bulletin, 119(1): 3–22). The Nobel Prize-winning psychologist Daniel Kahneman is often seen as exemplifying this viewpoint (e.g. D. Kahneman, Thinking, Fast and Slow (London: Penguin, 2011)), although his perspective is rather more subtle.
10 For example, P. Dayan, ‘The role of value systems in decision making’, in C. Engel and W. Singer (eds), Better Than Conscious? Decision Making, the Human Mind, and Implications for Institutions (Cambridge, MA: MIT Press, 2008), pp. 51–70.
11 There is a small industry in psychology attempting to demonstrate the existence of ‘unconscious’ influences on our actions (see, for example, the excellent review by B. R. Newell and D. R. Shanks (2014), ‘Unconscious influences on decision making: A critical review’, Behavioral and Brain Sciences, 37(1): 1–19). From the present point of view, this hardly needs demonstrating: we are only ever conscious of the outputs of thought and our speculations about their origins are always mere confabulation. A consequence of this viewpoint is that any demonstrations of the ‘unconscious influences’ on thought do not imply the existence of hidden unconscious pathways to decisions and actions that compete with conscious decision-making processes (although this has been a popular conclusion to draw: see A. Dijksterhuis and L. F. Nordgren (2006), ‘A theory of unconscious thought’, Perspectives on Psychological Science 1: 95–109). On the contrary, such effects are entirely consistent with the cycle-of-thought viewpoint: there is just one engine of thought, the results of which are always conscious, and the origins of which are never conscious.
Must we conclude that each of us is completely oblivious to the processes which generate our thoughts and behaviour? Within a single cycle of thought, I think this is right. But conscious deliberation – pondering different lines of attack on a crossword clue, planning ahead in chess, weighing up advantages and disadvantages of a course of action – involves many cycles of thought. And each cycle will generate conscious awareness of some meaningful organization (a candidate word for our crossword clue, an image of a hypothetical chess move, a snippet of language, a pro or a con). The output of each cycle will feed into the next – if we are to have a stream of coherent thought rather than an aimless daydream.
12 For example, K. A. Ericsson and H. A. Simon (1980), ‘Verbal reports as data’, Psychological Review, 87(3): 215–51.
13 J. S. Mill, The Autobiography (1873).
1 For analysis of the psychology of chess, see classic studies by A. D. de Groot, Het denken van de schaker [The thought of the chess player] (Amsterdam: North-Holland Publishing Co., 1946); updated translation published as Thought and Choice in Chess (The Hague: Mouton, 1965; corrected second edition published in 1978); W. G. Chase and H. A. Simon (1973), ‘Perception in chess’, Cognitive Psychology, 4: 55–81; and more recently, F. Gobet and H. A. Simon (1996), ‘Recall of rapidly presented random chess positions is a function of skill’, Psychonomic Bulletin and Review, 3(2): 159–63.
2 J. Capablanca, Chess Fundamentals (New York: Harcourt, Brace and Company, 1921).
3 For examples, see http://justsomething.co/the-50-funniest-faces-in-everyday-objects/. The third photo is reprinted by permission of Ruth E. Kaiser of the Spontaneous Smiley Face Project.
4 This viewpoint ties up nicely with the picture of brain organization described in Chapter 7. Sub-cortical brain structures are the crucible of perceptual interpretation, serving as gateways to the senses, but they also have bi-directional projections into the entire cortex. This type of two-way link between the current perceptual interpretation and the past stock of memory traces represented in the cortex is just what is required to support a parallel process of resonance.
5 M. H. Christiansen and N. Chater (2016), ‘The now-or-never bottleneck: A fundamental constraint on language’, Behavioral and Brain Sciences, 39: e62; M. H. Christiansen and N. Chater, Creating Language (Cambridge, MA: MIT Press, 2016).
6 http://restlessmindboosters.blogspot.co.uk/2011/06/tangram-construcao.html.
7 The idea that human knowledge is rooted in precedents or ‘cases’ has a long tradition in, among other fields, artificial intelligence (e.g. J. Kolodner, Case-Based Reasoning (San Mateo, CA: Morgan Kaufmann, 1993)), machine-learning and statistics (e.g. T. Cover and P. Hart (1967), ‘Nearest neighbor pattern classification’, IEEE Transactions on Information Theory, 13(1): 21–7) and psychology (e.g. G. D. Logan (1988), ‘Toward an instance theory of automatization’, Psychological Review, 95(4): 492). Principles are also important, but they are invented post-hoc and then themselves become fresh precedents to be amended and overturned, rather than rigid rules to be applied relentlessly.
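A sketch makes the precedent-based flavour of Cover and Hart’s proposal vivid: a new case is classified by copying the label of the most similar stored case, with no general rule ever being formulated. The cases and labels below are invented purely for illustration.

```python
import math

# Stored precedents: (features, label). The cases and labels are invented
# purely for illustration.
precedents = [
    ((1.0, 1.2), "cat"),
    ((0.9, 1.0), "cat"),
    ((3.1, 2.8), "dog"),
    ((3.3, 3.0), "dog"),
]

def classify(case):
    """Label a new case by its single nearest stored precedent (1-NN)."""
    nearest = min(precedents, key=lambda p: math.dist(case, p[0]))
    return nearest[1]

print(classify((1.1, 1.1)))  # -> cat
print(classify((3.0, 3.0)))  # -> dog
```

All of the ‘knowledge’ here lives in the stored precedents and the similarity measure; any ‘principle’ one might extract would be a post-hoc summary of the cases – which is just the point of the note above.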
1 C. M. Mooney (1957), ‘Age in the development of closure ability in children’, Canadian Journal of Psychology, 11(4): 219–26.
2 Mooney, ‘Age in the development of closure ability in children’, 219.
3 It is possible that such memory storage is not completely immutable. In my experience, though, one moment of ‘insight’ into an image does appear to be enough to last a lifetime.
4 G. Lakoff and M. Johnson, Metaphors We Live By (Chicago: University of Chicago Press, 1980).
5 Almost certainly, this is something of an over-simplification. If there are some people who are good at finding answers we all agree with, then we may trust them to define the answers for more tricky problems, which leave most of us flummoxed. This is how things work in lots of areas, of course – we trust mathematicians or literary critics more than ourselves to work out what is a really exciting mathematical breakthrough or a landmark novel. And perhaps we trust these ‘experts’ (if at all) because they can demonstrate their competence at things we do all know something about. So maybe we should give more weight to judgements of the ‘right answer’ by people who do well in IQ tests.
6 The spectacular successes of contemporary artificial intelligence, which works by incredibly memory-intensive methods, have been made possible by major advances in computer algorithms and by exponential growth in computer memory, computing power and the availability of massive quantities of data. These successes will, I believe, change our lives fundamentally, but they will do so by assisting and enhancing the human mind, rather than replacing it. It is telling, I suspect, that in large areas of mathematics the computer is a powerful and sometimes essential tool, but almost no interesting mathematical results have been discovered automatically; and, indeed, most mathematics is still done, more or less, with a pen and paper. The elasticity of the human imagination has, as yet, no computational parallel.
7 Lakoff and Johnson, Metaphors We Live By; D. R. Hofstadter, Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought (New York: Basic Books, 1995).