NEUROETHICS offers unprecedented opportunities as well as challenges. The challenges are obvious enough, and stem from the range of difficult ethical issues which confront us as neuroethicists. The opportunities arise from the fact that as we come to understand our targets, we also come to a better understanding of our tools: in coming to the kind of understanding of the mind we need to make progress on ethical problems, we also come to better understand the strengths and limitations of the ways in which we think about these problems. Because the target and the tools of ethical investigation are, for the neuroethicist, broadly the same, neuroethics is alone among the branches of applied ethics in giving us the opportunity for a profound exploration of the nature of ethical thought itself.

Because neuroethics is as much a theoretical endeavor as a practical undertaking—a practical undertaking which requires and leads to theoretical advances—progress on its subject matter requires reflection not only on issues in ethics, but also on much more abstract and apparently abstruse questions. Questions concerning the nature of consciousness, of personal identity, free will, and so on, are all grist for the neuroethical mill. One such apparently esoteric question concerns the location of the mind. In this chapter, I will argue that the debate over the mind's location bears centrally on neuroethics. Moreover, I will argue that this debate is significant for neuroethics no matter how it turns out. Whether the best interpretation of the facts to which proponents of the extended mind appeal is that the mind is genuinely extended, or merely embedded, reflection on these facts, which are not themselves in dispute, will show that the domain of neuroethics extends to all the processes and mechanisms subserving cognition, rather than just those processes and mechanisms internal to the skull. Once this fact is recognized, I claim, both the scope of neuroethics and some of its characteristic concerns will be transformed.


One of the perennial concerns of philosophers of mind is the relationship between minds and brains. Descartes famously argued that minds and matter (and hence brains) are composed of different kinds of substance. According to Cartesian dualism, the mind is categorically different to the brain. Substance dualism is no longer considered a respectable philosophical position today. Almost all philosophers are monists, believing there is only one kind of substance in the universe. If there is only one kind of substance, then everything substantial is composed of it; it follows that mind and brain are made of the same kind of stuff (it must be stressed, however, that it does not follow from the claim that mind and brain are composed of the same stuff that they have the same kinds of properties). This makes the identity thesis attractive. On the identity thesis, minds just are (appropriately functioning) brains.

To the question “where is the mind?” the identity thesis answers by pointing to the brain. In all but the most unusual cases (science fiction cases involving brains in vats, for instance), the mind is to be found inside the skull of the person whose mind it is. The extended mind hypothesis holds that the mind is not to be found exclusively within the skull, though proponents concede that (again, in all but the most unusual cases) the brain is a necessary and especially significant core part of the mind. Instead, it holds that the mind extends beyond the skull, and even beyond the skin, of the person whose mind it is, out into the world.

This is sometimes put in terms of a distinction between the contents and the vehicles of minds (Hurley 1998). Mental states have content, where the content of a mental state can be expressed by “that” statements. Mental states include beliefs, such as my belief that I am typing, desires, such as my desire that I write well, intentions, such as my intention that I go home at 5 o’clock, and so on (these mental states need not be conscious, of course). Mental states need vehicles. On the identity thesis, the vehicles of mental states are neural states. On the extended mind hypothesis, the vehicles of mental states include many things beside neural states.

The extended mind hypothesis was first explicitly defended in a short but influential article written by Andy Clark and David Chalmers (1998). In this paper, they introduced a pair of agents, Inga and Otto, who live in New York City. Inga and Otto both wish to go to see a new exhibition at the Museum of Modern Art. Inga recalls that the museum is situated on 53rd Street and sets out to visit it. Otto, however, suffers from Alzheimer’s disease and cannot recall the location of the museum in just the way that Inga can. However, Otto carries a handy notebook which he consults regularly. He pulls out his notebook and looks up the location of the museum. Having retrieved the information from his notebook, he sets out for the exhibition.

Clark and Chalmers claim that there is not (or at least need not be) any relevant difference between the way in which Otto and Inga retrieve the information that the Museum of Modern Art is on 53rd Street. Prior to recall, both had a dispositional belief—that is, a mental state with the functional role of belief, but which was not currently active—with that content. There are many differences between the ways in which this belief was stored and retrieved in each person, of course, but these differences should not lead neuroethicists to say that only one of these beliefs was genuinely a mental state. Instead, we should say that both agents had a belief with the same content; only the vehicle of the content differed.
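The functionalist reading of the Otto/Inga case can be caricatured in a toy program. This is purely my own illustrative sketch—the names (`BeliefStore`, `Notebook`, and so on) are inventions, not anything in Clark and Chalmers—but it shows the structure of the parity claim: behavior depends only on the functional role the belief store plays, not on whether its vehicle is neural or paper.

```python
from abc import ABC, abstractmethod

class BeliefStore(ABC):
    """Functional role: store and retrieve dispositional beliefs."""
    @abstractmethod
    def recall(self, query: str) -> str: ...

class BiologicalMemory(BeliefStore):
    """Inga's vehicle: an internal, neural store."""
    def __init__(self):
        self._memories = {"MoMA location": "53rd Street"}
    def recall(self, query):
        return self._memories[query]

class Notebook(BeliefStore):
    """Otto's vehicle: an external, paper store he consults."""
    def __init__(self):
        self._pages = {"MoMA location": "53rd Street"}
    def recall(self, query):
        return self._pages[query]

def set_out_for_exhibition(memory: BeliefStore) -> str:
    # Behavior is fixed by the role, not by the realizer.
    return f"Heading to {memory.recall('MoMA location')}"

# Same content, same functional role, different vehicles:
assert set_out_for_exhibition(BiologicalMemory()) == set_out_for_exhibition(Notebook())
```

Nothing in `set_out_for_exhibition` can distinguish the two stores; that opacity to the vehicle is the programmatic analogue of the Parity Principle.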

On the basis of this imaginary case, Clark and Chalmers advance the Parity Principle, which is an explicitly functionalist principle. If something functions in the way in which a mental state functions for an agent, then it is a mental state. One could, indeed, argue that the Parity Principle is an entailment of functionalism. Functionalists believe that what makes something a mental state of a particular type depends not on what its realizer or vehicle is, but on the role it plays; on its causal relations to inputs from the environment, to other mental states, and to behavior. Functionalism is motivated, among other things, by the thought that aliens or robots or the advanced computers of the far-future might have mental states, yet obviously these mental states will not be realized by neural networks in precisely the same way human mental states are realized. Instead, functionalists claim, mental states can be multiply realized: the same type of mental state (a pain, the belief that aspirin helps headaches, the desire that I take aspirin) can be realized by a variety of different vehicles, some of them made of flesh, some of silicon or what have you. If functionalism is true, there seems to be little motivation for regarding what is within the head as especially privileged. If something outside the head plays much the same role in cognition as something within, then—given the truth of functionalism—it should be ascribed the same status as it would were it in the head.

The Parity Principle attempts to capture the conditions which an extra-neural process or mechanism has to satisfy in order for it to count as mental. Obviously, it is not sufficient that some mechanism contribute causally to my thinking for that mechanism to count as part of my mind. In addition, Clark and Chalmers claim, the resource must be constantly and easily accessible; its contents must be automatically endorsed and must have been consciously endorsed in the past. These conditions are met by the information contained in Otto’s notebook, but they are not met, for instance, by Wikipedia (in relation to me). Though I often consult Wikipedia, accessing it is relatively slow and effortful for me (I do not always have an internet connection available) and I often wonder whether the entry is correct. In this case, it seems that the conditions Clark and Chalmers set down give the right result, because it is clearly false that I believe everything that is written in Wikipedia (right now, before I look).
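Read as jointly sufficient conditions, Clark and Chalmers's criteria can be expressed as a simple predicate. The field names below are my own paraphrases of the conditions stated in the text, and the two example resources correspond to Otto's notebook and Wikipedia-in-relation-to-me:

```python
from dataclasses import dataclass

@dataclass
class Resource:
    constantly_accessible: bool
    easily_accessible: bool
    automatically_endorsed: bool
    previously_consciously_endorsed: bool

def counts_as_mental(r: Resource) -> bool:
    # Clark & Chalmers's conditions, read as jointly sufficient:
    # a resource that meets all of them counts as a vehicle of belief.
    return (r.constantly_accessible and r.easily_accessible
            and r.automatically_endorsed and r.previously_consciously_endorsed)

ottos_notebook = Resource(True, True, True, True)
wikipedia_for_me = Resource(False, False, False, False)  # slow, effortful, doubted

assert counts_as_mental(ottos_notebook)
assert not counts_as_mental(wikipedia_for_me)
```

The conjunction makes vivid why Wikipedia fails on several independent grounds at once; the worry raised in the next paragraph is the converse one, that the conjunction may be too demanding to be necessary.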

Though the conditions Clark and Chalmers set out might be sufficient for a state to count as mental, there are reasons to doubt that they are necessary. Take ease of access: in both normal and pathological cases (ranging from ordinary lapses through to dementia), dispositional beliefs may not be readily retrievable. Yet they are clearly mental states. In any case, if the proponent of the extended mind can show that some extended states satisfy these conditions, then, given that the conditions are sufficient, the thesis will have been adequately demonstrated.

The extended mind hypothesis is driven by two independent phenomena: new developments in technology, on the one hand, and new discoveries in cognitive science on the other. The new developments in technology include many which fall within the purview of neuroethics, broadly construed: cognitive enhancement technologies which do not simply target the brain and its capacities but which instead work to enhance cognition by interfacing the brain with extra-neural and extra-somatic mechanisms. For instance, brain-computer interfaces which expand their users’ cognitive powers by reducing the differences between Otto’s notebook and Wikipedia would blur the boundaries between internal and external informational states, or reduce them to irrelevance (from the functionalist perspective). At the limit, it might be true that I (dispositionally) believe the contents of Wikipedia, if I effortlessly and automatically retrieve its contents via such an interface. Of course, it may be that the kinds of technologies envisaged here prove to be beyond the capabilities of science for the foreseeable future. But the extended mind hypothesis is not only driven by such developments. It is also motivated by work in cognitive science and evolutionary theory.

One motivation is a perspective driven by a lively respect for the parsimony of evolution. Evolution is very conservative and very frugal; in general, it will find the cheapest and simplest means for organisms to achieve their goals. Now, it seems that for those goals which involve cognition, the cheapest and simplest means often involve the organism relying upon external resources to cut down on effort and processing costs. Brain-based cognitive processes are energetically demanding, and evolution is sensitive to even small increments in such costs. Moreover, there are opportunity costs as well; cognitive resources expended on one task are not available for other tasks. If it is possible to outsource cognitive tasks and such outsourcing reduces these costs (without, of course, raising other costs excessively) then evolution can be expected to hit upon ways of outsourcing.

The most obvious ways in which organisms outsource cognitive costs involve memory. When the environment is stable (enough) it makes sense to use it as (in a phrase Clark uses many times, owed originally to Rodney Brooks) “its own best model.” The organism uses the world as its own best model when, rather than constructing a detailed inner representation of the environment, with all the associated costs involved in creating and storing such a representation, the organism relies upon the stability of the world and its perceptual access to it in order to retrieve information when it needs it. Studies of the visual saccades of human beings engaged in various tasks such as copying a model indicate that human beings proceed in just this way under certain circumstances, thereby reducing the burden on the brain (Clark 2008). Now if human beings had constructed a detailed inner representation and guided their behavior by reference to it, philosophers would have no hesitation in treating the representation as part of the mind; since the external representation—the world itself—plays precisely the same role in human behavior, including mental behavior, it ought to be counted as mental by the Parity Principle.
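The contrast between building a detailed inner representation and using the world as its own best model can be sketched in code. All the names here are invented for illustration (real saccade studies are of course far richer); the point is simply that the lazy strategy pays a perceptual cost only when and where information is actually needed, while the model-building strategy pays storage and construction costs up front.

```python
class World:
    """A stable environment that can be perceived on demand."""
    def __init__(self):
        self._state = {("shelf", 1): "red block", ("shelf", 2): "blue block"}
        self.perception_count = 0  # tally of 'saccades' to the world

    def perceive(self, location):
        self.perception_count += 1
        return self._state[location]

class InnerModelAgent:
    """Builds and stores a full internal representation up front."""
    def __init__(self, world, locations):
        # Costly: one perception per location, plus storage for the copy.
        self.model = {loc: world.perceive(loc) for loc in locations}

    def what_is_at(self, location):
        return self.model[location]

class WorldAsModelAgent:
    """Stores nothing; looks back at the world whenever information is needed."""
    def __init__(self, world):
        self.world = world

    def what_is_at(self, location):
        return self.world.perceive(location)  # just look again

world = World()
frugal = WorldAsModelAgent(world)
frugal.what_is_at(("shelf", 1))  # perceives only now, and only this location
assert world.perception_count == 1
```

On the Parity Principle, the dictionary inside `InnerModelAgent` and the `World` object queried by `WorldAsModelAgent` play the same role in guiding behavior, which is exactly the point of the copying-task studies.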

Using the world as an external memory store reduces the burden on the human brain, but it is far from the most spectacular (alleged) extension of the mind. It is relatively unspectacular for the following reason: it allows agents to accomplish more cheaply and efficiently a goal—representing certain states of affairs—which we could, in principle, have accomplished in a brain-based manner. But extending the mind is not just something which makes agents better at achieving goals they might have achieved in other ways; it also allows them to do things they could not otherwise have done at all.

One thing we accomplish by externalizing memory is to make it publicly accessible and resistant to decay. It then becomes available for learning and transmission, with far greater fidelity than could otherwise be achieved (with perfect fidelity, insofar as the externalized object is its own model). Human cultures have thus externalized information for thousands of years: material artefacts have many uses, one of which is representational (Sterelny 2004). Of course, with the advent of written language, the range of facts that can be publicly represented expands to include anything that can be thought. As a consequence, ways of accumulating knowledge become possible which would not be available were agents thrown back on brain-based resources. In turn, this makes far greater specialization possible. One person can specialize in using a technology (say, computers) without needing to know how that technology works, secure in the knowledge that other specialists can build and repair the technology (that specialist, in turn, can perform the task because she has specialist knowledge about how to retrieve information from the public store, as well as how to operate on that knowledge when it is retrieved).

External representations expand cognition in an even more direct way. There are cognitive tasks that we can perform only because we are able to operate on tokens that represent states of affairs, rather than having to work with iconic representations. This is clearest in the case of number. Human beings have two innate—brain-based—number senses. We have an innate sense of small exact numbers; we come equipped to understand the differences between quantities like “one” and “two” and “three.” We also have an innate sense of large differences between approximate quantities; we intuitively grasp the difference between about seventy and about ninety-five. But we have no intuitive grasp of large or even medium-sized exact numbers, and therefore no intuitive grasp of the difference between “seventy” and “seventy-one.” When agents become capable of representing exact quantities using number words, these differences for the first time become available to them for calculation. Dehaene et al. (1999) showed that subjects engaged in exact number tasks showed significant activation in speech-related areas, while engagement in approximate number tasks did not give rise to such activation. The ability to engage in such number tasks is dependent on the availability of extra-neural linguistic representations, representations that are, of course, available to the learner in virtue of cultural transmission. The professional mathematician can think about mathematical tasks orders of magnitude more complex still, because she has available to her many more representations upon which to operate. Thus, she can think of multidimensional spaces, even though her brain is unable to represent such spaces except by using tokens.
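The difference between the approximate number sense and symbolically mediated exact number can be caricatured as follows. This is a toy sketch: the ratio-sensitivity threshold of 0.15 is an illustrative parameter of my own choosing, not a figure from Dehaene et al., and the function names are invented.

```python
def approximate_sense_distinguishes(a: int, b: int, threshold: float = 0.15) -> bool:
    # Toy model of the innate approximate number sense:
    # only quantity differences above a ratio threshold register.
    return abs(a - b) / max(a, b) > threshold

def symbolic_sense_distinguishes(a_token: str, b_token: str) -> bool:
    # With external number words, any exact difference becomes available:
    # distinct tokens are discriminable however close the quantities.
    return a_token != b_token

# ~70 vs ~95: a large approximate difference, intuitively graspable.
assert approximate_sense_distinguishes(70, 95)
# 70 vs 71: below the ratio threshold, invisible to the approximate sense...
assert not approximate_sense_distinguishes(70, 71)
# ...but trivially available once the quantities are tokenized as words.
assert symbolic_sense_distinguishes("seventy", "seventy-one")
```

The exact comparison operates on tokens, not magnitudes; that is the sense in which the culturally transmitted representations, rather than the innate senses, carry the cognitive load.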


The extended mind hypothesis has met with a mixed reception from philosophers. Some have embraced it as an obvious extension of functionalism; others have rejected it out of hand. Resistance to the thesis has been motivated by two major concerns: a denial that external resources have the right kinds of contents to count as mental, and the claim that only internal mental states are psychological kinds.

Mental states, recall, have contents. To that extent, my internal representational states and Otto’s notebook are analogous: they both are repositories of a great deal of information including all kinds of information expressible in propositional form. According to Adams and Aizawa (2008), however, there is a dramatic difference between the kinds of content they contain. Otto’s notebook contains only derived content; that is, its content is meaningful only in virtue of conventions and the intentions of agents. Signs, for instance, have merely derived contents: they carry information only in virtue of various conventions such as the conventions of natural languages. Minds, Adams and Aizawa claim, are different: they and they alone have content that is intrinsically referential. They must have such content, because the alternative is an infinite regress; derived content must derive from somewhere.

One obvious response to this line of argument is to point out that ordinary minds contain derived content as well as (arguably) non-derived content. Humans think in natural languages, but natural languages are only derivatively meaningful. However, the defender of intrinsic content might maintain that human minds do not contain derived content in any important sense. Some philosophers and cognitive scientists, following Fodor (1975), maintain that we think in a “language of thought,” not in natural languages, where the language of thought is non-derivatively referential. On this view, propositions in natural languages must be transformed into equivalents in the language of thought before they can become the contents of minds. But it seems false that all human thinking is carried out in a language of thought: at least some thinking does seem to be carried out in natural languages (as well as by way of the manipulation of other conventional symbols). Indeed, the work of Dehaene et al. (1999) mentioned earlier seems to show just that. Bilinguals trained in exact mathematical tasks in one of their native tongues perform better at that task if the instructions are given in the same tongue: had they translated their number words and concepts into a more basic language of thought, they should have performed equally well in both their native tongues.

In any case, no matter how internal processes are implemented, insofar as thinkers are genuinely concerned with what enables human beings to perform the spectacular intellectual feats exhibited in science and other areas of systematic enquiry, as well as in the arts, they need to understand the extent to which the mind is reliant upon external scaffolding. Clark (2008: xxv) quotes a revealing exchange between Richard Feynman, the great physicist, and Charles Weiner. Weiner described a collection of Feynman’s notes as “a record of the day-to-day work.” Feynman rejected the description, claiming it wasn’t a record of the work, it was the work:

“I actually did the work on the paper,” he said. “Well,” Weiner said, “the work was done in your head, but the record of it is still here.” “No, it’s not a record, not really. It’s working. You have to work on paper and this is the paper. Okay?”

Feynman’s point, I take it, is that the paper should not be thought of as simply recording what passed through his head, but instead as constituting a part of the cognitive loop involved in his thinking. Once ideas were externalized and labeled, they became available for a kind of representation and extension otherwise impossible. Notes on paper, or on a computer screen (or mathematical models, or what have you) do not make contemporary physics or other kinds of intellectual endeavor easier; they make it possible.

The second major worry advanced by critics of the extended mind hypothesis focuses on its utility as a framework for cognitive science. Some philosophers and cognitive scientists have worried that the causal processes involved in extended systems are so diverse that there could be no science of the extended mind. A science, they argue, has as its domain a set of processes that are causally individuated. And in actual fact, they argue, the sciences of the mind qualify as proper sciences on just this basis: neuroscience, cognitive psychology, and so on, reveal a set of causal regularities that characterize (properly) mental processes. Since the causal processes involved in extended processes are so diverse (looping from brains to notebooks or to computers, or to gestures, or what have you), the prospects for a science of the extended mind are slim. By delineating a unified set of causal processes, science cuts nature at its joints; thinkers should therefore be guided by science in their ontology. If there is no science of the extended mind, because it is too causally diverse, there is no genuine extended mind: instead, the “extended mind” should be seen as constituted out of a set of processes and mechanisms each of which could be the subject of a genuine science and each of which should figure by itself in the list of the constituents of the world.

One problem with this argument is that many actual sciences fail the proposed test: The science of animal communication includes causal processes as disparate as communication by the use of pheromones, threat displays, the dance of honey bees, and territory marking by birds, as well as natural language in human beings. There are few general laws which circumscribe all and only these phenomena: instead they are unified only by their functional similarities. Perhaps proponents of the causal regularities view of science would claim that animal communication is not a proper science, or perhaps they would claim that a science ought, if possible, to have as its domain a single set of causally individuated processes. In that case, however, it seems that cognitive psychology will no longer count as a science, since the mental processes it studies are too diverse to constitute the domain of a science: controlled and automatic processes seem causally quite different.

The criticism that the extended mind thesis will impede science rests, finally, on a misconception: that advocates hold that the proper object of the science of the mind is the extended mind rather than the brain/central nervous system (CNS). Instead, as Clark’s work argues and exemplifies, proponents urge work at many levels and by many specialists simultaneously. The extended mind thesis is not the denial that the brain is special: it is only because the brain/CNS has certain characteristics that there are extended mental systems at all. Studying the extended mind requires the study of the brain; it also requires the study of how the brain interfaces with diverse extra-neural and extra-somatic processes and mechanisms.


Neuroethics is concerned, inter alia, with the permissibility and the advisability of intervening in the mind. Many of the biggest controversies in neuroethics centre on this topic. Is it permissible to use cognitive enhancers to increase the abilities of those who are already functioning in the normal range? Does the use of affective enhancers—say antidepressants or “love drugs”—threaten the authenticity of individuals or relationships? Does the prolonged use of methylphenidate as a treatment for ADHD threaten to alter the identity of users? These questions need to be rethought in light of the extended mind hypothesis.

Much of the heat and the hype surrounding neuroscientific technologies stems from the perception that they offer (or threaten) opportunities genuinely unprecedented in human experience. But if the mind is not confined within the skull, psychopharmaceuticals and other interventions targeted at the brain (say, transcranial magnetic stimulation) are not unprecedented inasmuch as they intervene in the mind. Instead, intervening in the mind is ubiquitous. It becomes difficult to defend the idea that there is a difference in principle between interventions which work by altering a person’s environment and those that work directly on her brain, insofar as the effect on cognition is the same; the mere fact that an intervention targets the brain directly no longer seems relevant (ethically or even psychologically). Of course, many environmental interventions target the brain indirectly; better education, for instance, alters the brain just as surely as might cognitive enhancing psychopharmaceuticals. But even if the effect of the intervention is to alter the environment only, insofar as it affects cognitive performance it is hard to see why this fact matters. Clark (2003) recounts how some dementia sufferers remain able to live independently long after neuropsychological testing indicates that their level of cognitive functioning is below the threshold believed to be required for someone to be capable of taking care of themselves. The tests are accurate, but they do not take into account the ways in which some people learn to structure their environment to take the burden off their brains (taking cupboard doors off, so they can see the contents at a glance, organizing their environment spatially, so that what they need comes to hand when needed, and so on). The effect is to raise their level of cognitive performance above what they could achieve given their brain alone (in a less well organized environment).

Given these facts, neuroethicists need to be alive to the possibility—indeed, I think, the likelihood—that much of the opposition to cognitive enhancements, as well as the preference for talk therapy over psychopharmacology or deep-brain stimulation, and so on, stems from internalist prejudices: from the inchoate thought that the mind is to be found within the skull and there alone. Once it is recognized that human cognitive success and even our identities as psychological beings depend on extended processes, including processes every bit as mechanical and arational as internal interventions, neuroethicists ought to begin to assess interventions based on their effects, and not on their location. Agential resources of self-control can be strengthened by altering the agent’s environment or by drugs; which should be done depends, I claim, not on the means by which the result is achieved, but on the effects: which achieves the best balance of benefits over costs? That one intervention is achieved by means that directly target the brain whereas another directly targets external states isn’t, by itself, relevant; this kind of difference matters only when it makes a difference to the costs and benefits.

The central questions of neuroethics must be rethought in light of the extended mind thesis. Questions like “ought society use our new powers to intervene into the minds of agents?”; “does the dependence of someone on external props affect their identity or their authenticity?”; “is it wrong to alter human nature?”; “ought human beings adopt an attitude of gratitude for the unforced gifts of nature, and not interfere with them?” all seem to depend, directly or indirectly, on the idea that in principle it is possible to separate the human mind from its environmental embedding. That is, the questions seem to presuppose that human beings have a choice about whether to intervene into the minds of agents; we can either continue as we are, with unaltered minds, or use our new technologies to intervene. Similarly, the questions presuppose that we can choose between having our identities depend on external props or not, between remaining in a natural state or becoming deeply dependent on the artificial, and so on. If the extended mind thesis is true, these claims are untenable. Human beings have always been dependent on external props to make us who we are. If human beings are rational animals—that is, if our cognitive success is definitive of what we are, as a species and as individuals—then we have always been deeply dependent on external props to make us the kinds of being we are. New technologies and psychopharmaceuticals do not mark a difference in principle; they give us new means to do what we have always done. That is not to say, of course, that we ought to use these new means, just that we should not reject them on the grounds suggested.

Taking the extended mind thesis seriously does not commit neuroethicists either to accepting or to rejecting new technologies. It commits us, rather, to assessing them on grounds which are not a mere reflection of internalist prejudices. That may prove harder to do than it might seem, because internalist prejudices may be deeply buried in arguments apparently turning on other considerations. Consider the oft-heard, and eminently sensible, concern that cognitive enhancers would cause inequality. Though the concern is genuine, it is also often voiced by people in whose mouths it seems confabulatory: bioconservatives like Francis Fukuyama (2002), for instance. It seems confabulatory because these thinkers are little concerned with the massive inequalities that actually exist, both within and especially between nations. Why do they overlook one set of inequalities while insisting on another? I think the answer is that, like many conservatives, they naturalize inequality (Napier and Jost 2008); they see actual inequalities as the product of nature, or as reflecting the desert of agents. They do not recognize that existing inequalities are dependent on social choices; on the ways in which nations structure their environments. Indeed, the most valuable thing about the intuitions upon which they insist is that they can be turned back upon the views of those who advance them: by demonstrating that actual inequalities are as undeserved as those they contemplate, neuroethicists can motivate the thought that the public ought to be far more concerned with existing inequalities, which are far greater than anything realistically threatened by cognitive enhancement, than with new technologies. Neuroscience has a role in this project, by showing how inequalities in cognitive function are the product, in part, of environments (Farah et al. 2006; Noble et al. 2007).

Clark and Chalmers motivated the extended mind hypothesis by appealing to the Parity Principle. The significance of the hypothesis for neuroethics is that attention to it motivates an Ethical Parity Principle: whether a particular means of altering cognition directly targets the brain/CNS or the external scaffolding shouldn’t make a difference to the assessment of its permissibility or advisability. Causal route is a difference that makes no difference; what matters is the result.


Clark and other proponents of the extended mind thesis appear to me to have good responses to almost all the arguments leveled against it. There is, however, one argument against the thesis which seems to me quite forceful. Some philosophers, Rob Rupert (2004) in particular, have argued on the grounds of theoretical conservatism that the mind ought to be identified with internal goings-on alone. Rupert concedes, as he should, that environmental scaffolding is absolutely essential to human cognitive success, but he argues that recognition of this fact is compatible with the internalist view. He prefers to see mind as deeply embedded in extra-neural and extra-somatic processes and mechanisms, not as genuinely (though partially) constituted by such processes. Insofar as Rupert concedes all the facts upon which proponents of the extended mind insist about external scaffolding, there can be no (direct) empirical response to this claim. Every fact about cognition that could be cited by the proponent of the extended mind Rupert can accept, consistent with rejecting the functionalist claim that if the same kinds of goals and behaviors are subserved by external processes and mechanisms as by internal, they ought to be regarded as the same kind of thing. But just insofar as this is true, the debate seems to become merely terminological. In the end, it doesn’t matter what the mechanisms subserving cognition get called; it is recognition of how it is done that matters.

Embedded cognition views will serve multilevel cognitive science, of the kind Clark advocates, just as well as will the extended mind thesis. They will serve just as well as a basis for neuroethics. What matters to human beings as thinkers doesn’t depend on how these processes are described; what matters is their effects. If neuroethicists and philosophers wish to call only some part of human cognitive machinery our minds, and call the rest its scaffolding, so be it. It remains the case that our cognition is already, and always, deeply dependent on external scaffolding, and that our new technologies do not mark a radical break with our cognitive past. It remains the case that there is no isomorphism between causal routes to mental effects and their moral or intellectual significance. All that is lost, for neuroethics, is the rhetorical power which comes from the identification of extended mechanisms with the mind. Nevertheless, the set of facts to which I appealed in arguing for an ethical parity thesis should not be in dispute.


Adams, F. and Aizawa, K. (2008). The Bounds of Cognition. Malden, MA: Blackwell.

Clark, A. (2003). Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence. Oxford: Oxford University Press.

Clark, A. (2008). Supersizing the Mind. Oxford: Oxford University Press.

Clark, A. and Chalmers, D. (1998). The extended mind. Analysis, 58, 7–19.

Dehaene, S., Spelke, E., Pinel, P., Stanescu, R., and Tsivkin, S. (1999). Sources of mathematical thinking: behavioral and brain-imaging evidence. Science, 284, 970–4.

Farah, M.J., Shera, D.M., Savage, J.H., et al. (2006). Childhood poverty: Specific associations with neurocognitive development. Brain Research, 1110, 166–74.

Fodor, J. (1975). The Language of Thought. Cambridge, MA: Harvard University Press.

Fukuyama, F. (2002). Our Posthuman Future: Consequences of the Biotechnology Revolution. New York: Farrar, Straus and Giroux.

Hurley, S. (1998). Consciousness in Action. Cambridge, MA: Harvard University Press.

Napier, J.L., and Jost, J.T. (2008). Why are conservatives happier than liberals? Psychological Science, 19, 565–72.

Noble, K.G., McCandliss, B.D., and Farah, M.J. (2007). Socioeconomic gradients predict individual differences in neurocognitive abilities. Developmental Science, 10, 464–80.

Rupert, R.D. (2004). Challenges to the hypothesis of extended cognition. Journal of Philosophy, 101, 389–428.

Sterelny, K. (2004). Externalism, epistemic artefacts and the extended mind. In R. Schantz (ed.) The Externalist Challenge. New Studies on Cognition and Intentionality, pp. 239–54. Berlin: de Gruyter.