empathize with that individual and so feel the motivation (albeit perhaps weakly) that that individual would feel under those circumstances, thus experiencing a relevant kind of motivation.
We have thus seen how a form of ethical cognitivist internalism can be true of a significant class of practical actions.
Coda: Appraisal and Motivation
Psychologists and philosophers agree that emotions normally include or constitute some kind of appraisal or evaluative element (I will take "appraisal" and "evaluation" to be the same). There is something more to an affective state than just an observation, like observing that the grass is green. Rather, in being angry or afraid, we seem to be making an assessment of the value of a thing. I may judge that a rabid dog is going to hurt or possibly kill me, and given that I value myself, this judgment involves an evaluative element. It is this insight that appears to guide, in part, some judgmentalists, who, although wrong when they claim to have an identity theory of emotions, are often right about some of the broad outlines of how emotions can function in our lives. Indeed, one common kind of argument that emotions must be cognitive is built on the claim that basic emotions must start with, or otherwise be in part constituted by, an appraisal of some kind, which is assumed to be cognitive. The final debt to cognitivism is to show how the affect program theory can explain appraisal.
Taken as a primitive, the idea of appraisal, like the idea of a disposition to action, is obscure. Typically, the cognitive theory of emotions can seem plausible in this regard because it can allow for cognitive judgments, which are then assumed to be powerful enough to include appraisals. This does not in any way help explain what appraisals are. However, the same parsimony earned in explaining the disposition to action (by drawing attention to the motor program or other essential motivation to an action that constitutes in part a basic emotion) can be earned in explaining some kinds of appraisals. As I understand it, the intuition underlying the notion of appraisals is that they are special kinds of judgments or mental states; they are special because, unlike many other judgments, they are states that are sufficient for motivating action in some situations. We can clarify this intuition substantially. I propose that we can define "appraisal" in the following way: an appraisal is an affect, or is a representational state of a kind that reliably excites an affect. The emotional judgments discussed above are examples of appraisals. One judges that something is infuriating, or terrifying, or disgusting in these cases, and as a result one is motivated to attack, or flee, or expel. This motivational aspect explains what makes an appraisal the kind of state it is (as opposed to a representational state that is not sufficient to motivate).
This definition is prima facie like most standard notions of appraisal, with the very important exception that a view like the affect program theory
means that the appraisals will fall into categories depending upon the kinds of affects they are or excite. That is, there will be different kinds of fundamental appraisals corresponding to different kinds of affects. Thus the ubiquitous talk about "negative" and "positive" appraisals at best describes rich cognitive states that will have complex and very indirect (and hence, perhaps unreliable or irregular) relations to affects. More fundamental appraisals are things like fearful, disgusting, terrifying, and depressing. In between, there may be kinds of states, including kinds of cognitive contents, that are to varying degrees appraisals; such things might include judgments that are closely related to (perhaps in that they directly entail) emotional judgments.
I have said that an appraisal is a representational state, because the notion does not need to be cognitive in the sense that I use the term "cognitive" in this book. We may have representations of concreta or of mere stimuli that are not propositional attitudes, but which excite an affect. Thus, there can be subcognitive appraisals.
9 Four Puzzles for Consciousness
Abstract: After a basic review of the contemporary debate about phenomenal consciousness, four puzzles about emotional experience are discussed. First, emotions appear to be essentially motivational states, making them poor candidates for arguments akin to inverted spectra or zombie worlds. Second, emotions differ significantly in their character, whereas some phenomenal experiences do not. Third, emotions vary significantly in their intensity, another feature lacking in some phenomenal experiences. Fourth, representations do not appear to be the best explanation for the features of emotional experience.
Craig Delancey
In recent years, there has been an explosion of interest in explaining phenomenal experience. But although most of us would consider affective experiences prototypical examples of phenomenal experiences, these have largely been neglected by the philosophers (if not by the scientists) concerned. This neglect of emotions can hide long overlooked opportunities and serious problems when our goal is to understand the nature of consciousness. In this chapter, I will pose four puzzles about affective experience that any full account of consciousness will have to solve, and in the next chapter I will offer a theory that solves these puzzles.
A Very Basic Review of the Terrain
(Those familiar with the contemporary consciousness debate may wish to skip to the next section.) We can make a distinction between the functions of consciousness and phenomenal experience itself. This distinction is merely conceptual—it is not an argument or claim that these things are distinct in the world—but it does roughly correspond to the most significant division in the present consciousness debate. On one side of the division are those who believe that this conceptual distinction is so fundamental that phenomenal experience is not going to be explained by any account of working consciousness. For these theorists, the very existence of phenomenal experience is a fundamental problem that seems to fall outside any kind of functional account of psychological processes. This is because there seems to be a difference in kind between phenomenal experience and working consciousness; phenomenal experience does not seem to be a functional notion. David Chalmers calls this "the hard problem," the problem of giving an account of the nature of phenomenal experience without just supplying an account of working consciousness (1995). To varying degrees, Ned Block (1995), David Chalmers (1996), Frank Jackson (1982), Thomas Nagel (1979), and others accept that there is a profound difference in kind between the things explained in our functional theories
and our phenomenal experience. From the perspective of these philosophers, although we have theories of how people learn, how they perceive, how they use and understand language, we have little or no insight into why it is that people feel—why the universe has anything like feeling at all. The progress of the sciences of mind has nothing to say about the existence of phenomenal experience, and so the questions about such experience become more and more compelling: why is human life not just brute mechanical processes, the way that many of us think a landslide or a planet's orbit is? Does not the scientific worldview make it perfectly possible that human life could be like this? Giving a solution to the hard problem amounts to offering some kind of account of why there should be experience at all.
Any reader who is not a philosopher may be confused by this perspective. After all, psychologists have always been concerned with consciousness, and they have made great strides in understanding it. Why claim it is a mystery impervious to science? Anyone who feels this way can be placed on the other side of the divide. The biggest alternative to the hard-problem camp, what we might call the working-consciousness camp, is to argue that accounts of consciousness as it is studied by scientists—focused on awareness, ability to report on a mental state, being able to follow instructions, and so on—are sufficient to explain phenomenal experience. For philosophers, a wide range of views can cohere with this perspective, including identity theory (with the claim that phenomenal experiences are just brain states) and varying kinds of functionalism (with the claim that phenomenal experiences are just functional roles). Many scientists are in this camp, if only because scientists who concern themselves with consciousness may not distinguish between working consciousness and phenomenal experience. After all, scientists must begin with what appears to be causally significant—what is "working," in the sense we are using it here. Thus, typically, neural scientists or psychologists aiming to explain "consciousness" are working on an account of how some aspect of working consciousness arises and helps us solve problems; and some of these scientists may believe that such an account will be a sufficient explanation of phenomenal experience.

Ambiguities of "Consciousness"

The term consciousness can refer to a kind of ability to report features of one's own mental state, to being awake and aware, to the felt experience of being in the world, or to other things. The philosopher Ned Block draws a primary distinction between phenomenal consciousness and access consciousness. Phenomenal consciousness is the what it is like of an experience. Block writes: "[Phenomenal] consciousness is experience. . . . [Phenomenal] conscious properties include the experiential properties of sensations, feelings, and perceptions, but I would also include thoughts, desires, and emotions. . . . I take [phenomenal] conscious properties to be distinct from any cognitive, intentional, or functional property" (1995, 230). Access consciousness, in contrast, is a functional notion: "A state is access-conscious . . . if, in virtue of one's having the state, a representation of its content is (1) . . . poised for use as a premise in reasoning, (2) poised for rational control of action, and (3) poised for rational control of speech" (231). Block has also suggested that there are several other notions of consciousness, including self-consciousness and monitoring consciousness (235). Other philosophers have introduced similar distinctions. For my purposes, each of these senses of consciousness can be placed into one of two groups: phenomenal experience, the notion of consciousness as experience; and working consciousness, being Block's notion of access consciousness and also including notions of awareness, attention, ability to report on a mental state, and so on. (In chapter 1, I required that for a mental state to be working conscious the subject must be able to report on it in some way; here I will drop that strong criterion, and assume that the intuitive, albeit vague, notions of consciousness listed above are sufficient to distinguish working consciousness from phenomenal experience.)
These alternative perspectives on consciousness are in tension because they tend to go hand in hand with very different approaches to consciousness. Those in the hard-problem camp continually want to shear the notion of phenomenal experience away from any straightforward kind of functional explanation. It is not clear how legitimate such a move is. On the one hand, it is indeed the case that conceptual differences can reveal differences in the world. If someone tried to explain number in terms of gravity, or weather in terms of hope, we would tell them that they were making a category mistake, that they were mixing two very different kinds of things. Perhaps these mistakes are of a kind with the mistakes that the hard-problem camp philosophers claim to identify in functional accounts of phenomenal experience. On the other hand, it is difficult to know when such distinctions are legitimate. Anyone with empiricist inclinations—that is, anyone who believes that experience is the sole or at least primary determinant of knowledge—will have grave doubts that there are conceptual distinctions that do not just reflect some, perhaps unconscious, theoretical presuppositions. In this case, the conceptual difference that separates our understanding of phenomenal experience from our understanding of working consciousness may be nothing more than an expression of some prejudices. But it is equally fair to say that most in the working-consciousness camp tend to deny the very thing at issue, or at least its subtleties. Most theories in this camp tend to have a "consciousness is just x" account that leaves us no more clear about why there is experience, or how experience fits into the natural world, than when we started. It is quite easy, after all, to proclaim that "consciousness is just a brain state." This really does not offer much for our understanding of experience.
At this stage in our understanding of consciousness, the most fruitful method of theorizing may be to tread a course between these extremes. One can start from the working consciousness camp, and take lessons about how
to develop a naturalist theory of working consciousness, but should always keep an eye on the hard problem camp, and try to make progress in understanding how phenomenal experience is related to working consciousness or to physical body states. The most promising accounts of consciousness that take such a middle path include the representational theories of consciousness, explicitly endorsed by William Lycan (1987, 1996) and Michael Tye (1995a), but also implicit in the theories of Paul Churchland (1989b) and in the approaches of many scientists. It would seem that the representational theory of consciousness is the natural starting place for any thinking about consciousness. Working consciousness, for example, will need to be explained in representational terms: attending to some mental contents, explicitly manipulating them, forming memories and recalling memories; these are all the kinds of things that will require at least some minimal notion of our conscious state as using representations. But the theory also has much to offer to help us understand phenomenal experiences. A pain in my finger is about something—it carries information (assuming things are functioning normally) that some kind of damage has been done to my finger—and as such this phenomenal experience is an intentional state. As I will briefly discuss below, this approach may also solve some other problems that phenomenal experience raises. Most important, if phenomenal experiences are representations, then we have a way to explain the role that they can play in a mind.
In this chapter and the next, I will develop and defend an alternative version of this kind of theory: a teleofunctional account of consciousness, in which the conditions that allow for representation, but not representations themselves, are necessary for an account of phenomenal experience. The views I will explore are definitely not an answer to the hard problem. Rather, I am interested in an account of the relation between working consciousness, physical body states, and phenomenal experience. I will begin by seeing what our best understanding of emotions can tell us about consciousness. I pose these insights as four puzzles. The first is general, and applies to any theory of consciousness; the rest are concerned with a representational theory of consciousness.
Puzzle 1: Why Do Hollywood Zombies Shuffle and Mumble?
My first puzzle is concerned with any theory of consciousness that separates the function—if any—of consciousness from phenomenal experience. This generally includes two kinds of theories. The first is epiphenomenalism, where the phenomenal experience of consciousness does no functional or causal work. Frank Jackson (1982) is an example of an epiphenomenalist. The second kind of theory is one in which phenomenal experiences may play a functional role, but that role can, to some degree, be played by different qualia;46 I will call this token role theory. For a token
role theory, a red quale might be said to be playing a role in my navigation of my environment, but the quale could be switched with another quale and, as long as the quale-role correlation remains consistent, this role could still be fulfilled. Paul Churchland (1989b) holds such a view; Michael Tye (1995a) comes close to it.
A host of thought experiments have been offered as evidence, or at least as motivation to believe, that the concepts of working consciousness and phenomenal experience, with their distinct intensions, can have distinct extensions. The method is to urge that these concepts are so distinct that what we mean by each can be had without what we mean by the other. Here I will consider two such thought experiments: the inverted spectrum and the phenomenal zombie.
In the widely familiar inverted spectrum thought experiment we are asked to suppose that, say, Adam's blue were my and your and Karen's red, and vice versa, and that this relation were consistent and continuous, so that Adam's experience of the spectrum were like an inverted mirror image of Karen's, and both Adam and Karen call the same things by the same color terms, even though the phenomenal experience would usually be different. The result is that although their color terminology would function always the same way, their individual experiences would be distinct. This seems a possibility because, in this coldly abstract case, there seems to be nothing in our concept of the experience of color that presupposes any specific functional role for that specific color experience. The phenomenal experience of color seems to be just a kind of tag, and the functional role of color recognition relies only upon a consistent and regular tagging, and not upon the idea that the tags be of any particular kind.
Intension Versus Extension
Intension and extension are logical notions. A concept or term's extension is the collection of all the things of which it is true. For example, the extension of "has a kidney" will include all those organisms which have a kidney (all humans, dogs, etc.). Intension is a more subtle notion, amounting to something like meaning. No clear and uncontroversial definition is available as of yet; however, it can be easily motivated by contrasting it with extensions. The extension of "has a liver" might be the exact same set of organisms that have kidneys—that is, it could just be a contingent fact that everything that has a liver also has a kidney, and vice versa. But this does not mean that "has a liver" and "has a kidney" mean the same thing. We say that their intensions are different. This is relevant here because it could be that everything that has phenomenal experience also acts in a way exactly corresponding to what we would expect if phenomenal experience played some functional role, but that the two were still different things. This is essentially the thing at issue in the first puzzle.
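The kidney/liver example can be made concrete with a small sketch (the toy domain and its "facts" are invented purely for illustration): two predicates with different definitions (different intensions) can nonetheless be true of exactly the same individuals (the same extension).

```python
# Toy illustration of intension vs. extension.
# Two predicates with different definitions (intensions) can still
# be true of exactly the same individuals (extensions).

animals = ["human", "dog", "cat", "trout"]

# Hypothetical facts about our toy domain: here it just happens
# that everything with a kidney also has a liver, and vice versa.
organs = {
    "human": {"kidney", "liver"},
    "dog":   {"kidney", "liver"},
    "cat":   {"kidney", "liver"},
    "trout": {"kidney", "liver"},
}

def has_kidney(x):
    return "kidney" in organs[x]

def has_liver(x):
    return "liver" in organs[x]

# The extensions coincide in this domain...
ext_kidney = {x for x in animals if has_kidney(x)}
ext_liver = {x for x in animals if has_liver(x)}
assert ext_kidney == ext_liver

# ...but the predicates are defined differently: their intensions
# differ, and in another possible situation (a different "organs"
# table) their extensions could come apart.
```

The coincidence of the two sets is a contingent fact about this domain, which is exactly the point of the sidebar: sameness of extension does not give sameness of meaning.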
There are many reasons to doubt that the inverted spectrum describes a possible or even coherent situation. It could turn out that the nature of this tagging itself requires the order of the spectrum. The experiment could also have seemingly absurd consequences, such as the possibility that color experiences could disappear, or continuously change. If one is an epiphenomenalist, these changes and disappearances of phenomenal experiences would have no effect, and would go unreported. Nonetheless, I will suppose for a moment that this experiment at least shows that the prior intensions—our common or pretheoretic understandings of the meanings of concepts47—of our concepts of phenomenal color experience and of the function of color recognition together do not rule out the possibility of the inversion. If this inversion were actually possible, then this would be consistent with either a token role theory (different color qualia can serve the same function) or an epiphenomenalism (if color qualia did no work then their order would be insignificant).
The thought experiment of the phenomenal zombie (Kirk 1974) takes this kind of intuition a large step further. If we can invert the phenomenal color spectrum, why not get rid of it entirely? We are asked to imagine that there is a person physically identical to one's self but who has no phenomenal experience. The appeal here really lies not in the reference to physical identity—which is something of an appeal to ignorance, since we do not know all the important aspects of our physical features and hence are unsure how they relate to phenomenal experience—but in the idea that phenomenal experience is doing no work, and that normal function could continue without it. Things like talking, moving around, typing, or playing chess are all things that machines might do, and we generally presume that these machines (at least if simple enough) are not conscious. So can't all the things one does also be done by something that is not phenomenally conscious? Could there be a kind of zombie that has no phenomenal experience but is just as competent as a person who does? Here again, if we admit for the sake of the argument that this thought experiment describes a coherent, meaningful possibility, then the thought experiment at least makes it possible that the prior intensions of our concepts of phenomenal experience and working consciousness can be so separated that what we mean by working consciousness can seemingly be had independently of what we mean by phenomenal experience. If the zombie were possible (or, more accurately: if a zombie world that is physically identical to our world were possible), then it would be evidence for epiphenomenalism—the same tasks could get done without phenomenal experience, so it would seem phenomenal experience does nothing.
Conceivability is not a strong argument for an actual distinction, especially because although the notion is meant to be a logical one, it depends upon our mental capabilities. A host of objections are relevant. We may simply be mistaken in thinking that there is a coherent possibility for zombies or inverted spectra; there could even be a contradiction in the experiments that is in principle accessible to us—one that we could easily understand but miss because we have not yet managed to bring the relevant material into our understanding; the required distinction between pre- and posttheoretic intension may be illegitimate; and some impossible situations seem readily conceivable, suggesting that conceivability is not well matched with "possible world" semantics. However, setting these concerns aside, I believe that the real force of these thought experiments lies in their ability to change the nature of the debate about the role of consciousness: they help make plausible certain possibilities (and they may make plausible a shifting of the burden of proof from the epiphenomenal or token role theories to the type physicalist theories—see Chalmers 1996, 96 and 104). The end result is a static balance of conflicting suppositions: the epiphenomenalist will assert that the conceivability of these thought experiments shows that phenomenal experience is prima facie not functional, and the token role theorist will assert that the inverted spectrum shows different qualia could play the same role; while the reductionist may deny that the thought experiments describe conceivable situations, or will assert that although conceivable they are somehow deceptive and prove nothing. Without further information about consciousness we are at an impasse, where each side can insist that the burden for showing why their own presuppositions are inconsistent or otherwise unwarranted belongs to the other side.

Possible Worlds

Philosophers have a way of talking about possibility which can be a bit deceptive to those who first encounter it: they describe possibilities as possible worlds. Possible worlds are meant to be completely described situations. It is often easiest, at least for reasoning through the logic of a situation, to suppose that the situation is complete—that there is nothing absent or indeterminate about it. Since the situation is in principle complete (no one ever actually writes down a complete description, of course; only a description of the details that matter to the argument), you can think of it as an in-principle description of a whole world. Some philosophers actually do believe that these possible worlds are out there "somewhere," and so are really in some sense "existing"; but normally it is understood to be just a useful way of talking about how things could be different. It is worth pointing out all of this because there have often been unfair criticisms made by nonphilosophers who fail to understand the distinctions being drawn: it might seem that philosophers are off on wild flights of fancy when they start talking about these other, slightly different worlds. But if we are going to attempt to understand in what sense two things are dependent upon each other, we will need some notion of what it would be like if things were different. Scientists do this whenever they perform experiments, since they control for different variables and so vary the conditions. For example, if a scientist wants to learn how effective a new drug is at fighting a particular strain of the flu, she cannot just give the drug to everyone and see if they get better. Suppose they did get better; she would be unable to tell whether this was the result of the drug, or just the natural course of the flu. So, instead, she will give some people the medicine, and others a placebo.

In this way, she is comparing two different situations, and so will be able to see what it is like with, and without, the drug. In the same way, there are all kinds of dependencies that we would like to test by looking at how they hold up under different conditions. However, when we are doing metaphysics—when we are asking questions even more fundamental than are usually asked by scientists—we often do not have the luxury of controlling different variables. It is perfectly coherent, for example, to wonder what the universe would have been like with much less, or much more, matter (this is a question cosmologists often ask). But we simply cannot create a universe with more, and another with less, matter (although we now often simulate these in computers). Similarly, we may wonder about the relation between phenomenal consciousness and brain states (as described by contemporary neural science). If there is some new, undiscovered principle that links the two, we could speculate that in a complete situation (a world) without this principle, there could be these brain states without there being phenomenal experience (the zombie "world").
One way out of this impasse can be found by examining as yet neglected aspects of consciousness. The examples used in these thought experiments invariably involve color.48 This choice is not innocent, because our concepts of colors just are concepts for which—presumably because of some features of the phenomenal experience of colors—a distinction between phenomenal experience and functional role is naturally made by many. For the phenomenal experience of emotions, however, no such distinction is natural. To see this, consider the plausibility of these two thought experiments with affects instead of color vision taken as the example of phenomenal experience.
The inverted affects thought experiment would ask us to suppose that Adam's experience of happiness were your and my experience of sorrow, and that Adam's experience of sorrow were your and my experience of happiness, but that Adam acts in each case as is appropriate (by my and your standards).49 It is absurd to suppose that Adam—imagine him having an evening out with close friends—could go around having the phenomenal experience of intense despair, all the while smiling and laughing. Similarly, it is absurd to suppose that Adam—imagine him now at a beloved friend's funeral—could have the intense phenomenal experience of happiness and all the while frown and weep, lose his appetite, and so on. (Of course, these inversions could be, in a weak sense, possible for anyone if they were a good enough actor, but in this thought experiment we are supposing that the person is doing it all the time, from conception onward; that they are sincere; that uncontrollable autonomic responses of the relevant kind are occurring; and so on.)
The inverted affects thought experiment is grossly implausible because our concept of the experience of some affects is both richly phenomenal and functional; and the particular phenomenal experiences—the affect qualia—are specific to their function. Here "function" need not refer to the achievement of some plan or goal, but merely to some behavior, any behavior, since what is at issue is whether phenomenal experience plays some causal role; hence weeping is enough of a function of sorrow for the phenomenal experience to be doing work. Something that is sad causes in us a phenomenal experience that is inseparable from the motivation to weep, to avoid other instances of this kind of event, and so on; something joyful causes an experience that is inseparable from the motivation to laugh, to seek other instances of this kind of thing, and so on.
The affective zombie is a more difficult case because instead of having experience at cross-purposes with action we have the relevant action supposedly happening without the experience. Still, there is no intuitive appeal to a zombie that lacks affective experience but still demonstrates a behavior that is normally the particular consequence of an affect. We must conceive of a zombie that shows the signs of, say, rage, and acts as if it was filled with rage—that strikes out at others with fast and furious intent, shouts vigorously, turns red and gets hot, has increased blood pressure, and so on—but which all the while feels nothing inside. Seen in this way, the zombie is much less plausible. It is not accidental that we have traditionally imagined zombies as shuffling, affectless brutes. Our intuition is that without affects the zombie is a kind of uninspired automaton, and that as a result it behaves as if little more than dead. And even if we can conceive of the zombie having the appearance and behavior of rage, it is not plausible that the zombie acts out of something called "functional rage," which is what normally fills the role of rage but which here excludes the heat and overwhelming experience of rage. Our phenomenal experience of an emotion is inseparable from the motivation that that emotion provides, and the experiment's seemingly enraged zombie that feels nothing requires us to imagine a behavior without its cause—to imagine an effect without an affect! We want then to ask what, if not the phenomenal experience, is motivating the behavior; we want to fill in the void created by the supposition. Just as with the inverted affects thought experiment, the "affective" zombie thought experiment asks us to split a concept that in practice we do not split, and then requires us to discard the half that we cannot do without.50
This is the first puzzle: the phenomenal experiences of basic emotions and other affects are not at all distinct from the physical states that co-occur with them, nor from the motivation they provide. Can we find a theory of consciousness that accounts for this—or can we explain it away?
Puzzle 2: Why Doesn't 2 Hurt?
A representational theory of consciousness is built on the thesis that qualia are representations. The theory is compelling because we can show that some of the more perplexing features of consciousness can be explained by supposing that experiences are representations. But even given the utility of this approach, is there any direct reason why we should think that phenomenal experiences are representations? One reason that we have already seen is that phenomenal experiences can be thought of as telling us something, as carrying information. A pain in the arm is "about" the damage to the arm, and not just some undirected experience. Generally, representations can best be understood as intentional states that function inside a mental system as stand-ins for some object or stimulus in the body or environment, so that some kind of relation to the thing can be managed even if the thing is absent (or even nonexistent). Since intentional states are about something else, it would seem reasonable to suppose that pains, colors, and sounds are also kinds of intentional states. They inform us about things in our environment, damage to our bodies, and so on.
But on a closer look, there is a distinctiveness here that is being glossed over. A pain in my finger might be said to represent the pinprick in my finger. Tony's experience of anger and Adam's experience of disgust may both be said to represent the changes in their bodies that these emotions are causing. Furthermore, these experiences are all very distinct. My experience of disgust is very different from my experience of anger, and both are very different from my experience of pain. At first glance, explaining these kinds of differences seems easy: the phenomenal experiences represent different things, and so of course they are different.
But there are many kinds of representations. Consider the concept of 2. This is ostensibly a referential concept—although what it refers to is controversial. Nonetheless, the concept of 2 represents something; it represents whatever is shared by two ducks, two cats, two mice, and so on. Or consider the referential concept of inflation, or of entropy, or of mass. Each of these concepts represents something quite different. But these concepts, when entertained and so when instantiated as mental states, are not distinguishable by their phenomenal experience. There is no felt difference between thinking about 2 and about inflation—at least not in the same way that there is a striking difference between feeling anger and feeling joy. Note that this distinction is even clearer when we contrast the experience of an emotion and the concept of the emotion. The referential concept of anger does not have a phenomenal experience of its own that distinguishes it from the referential concept of joy, but we surely can distinguish the phenomenal experience of anger from that of joy. There are a host of representations for which there is only the experience of thinking—not also some unique phenomenal experience.
This is the problem of representational distinctiveness:51 the phenomenal experiences of some representational mental states are indistinguishable
from or very similar to each other, while the phenomenal experiences of some other states (such as basic emotions and other affects) are quite distinct. But if phenomenal experience is just representation, why is there any such difference? The second puzzle is to explain this distinctiveness of experiences understood as representations.
Puzzle 3: Why Doesn't 2,000 Feel a Thousand Times More Intense Than 2?
Not only are some representations different in terms of how distinct their phenomenal experiences are, but many are also different in that they admit of varieties of intensity. There is a huge difference between being mildly annoyed with someone and being overcome with rage at them. Basic emotions and many other affects—anger, disgust, fear, depression, elation—admit of great differences in intensity. But many other representations do not. Thinking hard about 2 does not result in an increasingly intense phenomenal experience of conceiving of 2. So some phenomenal experiences vary in intensity, but some representational mental states do not.
This is particularly important because ignoring the issue of intensity makes a token role theory of phenomenal experience more plausible. If experiences are just qualitative simples (qualia) then they would seem quite analogous to symbols: they would be tokened, or not, in a representational system. But intensity, being a magnitude, suggests that there is a stronger and more complex relation between the experience and the body state that underlies it.
This is the puzzle of intensity: can we explain how some phenomenal experiences, especially affects, admit of varying degrees of intensity? And can we explain why some representations do not? If the representational theory of consciousness is correct, then since some representations (such as the referential concept of 2) seem to just be either tokened or not, how are these other states, which admit of intensity, represented?
Puzzle 4: Do We Need the Representation in the Representational Theory of Phenomenal Experience?
It may seem common sense that phenomenal experiences represent. But the term "representation" is ambiguous. We can say that tree rings in a tree represent years, that pain in my finger represents damage to my finger, that "two" represents the concept 2, and that a road sign represents the curve in the road ahead. But these are very different things. The first is a static, natural correlation in the world; the second is an event in a body; the third is a term; the last is an iconic artifact. We need to clarify the sense of "representation" in which phenomenal experiences may be representations in order to make more sense of the representational theory of consciousness.
Michael Tye proposes that representations occur whenever there is causal covariation under optimal conditions:
S represents that P =df if optimal conditions obtain, S is tokened in x if and only if P and because P. (1995a, 101)
All the heavy lifting in this definition is being done by the notion of optimal conditions, but the idea is straightforward. It is also obviously too weak. Tree rings co-vary with years, so tree rings "represent" the age in years of the tree. What then distinguishes a phenomenal experience like pain from tree rings? Much more is needed. Tye gets it by offering his PANIC theory of consciousness: the claim that phenomenal experiences are Poised Abstract Nonconceptual Intentional Contents. They are said to be poised because they "stand ready and in position to make a direct impact on the belief/desire system" (1995a, 138).
As I have shown extensively, we have no reason to believe that desire is anything other than a convenient but vague and unscientific notion, and many affects can operate quite independently of beliefs and of the kind of capabilities that would presumably constitute a "belief/desire system" (such as B-D rationality). But in making his case that phenomenal experiences are representations, Tye has linked them essentially to beliefs and desires. This makes it unclear how subcognitive affective states can have phenomenal experience. The answer might be that a state like subcognitive fear, to be conscious, must then be poised to act on the belief-desire system, since one is by definition in some sense aware of it and so able to have beliefs about it. But this would be vacuous, since anything could so be poised. More important, on such a view, nonhuman animals that lack the kind of skills necessary for a belief-desire system must lack phenomenal experience. Many are willing to bite this bullet (e.g., Rolls 1999), but I consider it an absurd and indefensible consequence.
If we consider a phenomenal experience that is not so obviously able to play a role in a belief-desire system, this problem becomes more acute. Mood is such a state. Tye's account of mood is unobjectionable. He takes depression as an example of a mood, grants that moods have characteristic phenomenal experiences, and also grants that although a mood usually is elicited by cognitive conditions, it can be free-floating: one can feel depressed "without there being anything in particular about which [one] is depressed" (1995a, xv). But, he claims, moods are representational:
What exactly they represent is not easy to pin down, but the general picture I have is as follows: For each of us, there is at any given time a range of physical states constituting functional equilibrium. Which states these are might vary from time to time. But when functional equilibrium is present, we operate in a balanced, normal way without feeling any particular mood. When moods descend upon us, we are
responding in a sensory way to a departure from the pertinent range of physical states. (129)
Thus, the phenomenal experience of a mood is somehow the representation of a whole new functional equilibrium. But in what sense can we represent our whole functional equilibrium? Tye gives the same answer that he gives for emotion and similar states: the alteration of body states that occurs when we are angry, for example, results in a phenomenal experience of anger because we are undergoing the representation of these states. "[B]odily reactions . . . are registered in sensory receptors, thereby providing the input for the mechanical construction of a complex sensory representation of the pertinent body states" (1995a, 130-131). This is plausible in the case of emotions, both because there are autonomic changes to which we can refer (getting hot when angry, the heart beating more quickly when we are afraid, and so on) and because there are dedicated neural systems that respond to such body states. But we have no reason to posit a neural system dedicated to representing the body's whole functional equilibrium, including the state of the neural system itself. Perhaps what Tye means is that our experience is not a particular representation but rather is the product of the differences in all the representations that happen as a result of the mood. There would be something it is like to be in the mood, but this would not lie in any particular representation; it would have to be a global change in representations. Let us suppose that this is the claim. The problem now is that the positing of representations seems to add nothing to the explanation of the mood experience that we do not already have without representations; it is the global state (including of representations), not a representation of the global state, that is changing.
Besides explaining the intentional nature of phenomenal experiences, what work do representations do for the theory? Representations are events, had or used by one individual, and are intensional. Tye observes that although all and only animals with hearts have kidneys, and although sometimes there is something it is like to have a heart and also sometimes there is something it is like to have a kidney, the experience of having a heart is distinct from the experience of having a kidney. This seems to mirror an intensional context, and it suggests that we can explain this difference as being a product of the representational nature of experience. But Tye's own example of how it can be that there is something it is like to have a kidney refers to kidney pains. Thus, there is only something it is like to have a kidney when one's kidney hurts. This is certainly distinct from being aware of one's heart beating. The neural pathways that are excited in the two cases are different, the brain states that will be caused are distinct. A causal explanation suffices to explain the distinction. Tye also provides two examples that play upon complexities surrounding personal identity. He claims that there is a phenomenal experience typical of his having a back pain, but that there is a different experience associated with being