Philosophers of music (and music theorists) have long recognized that research in the sciences, especially psychology, might bear on their own work. (Langer 1941 and Meyer 1956 are good examples.) However, while scientists have long been interested in music as a subject of research (e.g. Helmholtz 1875; Seashore 1938), the discipline known as psychology of music, or more broadly cognitive science of music, came into its own only around 1980 with the publication of several landmark works. Among the most important of these were The Psychology of Music (1982), a collection of papers edited by the psychologist Diana Deutsch, and A Generative Theory of Tonal Music (1983) by music theorist and composer Fred Lerdahl and linguist Ray Jackendoff. These works and others made possible the first attempts to apply scientific research to philosophical issues concerning music (e.g. Raffman 1993; DeBellis 1995).
Since the 1980s, of course, a great deal of research has been done in cognitive science, philosophy, and music. For philosophers, there are perhaps three topics with respect to which findings in the cognitive sciences are most likely to be germane – the nature of musical understanding, the role of emotions or feelings in music, and the evaluation of musical works. This brief overview will describe some of the scientific research that has been done on these topics, and then indicate how it might be philosophically significant.
In his 1976 Norton Lectures at Harvard, Leonard Bernstein had floated, but not developed, the idea that tonal music might have a grammar analogous to the generative grammar that Noam Chomsky (1965) had proposed for natural language. Lerdahl and Jackendoff (1983) developed Bernstein’s proposal into a detailed set of analytical rules, that is, a grammar, designed to capture an experienced listener’s unconscious mental representation of a musical stimulus. (An experienced listener is familiar with a given idiom, here “classical” tonal music, but has no formal training.) The musical grammar contained metrical and grouping rules governing rhythmic structure, and higher-level time-span and prolongational rules governing certain interactions between rhythm and pitch. Lerdahl and Jackendoff hypothesized that conscious musical experience, characterized by feelings of tension, resolution, stability, and the like, was the result of unconsciously analyzing a musical stimulus – recovering its structure – according to these grammatical rules, much as a speaker–hearer’s conscious understanding of a sentence was supposed to be the result of unconsciously analyzing a linguistic stimulus according to the rules of the linguistic grammar. In fact, Lerdahl and Jackendoff conceive of conscious musical experience as the listener’s understanding of a piece of music. In a subsequent book, Tonal Pitch Space (2001), Lerdahl has expanded upon the pitch component of the musical grammar. Here he proposes that the events of a tonal work are heard (understood) as traversing a path through a multidimensional space defined by the relative distances among pitches, chords, and keys.
In designing their musical grammar, Lerdahl and Jackendoff naturally employed the investigative methods of music theory and linguistics; in particular, they took musical and linguistic intuitions as their evidence. But the idea of a significant link between music and language has also received support from research in psychology and neuroscience. For example, it appears that melodic contexts can influence the perception of speech (e.g. Koelsch et al. 2005; Dilley and McAuley 2008), and harmonic contexts can influence phoneme monitoring (Bigand et al. 2001). Shared structures have been observed in speech prosody and musical melody and rhythm (Patel 2008); and ERP (event-related potential) measures of neural activity reveal that in tasks involving both musical and linguistic syntactic integration, interference occurs between the two processes (Patel et al. 1998). (The same kind of interference shows up in behavior as well; see, for example, Fedorenko et al. 2009.) Musical training appears to facilitate second language learning (Slevc and Miyake 2006); and fMRI (functional magnetic resonance imaging) studies indicate that some musical and linguistic processes activate the same areas of the brain (Tillman et al. 2006). In an overview of the biology and evolution of music, Tecumseh Fitch (2006) concludes that various “design features” of music and language suggest an overlap of the two domains. (See Levitin 2006 and Patel 2008 for sustained defense of this idea.)
Another driving innovation in the cognitive science of music was the introduction, by psychologists Carol Krumhansl and Roger Shepard (1979), of the probe tone test of experienced listeners’ mental representations of tonal pitch structure. In contrast to Lerdahl and Jackendoff, Krumhansl and Shepard were interested in the experienced listener’s standing knowledge (mental representation) of tonal pitch structure, rather than in the understanding of particular pieces. (Presumably some standing or “static” knowledge of tonal pitch structure is mobilized in the understanding of any particular piece.) They wanted to find out whether the pitch relationships postulated by music theory – the circle of fifths, the system of major and minor triads, scales, and keys, etc. – are psychologically real. In each trial in a probe-tone task, the listener hears a brief musical passage, then a short silence, and then one of the twelve chromatic pitches (the probe tone). The listener’s task is to rate how well the probe tone “fits” with the context of the preceding musical passage. This process is typically repeated for each of the twelve chromatic pitches, the idea being that the probe tones allow the researcher to probe the musical representations in the listener’s mind at a given moment. The ratings that emerged from Krumhansl and Shepard’s tests indicated that experienced listeners possess complex hierarchical representations of tonal pitch structure – indeed, a good deal of the fundamental pitch structures recognized by music theorists. This knowledge is what enables listeners to recognize wrong notes in a performance and to produce (sing) the final pitch in an unfamiliar melody when the preceding notes are provided, among other things.
A further significant line of scientific research grew out of the work of music theorist Leonard Meyer (1956), often credited as the first theorist to take account of psychological research. (Meyer was himself influenced by the philosopher Susanne Langer (1941).) Meyer argued that understanding a piece of music involves having certain “undifferentiated feelings” of tension and release in response to it (1956: 18). General features of human perceptual psychology (e.g. gestalt principles of grouping and continuation), together with our knowledge of tonal structure and musical style, engender certain (musical) expectations in us when we listen to music. When our expectations are either violated or fulfilled, we experience a feeling of tension or release, respectively.
Meyer’s views were later formalized by Eugene Narmour (1990) and also developed into psychological theories of musical expectancy. For example, Jamshed Bharucha and Keiko Stoeckig (1986) had subjects perform a series of priming tasks that revealed their harmonic expectancies. On each trial the subject heard a musical passage in a given key. The passage was followed by a single target chord, and the subject’s task was to say whether the target chord was in tune or out of tune. It emerged that subjects were faster in their responses when the target chord was harmonically related to the key of the initial passage, and so was expected, than when it was unrelated. The same result was obtained when the task was to classify the target as major or minor, or to identify its timbre. (Bharucha (e.g. 1987) is one of the few researchers who have reformulated their theories of tonal pitch cognition within the framework of parallel distributed processing, or connectionism.) More recently David Huron (2006) has proposed an elaborate five-stage theory of expectation which he applies to music perception. Echoing Meyer, Huron argues that the fulfillment and violation of musical expectations evoke emotional responses in the listener.
Much of the research described above is grounded in the idea that as we are exposed to performances of tonal music, we abstract or “infer” from those stimuli the basic structures postulated by music theorists. However, in recent years there has emerged a competing, radically empiricist conception of the learning of tonal pitch structure. According to some statistical learning models (e.g. Krumhansl 1990; Huron 2006), our acquisition of knowledge (representations) of tonal pitch structure depends upon merely statistical, rather than structural, properties of the pitch-time events in a musical stimulus. In a review of Krumhansl (1990), Huron writes:
[T]he tonal hierarchy correlates well with the distribution of [pitches] for musical passages; play a pitch often enough, and the tonic will tend to drift towards that pitch. The correlation between the tonal hierarchy and probabilities of various pitches within tonal music are consistently high (average r = 0.88). Krumhansl exploits this fact to develop a remarkably successful yet simple key-finding algorithm. Third, by cross-correlating the distributions for different keys it is possible to generate a spatial representation of interkey distances . . . [When] Krumhansl applies multidimensional scaling to her response data[,] the “circle of fifths” pops right out – showing that this theoretical construct is not simply a fanciful abstraction, but bears real cognitive import.
(1992: 180)
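The key-finding algorithm Huron mentions can be sketched in a few lines: rotate a major and a minor pitch-class profile through the twelve possible tonics, and pick the key whose profile best correlates with how long each pitch class sounds in the passage. The sketch below is only an illustrative reconstruction, not Krumhansl’s implementation; the profile values are the standard Krumhansl–Kessler probe-tone ratings, and the toy input passage is an assumption made up for the example.

```python
from math import sqrt

# Krumhansl–Kessler probe-tone profiles for major and minor keys,
# indexed by scale degree in semitones (0 = tonic).
KK_MAJOR = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88]
KK_MINOR = [6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17]
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def find_key(pc_weights):
    """Return (key name, r) for the rotated profile that best matches the
    total weight (e.g. duration) of each pitch class, indexed 0=C .. 11=B."""
    best = None
    for tonic in range(12):
        for profile, mode in ((KK_MAJOR, "major"), (KK_MINOR, "minor")):
            # rotate the profile so that `tonic` occupies scale degree 1
            rotated = [profile[(pc - tonic) % 12] for pc in range(12)]
            r = pearson(pc_weights, rotated)
            if best is None or r > best[1]:
                best = (f"{NAMES[tonic]} {mode}", r)
    return best

# Hypothetical passage whose note durations are weighted toward the
# C-major scale, heaviest on C, G, and E.
weights = [4.0, 0.1, 2.0, 0.1, 2.5, 2.0, 0.2, 3.5, 0.1, 1.5, 0.2, 1.0]
print(find_key(weights))  # the C major profile correlates best
```

This is the sense in which the algorithm is “simple”: it needs only the distribution of pitch classes, not any structural analysis of the passage.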
Unsurprisingly, the idea that learning pitch structure is statistically based is controversial. Of course the tonic is the pitch occurring most often in a tonal work, critics object, because the work is composed by a mind that represents the tonic as the most important pitch. But mere frequency does not explain how listeners recover the pitch structure of the work: listeners would “find” the tonic even if the tonic were not the most frequent pitch; indeed, even if the tonic did not occur at all. (One could certainly write such a piece.) Jones points out that “adults rely heavily on rhythmic properties to differentiate melodies; they have difficulty identifying a learned melodic sequence if its original rhythm changes, even when temporal segmentations and statistical pitch properties are unchanged” (2010). Indeed, if recovery of pitch structure were statistical, then we ought to be able to recover the structure of any arbitrary pitch system, simply in virtue of the fact that different pitches occur in it with different frequencies; but there is considerable evidence that we cannot, for example, recover twelve-tone pitch structure (Gibson 1995; Krumhansl 1990).
However acquisition works, the idea that tonal pitch structure is psychologically real stands on firm ground. Recent fMRI experiments provide additional confirmation. In their 2002 paper in Science, Petr Janata and his colleagues report the discovery of activation patterns in the cortex corresponding to the relationships among tonal keys. They write:
Western tonal music relies on a formal geometric structure that determines distance relationships within a harmonic or tonal space. In functional magnetic resonance imaging experiments, we identified an area in rostromedial prefrontal cortex that tracks activation in tonal space. Different voxels [i.e. three-dimensional pixels] in this area exhibited selectivity for different keys. Within the same set of consistently activated voxels, the topography of tonal selectivity rearranged itself across scanning sessions. The tonality structure was thus maintained as a dynamic topography in a cortical area known to be at a nexus of cognitive, affective, and mnemonic processing (2167). . . . [W]hat changed between sessions was not the tonality-tracking behavior of these brain areas but rather the region of tonal space (keys) to which they were sensitive. This type of relative representation provides a mechanism by which pieces of music can be transposed from key to key, yet retain their internal pitch relationships and tonal coherence (2169).
What Janata and colleagues found is that each key (C major, C minor, D major, etc.) activates a unique assembly of neurons in the frontal cortex in a given hearing. On another occasion (hearing the same music or different music), that assembly may be activated by a different key, but the relationships among the keys are preserved. (See also Brattico et al. 2006 for relevant findings.)
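The “relative representation” Janata and colleagues describe, and the interkey distances in Huron’s review, can be illustrated with the same probe-tone profiles: cross-correlating the Krumhansl–Kessler major profile rotated to different tonics yields a similarity structure in which neighbours on the circle of fifths are close and tritone-related keys are far apart. This is a sketch under that assumption; the particular keys compared are chosen purely for illustration.

```python
from math import sqrt

# Krumhansl–Kessler major-key probe-tone profile (0 = tonic).
KK_MAJOR = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def major_profile(tonic):
    """The major profile rotated so that `tonic` (0=C .. 11=B) is degree 1."""
    return [KK_MAJOR[(pc - tonic) % 12] for pc in range(12)]

c, g, f_sharp = major_profile(0), major_profile(7), major_profile(6)
r_fifth = pearson(c, g)          # C major vs G major: a fifth apart
r_tritone = pearson(c, f_sharp)  # C major vs F# major: a tritone apart
# Circle-of-fifths neighbours correlate strongly; tritone-related keys
# correlate negatively, which is why multidimensional scaling of such
# correlations recovers the circle of fifths.
print(r_fifth, r_tritone)
```

Because every key uses the same rotated profile, the relationships among keys are preserved no matter which pitch class plays the tonic role, mirroring the transposition-invariance Janata and colleagues emphasize.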
Of particular interest to philosophers of music will be the scientific studies of music and emotion. (See Juslin and Sloboda 2001 for a good overview.) I have already mentioned Meyer’s and Langer’s important work on musical feelings; psychologists have taken their views as a point of departure. Like philosophers of music, psychologists of music disagree as to whether musical emotions are (1) ordinary emotions such as sadness, happiness, and fear, or (2) some sort of thin versions of ordinary emotions, or (3) feelings special to music or to aesthetic experience generally, or (4) more like moods. Obviously there are some non-trivial differences between musical and ordinary emotions, for example with respect to their antecedent causes and behavioral consequences; and musical feelings do not seem to involve any cognitive appraisal, which is required by some theories of (non-musical) emotion. An experiment by Marcel Zentner, Stéphanie Meylan, and Klaus Scherer (2000) suggests that the frequency of some emotions differs as between musical and “ordinary” contexts. For example, their subjects’ (experimental) diaries reflected that nostalgia, awe, and enchantment occurred more often in musical than in ordinary contexts, while the situation was reversed for anger and fear. Also, physiological concomitants of musical emotions coincide only partially with those of ordinary non-musical emotions (Krumhansl 1997).
Until fairly recently most psychological research on musical emotion investigated the “perception” of emotion in, or the expression of emotion by, a musical work, as opposed to the “induction” or evocation of emotion in the listener (e.g. Wedin 1972). (For a helpful overview, see Gabrielsson and Juslin 2002.) In general, happiness and sadness, which are strongly associated with tempo and mode (major vs. minor), are the emotions most consistently said to be expressed by music. Relatively louder music is heard as being relatively more animated, triumphant, and activated, but can also be heard as tense or angry, while relatively softer music sounds more tranquil or melancholy. This bias toward the study of perceived emotion may be explained in part by the fact, noted by philosophers (e.g. Kivy 2001: 147), that perceived emotions often occur in the absence of felt emotions, that is, listeners often attribute an emotion to a piece of music without themselves feeling that emotion; but not, in general, vice versa (Hunter and Schellenberg 2010). Whatever the explanation, psychologists are now devoting more attention to induced musical emotions. Zentner et al. (2000) found that instructing subjects to rate emotions induced by musical stimuli, rather than emotions expressed by those stimuli, produced very different results. Other studies indicate that induced and perceived emotions are correlated (i.e. same music, same emotion whether induced or perceived), but that perceived emotions are rated as stronger than induced or felt emotions (e.g. Evans and Schubert 2008).
Perhaps the most interesting question about music and emotion concerns the relationship between musical feelings (perceived or induced) and specific musical structures. For example, John Sloboda (1991) found that tears accompanied harmonic descent through the cycle of fifths to the tonic; shivers or chills accompanied enharmonic changes, new harmonies, and sudden changes in loudness; and racing pulse went along with repeated syncopation and earlier-than-expected occurrences of important pitch-time events. In a study focusing on listeners’ physiological responses, Krumhansl (1997) had subjects listen to three kinds of musical excerpts: sad (i.e. sadness-expressing) ones, characterized by inter alia slow tempos, minor keys, and relatively constant dynamics; scary ones, characterized by faster and more irregular tempos and dynamic levels; and happy ones, characterized by relatively fast tempos, major keys, and fairly constant dynamics. (Classification of the excerpts as sad, scary, or happy was confirmed by uniform and consistent judgments of the subjects.) It turned out that listening to the sad excerpts was associated with felt sadness and also with (inter alia) decreased pulse rates and increased blood pressure; listening to scary excerpts was associated with felt fear and also with increased breathing rates and decreased finger temperature; and listening to the happy excerpts was associated with felt happiness and also with decreased respiration depth (“shallower” breathing). Krumhansl writes:
These results suggest that musical emotions are reflected in psychophysiological measures . . . These psychophysiological changes are behavioural indicators that listeners experience emotions when listening to music. Not only do listeners verbally report emotional responses to music with considerable consistency, music also produces physiological changes that correspond with the type of musical emotion.
(1997: 350–1)
In related research, listeners have been shown generally to prefer musical consonance to dissonance and happy-sounding music to sad-sounding; however, the appeal of sad-sounding music increases when listeners feel tired or sad (Hunter and Schellenberg 2010). This last finding may help to explain why listeners often enjoy listening to sad music – otherwise a puzzle for the view that music induces emotions.
The experiments described above are only a tiny sample. Virtually nothing has been said here about the scientific research on musical performance, composition, or improvisation, on the role of rhythm, meter, and timbre in music perception, or on musical deficits such as amusia, to name a few; and aspects of all of these may be relevant to philosophy. (Popper et al. 2010, Hallam et al. 2009, and Peretz and Zatorre 2003 provide excellent surveys of the scientific literature.) That said, let us now look briefly at some philosophical implications.
Philosophers have advanced a variety of views about the nature of musical understanding; for example, that it consists in feeling certain emotions (Davies 1994), or in imagining that we are feeling certain emotions (Walton 1990), or in recognizing the musical expression of certain emotions (Kivy 2001). And while any of these accounts may be partly correct, the apparent psychological reality of detailed tonal structure, and its importance in determining the character of music perception, suggest that grasp of tonal structure must play a central role. (As we saw, Lerdahl and Jackendoff define musical understanding in terms of the recovery of musical structure.)
In particular, research in music cognition lends support to the idea that understanding a piece of music involves the representation of movement through a tonal space (e.g. Lerdahl 2001). Philosophers have argued over whether talk of movement in a space is metaphorical when applied to music, and if so, whether the metaphor can be replaced by purely musical terminology (e.g. Budd 1985; Scruton 1983; Kania 2007). Malcolm Budd is surely right that the spatial terminology can be replaced, but this does not mean that musical movement is wholly non-spatial. The scientists’ thought is that the tonal pitch relationships in a musical work are isomorphic to, and hence can be theoretically modeled and psychologically represented as, certain spatial relationships. There is no obvious reason why such a representation must be metaphorical. At the very least, there is no obvious reason why talk of musical movement must be metaphorical. For one thing, surely there is a perfectly literal sense of the word “move” in which it means something like “develop” or “proceed” or “progress” or “grow.” It is hardly coincidental that music theorists use the term “progressions” to refer to transitions among harmonies, or that they characterize fast (slow) changes of harmony as fast (slow) harmonic motion. For another thing, musical motion may be a kind of apparent motion, rather like the apparent motion we experience when looking at a row of lights that flash serially in quick succession. Nothing moves; rather, it appears as if something (a light?) moves.
The observed commonalities between musical and linguistic structure, processes, and neural mechanisms suggest that the understanding of music is also importantly analogous to the understanding of language. One possible view is that, in music as in language, understanding is the result of grammar-driven operations defined over acoustic stimuli; in other words, understanding consists in the grasp of musical or linguistic structure. In the musical case, that grasp of structure is consciously experienced as certain specifically musical feelings of tension, stability, resolution, and so forth. The idea that having an ordinary emotion or mood, or even a weak version of one, could constitute musical understanding suffers from the fact that however closely such emotions are correlated with, or even caused by, musical events, they do not possess the requisite normativity. In most cases it is hard to see what could justify the claim that a listener (a fortiori a performer or composer) has made a mistake, has misunderstood the music, in virtue of feeling or failing to feel a certain mood or emotion in response to it, or in virtue of hearing or failing to hear a certain passage as expressing a certain emotion. In contrast, musical feelings of tension and stability and the rest, which result from the recovery of tonal structure, do possess the requisite normativity. If a listener hears an authentic cadence or a 4–3 suspension as increasing in tension or instability, a fortiori if she identifies an authentic cadence as (e.g.) a deceptive one, she is mistaken. An authentic cadence just is, in part, a progression from instability to stability (see Raffman 1993: 37–56 for elaboration).
According to the so-called cognitivist view of musical emotion, endorsed notably by Peter Kivy (1990), listeners recognize the expression of emotions by musical works but do not typically feel those emotions themselves; in the scientific terminology used above, musical emotions are perceived but not induced or felt. In support of this view, Kivy claims that “there are no behavioral symptoms of listeners actually experiencing [emotions] when attending to music” (1990: 151). The psychological, physiological, and neuroscientific research described above suggests otherwise. Listeners are able to make uniform and consistent reports (verbal behaviors) of the emotions they experience in listening to music; they undergo uniform and consistent physiological changes while listening to music; and fMRI studies suggest that the mental representation of tonal pitch structure is underwritten by parts of the cortex that are implicated in affective experience.
The psychological findings concerning our preferences for consonance over dissonance and for happy music over sad may well have implications for the evaluation of musical works. Also, generally speaking, the artistic merit of a work must depend at least in part upon its comprehensibility: it is difficult to see how a (humanly) incomprehensible work could be a great work. The latter point raises a question about the evaluation of atonal, specifically twelve-tone or serial, pieces of music. As indicated above, research on pitch perception has revealed that even expert listeners are probably not able to recover serial pitch structures to any significant extent. Lerdahl (1988) has suggested that serial pitch structure, which is not hierarchical, does not provide a good “ecological fit” with human perceptual and cognitive systems, and so is difficult or even impossible for us to recover (understand) aurally. Consequently, if musical understanding essentially involves grasp of the structure of a work, a question may arise about the artistic merit of twelve-tone pieces (Cavell 1976; Taruskin 1996; Tymoczko 2000; Raffman 2003; Levitin 2006).
See also Analysis (Chapter 48), Arousal theories (Chapter 20), Evaluating music (Chapter 16), Music and language (Chapter 10), Music’s arousal of emotions (Chapter 22), Psychology of music (Chapter 55), Resemblance theories (Chapter 21), and Understanding music (Chapter 12).
Bernstein, L. (1976) The Unanswered Question: Six Talks at Harvard by Leonard Bernstein, Cambridge: Harvard University Press.
Bharucha, J. (1987) “Music Cognition and Perceptual Facilitation: A Connectionist Framework,” Music Perception 5: 1–30.
Bharucha, J., and Stoeckig, K. (1986) “Reaction Time and Musical Expectancy: Priming of Chords,” Journal of Experimental Psychology: Human Perception and Performance 12: 403–10.
Bigand, E., Tillman, B., Poulin, B., and D’Adamo, D.A. (2001) “The Effect of Harmonic Context on Phoneme Monitoring in Vocal Music,” Cognition 81: B11–B20.
Brattico, E., Tervaniemi, M., Näätänen, R., and Peretz, I. (2006) “Musical Scale Properties are Automatically Processed in the Human Auditory Cortex,” Brain Research 11: 162–74.
Budd, M. (1985) “Understanding Music,” Proceedings of the Aristotelian Society, supp. vol. 59: 233–48.
Cavell, S. (1976) “Music Discomposed,” in Must We Mean What We Say? A Book of Essays, Cambridge: Cambridge University Press, pp. 180–212.
Chomsky, N. (1965) Aspects of the Theory of Syntax, Cambridge: MIT Press.
Davies, S. (1994) Musical Meaning and Expression, Ithaca: Cornell University Press.
DeBellis, M. (1995) Music and Conceptualization, Cambridge: Cambridge University Press.
Deutsch, D. (ed.) (1982) The Psychology of Music, San Diego: Academic Press.
Dilley, L.C., and McAuley, J.D. (2008) “Distal Prosodic Context Affects Word Segmentation and Lexical Processing,” Journal of Memory and Language 59: 294–311.
Evans, P., and Schubert, E. (2008) “Relationships between Expressed and Felt Emotions in Music,” Musicae Scientiae 12: 75–99.
Fitch, W.T. (2006) “The Biology and Evolution of Music: A Comparative Perspective,” Cognition 100: 173–215.
Fedorenko, E., Patel, A.D., Casasanto, D., Winawer, J., and Gibson, E. (2009) “Structural Integration in Language and Music: Evidence for a Shared System,” Memory & Cognition 37: 1–9.
Gabrielsson, A., and Juslin, P. (2002) “Emotional Expression in Music,” in R.J. Davidson, K.R. Scherer, and H.H. Goldsmith (eds) Handbook of Affective Sciences, Oxford: Oxford University Press, pp. 503–34.
Gibson, D. (1995) “Theoretical Assumptions and Aural Experiences in the Pitch-Class Set Domain,” Music Theory Explorations and Applications 4: 17–25.
Hallam, S., Cross, I., and Thaut, M. (eds) (2009) The Oxford Handbook of Music Psychology, Oxford: Oxford University Press.
Helmholtz, H. von (1875) On the Sensations of Tone as a Physiological Basis for the Theory of Music, trans. A. Ellis, London: Longmans (reprint 1954, New York: Dover).
Hunter, P.G., and Schellenberg, E.G. (2010) “Music and Emotion,” in Popper, Jones, and Fay, pp. 129–64.
Huron, D. (1992) Review of Carol L. Krumhansl, Cognitive Foundations of Musical Pitch, Psychology of Music 20: 180–85.
—— (2006) Sweet Anticipation: Music and the Psychology of Expectation, Cambridge: MIT Press.
Janata, P., Birk, J., Van Horn, J.D., Leman, M., Tillman, B., and Bharucha, J. (2002) “The Cortical Topography of Tonal Structures Underlying Western Music,” Science 298: 2167–70.
Jones, M.R. (2010) “Music Perception: Current Research and Future Directions” in Popper, Jones, and Fay, pp. 1–12.
Juslin, P., and Sloboda, J. (eds) (2001) Music and Emotion: Theory and Research, New York: Oxford University Press.
Kania, A. (2007) “The Philosophy of Music,” in E.N. Zalta (ed.) The Stanford Encyclopedia of Philosophy, available at http://plato.stanford.edu/entries/music/.
Kivy, P. (1990) Music Alone: Philosophical Reflections on the Purely Musical Experience, Ithaca: Cornell University Press.
—— (2001) New Essays on Musical Understanding, Oxford: Clarendon Press.
Koelsch, S., Gunter, T.C., and Sammler, D. (2005) “Interaction between Processing in Language and in Music: An ERP Study,” Journal of Cognitive Neuroscience 17: 1565–79.
Krumhansl, C. (1990) Cognitive Foundations of Musical Pitch, Oxford: Oxford University Press.
—— (1997) “An Exploratory Study of Musical Emotions and Psychophysiology,” Canadian Journal of Experimental Psychology 51: 336–52.
Krumhansl, C., and Shepard, R.N. (1979) “Quantification of the Hierarchy of Tonal Functions within a Diatonic Context,” Journal of Experimental Psychology: Human Perception and Performance 5: 579–94.
Langer, S. (1941) Philosophy in a New Key: A Study in the Symbolism of Reason, Rite and Art, Cambridge: Harvard University Press.
Lerdahl, F. (1988) “Cognitive Constraints on Compositional Systems,” in J. Sloboda (ed.) Generative Processes in Music, Oxford: Oxford University Press, pp. 231–59.
—— (2001) Tonal Pitch Space, Oxford: Oxford University Press.
Lerdahl, F., and Jackendoff, R. (1983) A Generative Theory of Tonal Music, Cambridge: MIT Press.
Levitin, D.J. (2006) This Is Your Brain on Music: The Science of a Human Obsession, New York: Dutton.
Meyer, L.B. (1956) Emotion and Meaning in Music, Chicago: University of Chicago Press.
Narmour, E. (1990) The Analysis and Cognition of Basic Melodic Structures: The Implication-Realization Model, Chicago: University of Chicago Press.
Patel, A. (2008) Music, Language, and the Brain, Oxford: Oxford University Press.
Patel, A.D., Gibson, E., Ratner, J., Besson, M., and Holcomb, P. (1998) “Processing Syntactic Relations in Language and Music: An Event-Related Potential Study,” Journal of Cognitive Neuroscience 10: 717–33.
Peretz, I., and Zatorre, R. (eds) (2003) The Cognitive Neuroscience of Music, New York: Oxford University Press.
Popper, A., Jones, M.R., and Fay, R. (eds) (2010) Springer Handbook of Auditory Research (Vol. 36): Music Perception, New York: Springer.
Raffman, D. (1993) Language, Music, and Mind, Cambridge: MIT Press.
—— (2003) “Is Twelve-Tone Music Artistically Defective?” Midwest Studies in Philosophy, 27: 69–87.
Scruton, R. (1983) “Understanding Music” in The Aesthetic Understanding: Essays in the Philosophy of Art and Culture, London: Methuen, pp. 77–100.
Seashore, C. (1938) Psychology of Music, New York: McGraw-Hill.
Slevc, L.R., and Miyake, A. (2006) “Individual Differences in Second Language Proficiency: Does Musical Ability Matter?” Psychological Science 17: 675–81.
Sloboda, J. (1991) “Musical Structure and Emotional Response: Some Empirical Findings,” Psychology of Music 19: 110–20.
Taruskin, R. (1996) “How Talented Composers Become Useless,” The New York Times, March 10, H31.
Tillman, B., Koelsch, S., Escoffier, N., Bigand, E., Lalitte, P., Friederici, A.D., and von Cramon, D.Y. (2006) “Cognitive Priming in Sung and Instrumental Music: Activation of Inferior Frontal Cortex,” NeuroImage 31: 1771–82.
Tymoczko, D. (2000) “The Sound of Philosophy: The Musical Ideas of Milton Babbitt and John Cage,” The Boston Review, October/November, available at http://bostonreview.net/BR25.5/tymoczko.html.
Walton, K. (1990) Mimesis as Make-Believe: On the Foundations of the Representational Arts, Cambridge: Harvard University Press.
Wedin, L. (1972) “A Multidimensional Study of Perceptual-Emotional Qualities in Music,” Scandinavian Journal of Psychology 13: 241–57.
Zentner, M.R., Meylan, S., and Scherer, K.R. (2000) “Exploring Musical Emotions across Five Genres of Music,” presentation at 6th International Conference of the Society for Music Perception and Cognition, August 5–10, Keele, UK.