Edward W. Large
What is music and why study the “cognition” of it? It is often claimed that music is universal, in the sense that all known human cultures practice some form of music. It is also commonly argued that music is unique to the human species. Therefore, like language, music displays two fundamental features that identify a high-level cognitive capacity (see, e.g., Fitch, 2006; Patel, 2008). But music is also a physical process that takes place at an ecological scale. Musicians’ bodies interact with instruments to create vibrations that travel through the air (see Bishop & Goebl, this volume). Once sounds reach a person’s ear, they cause activation in the peripheral auditory system and the brain. Some sound sequences lead to visceral and emotional responses and forms of coordinated movement that we recognize as uniquely musical (Iyer, 2002; Leman & Maes, 2015). Thus, music may be a cognitive capacity, but it is one that is embodied to an extent that seems qualitatively different from, say, writing an essay or solving a math problem.
The answer to the question of what music actually is, however, is not straightforward. Although often approached from the point of view of sound, it is not possible to adequately define music based solely on sound patterns (Cross, 2003). Music intrinsically involves qualitative experiences of the brain and body, and its meaning depends on the social and cultural context that relates sound to experience (Iyer, 2002). Thus, musicality “is a property of communities rather than of individuals; and music is mutable in its specific significances or meanings” (Cross, 2003, p. 79). Nonetheless, people can easily recognize the musical sounds of their native culture, and the study of musical sound is one of the oldest topics in Western science (Hunt, 1992).
The study of musical sound in the West began with Pythagoras (ca. 570–497 B.C.) over twenty-five hundred years ago. The Pythagoreans were mystics who believed in secrecy and left virtually no writings, so what we know about their discoveries comes from followers writing after Pythagoras’s death. However, historians generally attribute several important discoveries and inventions to Pythagoras (Hunt, 1992). Pythagoras discovered an inverse relationship between the length of a vibrating string and the pitch it produced. He may have invented the first experimental apparatus described in Western science, the monochord, to carry out pitch experiments. Most importantly, he observed that the perfect consonances¹—the octave, fifth and fourth—correspond to pitches tuned in small integer frequency ratios—2:1, 3:2, and 4:3, respectively. Pythagoras and his successors invented small integer ratio systems for tuning musical instruments, the first known technological innovation in Western musical history.
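The arithmetic is easy to make concrete. A minimal sketch follows (the reference pitch of 220 Hz is an arbitrary assumption; Pythagoras, of course, worked with string lengths rather than frequencies):

```python
# Frequencies of the perfect consonances above an arbitrary reference
# pitch, using the small integer ratios attributed to Pythagoras.
from fractions import Fraction

reference_hz = 220.0  # assumed reference pitch; any value works

ratios = {"octave": Fraction(2, 1),
          "fifth": Fraction(3, 2),
          "fourth": Fraction(4, 3)}

for name, ratio in ratios.items():
    print(f"{name:>6}: {ratio} -> {float(ratio) * reference_hz:.2f} Hz")
```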
Pythagorean ideas about tuning and arithmetic ratios dominated the study of musical sound for over two thousand years. However, Pythagoreans lost faith in perceptual experimentation, preferring to interpret phenomena such as musical consonance and dissonance as manifestations of pure mathematics (Hunt, 1992). Later philosophers, including Plato, Aristotle, and Aristoxenus, emphasized the importance not only of perception, but also of emotional and physical experiences of music (Gracyk & Kania, 2011). However, the scientific study of the role of the brain and body in musical experience would have to wait for the advent of modern empirical methods.
Music—like all sound—is made up of waves. Complex sound waves, such as those made by vibrating strings or human voices, generally consist of a fundamental frequency, plus a number of other frequencies called harmonics, or overtones (see, e.g., Pierce, 1983; Goodchild & McAdams, this volume). Individual notes and chords occur in repeating patterns such as melodies, rhythms, and phrases. Musical sound patterns extend over many timescales, from the timescales of pitch and pitch combinations (from tens up to thousands of cycles per second, cps) to the timescales of tonality, rhythm, and form (from 8 or 10 cps down to fractions of one cps). Music engages the brain and body over this entire range through a phenomenon called entrainment, or synchronization, in which active, nonlinear processes in the nervous system resonate to quasi-periodic temporal structures such as individual harmonics, complex pitches, beats, measures, phrases, and so on (Large, 2010).
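This harmonic structure can be illustrated with a short synthesis sketch (the fundamental, the number of harmonics, and the 1/k amplitude rolloff below are illustrative assumptions, not claims about any particular instrument):

```python
# A complex tone as a fundamental plus harmonics at integer multiples,
# each harmonic weighted by an assumed 1/k amplitude rolloff.
import numpy as np

sample_rate = 44100                       # samples per second
f0 = 220.0                                # fundamental frequency (cps)
t = np.arange(sample_rate) / sample_rate  # one second of time points

tone = sum((1.0 / k) * np.sin(2 * np.pi * k * f0 * t) for k in range(1, 6))
tone /= np.abs(tone).max()                # normalize to [-1, 1]
```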
Sound waves travel through the air and can be heard when they reach a person’s ear. Sounds vibrate the structures of the outer and middle ear, which in turn cause vibrations in the fluid-filled, spiral-shaped hearing organ called the cochlea (see, e.g., Schnupp, Nelken, & King, 2011). Helmholtz (1885/1954) originally proposed that the cochlea decomposes complex sounds into individual frequency components. This hypothesis was confirmed by von Békésy (1960), who demonstrated that the cochlea carries out a frequency analysis of sound, resonating to different frequencies at different places along a structure called the basilar membrane. It soon came to be understood that cochlear resonance is not passive, like the reception of a microphone. Active cochlear processes amplify soft sounds, compress loud sounds (Eguíluz, Ospeck, Choe, Hudspeth, & Magnasco, 2000), and respond to signals from the brain that can tune the system depending on incoming sounds (Cooper & Guinan, 2006).
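The functional consequence of this active nonlinearity can be sketched with a back-of-the-envelope calculation: for an oscillator poised at a Hopf instability, the kind of system Eguíluz et al. (2000) analyzed, the resonant response grows as the cube root of the forcing, so gain is enormous for soft sounds and modest for loud ones. The following is a schematic illustration of that scaling law, not a cochlear model:

```python
# Compressive response of a critical (Hopf) oscillator at resonance:
# the steady-state amplitude r satisfies r**3 = F, so r = F**(1/3).
# Soft sounds are amplified far more than loud ones.
import numpy as np

forcing = np.logspace(-6, 0, 7)       # stimulus amplitudes over six decades
response = forcing ** (1.0 / 3.0)     # cube-root compression at criticality

for F, r in zip(forcing, response):
    print(f"F = {F:9.1e} -> r = {r:5.3f} (gain {r / F:9.1e})")
```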
Auditory nerve fibers carry this signal to the brain in synchronized volleys of action potentials, fast electrical events that lead to the release of neurotransmitters. These volleys reflect the temporal structure of the sounds faithfully up to frequencies of several thousand cps (Langner, 1992), approximately the upper limit of pitch perception (Burns, 1999). In the auditory brain, time-locked signals interact with the intrinsic temporal dynamics of different types of neurons and networks to give rise to structured patterns of activity in multiple subcortical and cortical auditory areas (see Loui & Przysinda, this volume). Sound first reaches the auditory brainstem at the cochlear nucleus and travels up the subcortical pathway to the thalamus and then to primary and secondary auditory cortex. Brainstem responses to sound can be recorded non-invasively in humans using electroencephalography (EEG). EEG signals are synchronized up to one thousand cps or so (for a tutorial review, see Skoe & Kraus, 2010). Auditory brainstem processing is important in the perception of pitch (e.g., Wile & Balaban, 2007) and the perception of consonance and dissonance (e.g., Lee, Skoe, Kraus, & Ashley, 2009).
As signals ascend the auditory pathway, synchronization to simple tones deteriorates at higher frequencies (Langner, 1992). However, in higher auditory areas spatial maps of frequency are preserved (Formisano et al., 2003) and synchronization to amplitude modulation emerges (Joris, Schreiner, & Rees, 2004). The auditory cortex is primarily sensitive to the amplitude modulations of sound at slower frequencies (less than about 10 cps), which is to say events such as musical notes. People synchronize rhythmic movements such as hand clapping, finger tapping, dancing, or swaying to the rhythms created by musical events (Iyer, 2002; Henry & Grahn, this volume). Interestingly, the cortex produces its own intrinsic rhythms, called oscillations (Buzsáki, 2006), and intrinsic neural oscillations synchronize with musical rhythms (e.g., Fujioka, Trainor, Large, & Ross, 2012; Henry & Grahn, this volume).
Although the perception of pitch and consonance are among the oldest topics in Western science, they are still areas of active investigation. One fundamental issue in pitch perception concerns the “missing fundamental.” If the energy at the fundamental frequency is removed from a complex periodic sound, the perceived pitch remains unchanged (Seebeck, 1841). Helmholtz (1885/1954) proposed that a physical component at the missing fundamental frequency could be generated by nonlinear cochlear processes, while Seebeck (1841) favored a periodicity detection theory. Eventually, it became clear that pitch depends on central auditory processes. Recent theoretical explanations include autocorrelation (Cariani & Delgutte, 1996) and synchronization of oscillatory neurons (Meddis & O’Mard, 2006). However, complex pitch perception is still a matter of active debate among theorists (see Plack, Fay, Oxenham, & Popper, 2005).
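The phenomenon, and the periodicity-detection intuition behind autocorrelation accounts, is easy to demonstrate. The sketch below (a toy signal analysis, not a model of the auditory nerve) builds a tone from harmonics 2 through 5 of 100 cps, with no energy at 100 cps itself, and recovers the 10 ms period of the absent fundamental:

```python
# The "missing fundamental": a tone containing harmonics 2-5 of 100 cps,
# with no energy at 100 cps. An autocorrelation analysis still finds
# the 10 ms period of the absent fundamental.
import numpy as np

sample_rate = 16000
t = np.arange(int(0.2 * sample_rate)) / sample_rate
signal = sum(np.sin(2 * np.pi * k * 100.0 * t) for k in range(2, 6))

ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
lag_min = sample_rate // 1000                 # ignore lags shorter than 1 ms
best_lag = lag_min + int(np.argmax(ac[lag_min:]))
print(f"estimated pitch: {sample_rate / best_lag:.1f} cps")  # ~100.0
```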
Regarding consonance and dissonance, in the eighteenth century the mathematician Leonhard Euler hypothesized that the mind directly perceives and aesthetically appreciates the purity of simple integer ratios, a psychological version of Pythagorean ideas (see Helmholtz, 1885/1954 for a discussion). Helmholtz strongly critiqued Euler’s approach, and pointed out that mathematical purity could not explain the perception of consonance in equal-tempered tuning systems, where pure integer ratios are approximated by irrational numbers. Instead, he proposed that as the cochlea analyzes complex sounds, neighboring frequencies on the basilar membrane interfere with one another and produce a sensation of roughness, which he equated with dissonance. Because tones tuned in small integer ratios have more harmonics in common, fewer harmonics interfere, yielding a smoother, more consonant sound (e.g., Kameoka & Kuriyagawa, 1969). However, as with pitch, recent perceptual studies have implicated the central auditory system in consonance perception (Cousineau, McDermott, & Peretz, 2012). Two mechanisms, harmonicity and synchronization of neural oscillations, have been proposed (Tramo, Cariani, Delgutte, & Braida, 2001). Both theories relate consonance and dissonance to simple integer frequency ratios, and both have received support from auditory brainstem studies (e.g., Lee et al., 2009; Lerud, Almonte, Kim, & Large, 2014). Thus, fundamental neural processes do seem to prefer simple integer ratios.
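Helmholtz’s interference account can be captured in a simple calculation: sum the pairwise interference between the partials of two complex tones. The sketch below is illustrative only; the parameter values follow Sethares’ (1993) fit to the Plomp-Levelt dissonance curve, a later formalization in the same spirit as Kameoka and Kuriyagawa (1969), and the 1/k harmonic amplitudes are an assumption.

```python
# A simplified roughness estimate in the spirit of Helmholtz's theory:
# sum pairwise interference between the partials of two complex tones.
import itertools
import numpy as np

def dissonance(f0_low, ratio, n_harmonics=6):
    """Total pairwise roughness of two complex tones at a frequency ratio."""
    partials = [(k * f0_low, 1.0 / k) for k in range(1, n_harmonics + 1)]
    partials += [(k * f0_low * ratio, 1.0 / k) for k in range(1, n_harmonics + 1)]
    total = 0.0
    for (f1, a1), (f2, a2) in itertools.combinations(partials, 2):
        f_min, df = min(f1, f2), abs(f1 - f2)
        s = 0.24 / (0.021 * f_min + 19.0)   # critical-bandwidth scaling
        total += a1 * a2 * (np.exp(-3.5 * s * df) - np.exp(-5.75 * s * df))
    return total

for name, ratio in [("unison 1:1", 1.0), ("fifth 3:2", 1.5),
                    ("fourth 4:3", 4 / 3), ("tritone", 2 ** 0.5)]:
    print(f"{name:>11}: roughness = {dissonance(261.63, ratio):.3f}")
```

Run on harmonic tones, this kind of model yields lower roughness for the fifth and fourth than for the tritone, in line with Helmholtz’s argument.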
Musical sounds do not consist merely of isolated pitches and intervals; they are complex sequences of sound events. A number of authors have suggested that the complexity of musical sound sequences implies the existence of a set of rules that govern musical structure, parallel to the rules of language (Bernstein, 1976; Lerdahl & Jackendoff, 1983; see Besson, Barbaroux, & Dittenger, this volume). Patel (2008) hypothesized that both language and music are processed syntactically, and these computations exploit the same neural resources. There is a great deal of evidence to suggest that the processing of music and language overlaps in terms of functional activation of brain regions (e.g., Koelsch, Gunter, Cramon, & Zysset, 2002; Tillmann, Janata, & Bharucha, 2003; see also Jantzen, Large, & Magne, 2016). However, the spatial overlap of functional activation does not necessarily imply that music is processed computationally, as hypothesized by linguistic theory (e.g., Lerdahl & Jackendoff, 1983).
Iyer (2002) points out that linguistic paradigms treat neural and cognitive processes— including musical ones—as computations on abstract symbols. In other words, the brain transforms musical sounds into symbols whose meaning is unrelated to their sound. The comprehension of musical structure then depends on rule-based syntactic computation (Patel, 2008). However, with its deep connection to dance, movement, and other functions, much musical behavior appears to be essentially nonlinguistic in nature (Iyer, 2002; Leman & Maes, 2015; see chapters by Lamont, Tan, and Ashley [Communication], this volume). Important musical experiences, such as groove (Janata, Tomic, & Haberman, 2012), have no analogue in linguistic theory. Music has significant emotional and associative qualities as well (Burger, Saarikallio, Luck, Thompson, & Toiviainen, 2013), seeming to challenge traditional computational theories of mind. As Cross puts it, “music embodies, entrains and transposably intentionalises time in sound and action” (2003, p. 79). It has been argued that embodied (Rosch, Thompson, & Varela, 1992) and ecological (Gibson, 1966) approaches are required to understand these aspects of musical experience (Iyer, 2002). One interesting question is whether such approaches can handle the complexity of musical structure as well.
Two of the most universal and widely studied characteristics of musical structure are tonality and metricality. In its most general form, tonality is the means by which music creates feelings of tension and resolution (Lerdahl, 2001). In Western and non-Western tonal music, certain pitches are felt as more stable than others, providing a sense of completion. Less stable pitches are felt as points of tension; they function relative to the more stable ones and are heard to point toward or be attracted to them (Bharucha, 1984; Lerdahl, 2001). Stability and attraction are the properties that differentiate musical sound sequences from arbitrary sound sequences and are thought to enable simple sound patterns to carry meaning (Meyer, 1956). These musical qualia provide the fundamental elements of musical structure (Huron, 2006), leading to the feeling that some sound sequences make sense while others do not (e.g., Lerdahl & Jackendoff, 1983). The central question is what processes in the nervous system give rise to percepts of tension, resolution, and ultimately of musical pattern.
Metricality is the way in which musical rhythms create feelings of temporal regularity, directing our expectations for when future musical events will occur. Groove is a related feeling of wanting to move and synchronize with the rhythm of the music (Janata, Tomic, & Haberman, 2012). In many Western and non-Western musical styles a perception of periodic or quasi-periodic beats emerges (Clayton, Sager, & Will, 2004), and meter refers to a perceived pattern of stronger and weaker beats (Cooper & Meyer, 1960; Lerdahl & Jackendoff, 1983). The main beat, or pulse, of a rhythm is often defined as the frequency at which a listener will synchronize a periodic movement, such as toe tapping or hand clapping, with a rhythm (see Large, 2008, and chapters by Henry & Grahn and Martens & Benadon, this volume). Pulse is mostly regular, but can exhibit temporally irregular patterns as well (London, 2004). The meaning of individual musical events is impacted by their timing in relation to this emergent pattern of metrical accent (Cooper & Meyer, 1960). Here we must also ask what processes in the nervous system give rise to these percepts.
One way in which music is similar to language is that different musical “languages” develop depending upon cultural and stylistic environments. Syntactic computation approaches have posited musical grammars, analogous to linguistic grammars, that successfully describe various aspects of basic musical structure (Lerdahl & Jackendoff, 1983; Temperley, 2001). However, such approaches either explicitly assume innate mechanisms (Bernstein, 1976) or simply decline to address the question of knowledge acquisition (Lerdahl & Jackendoff, 1983). Perhaps more importantly, even for language there exists an “ontological incommensurability” between the abstract symbols and rule-based computations of linguistic theories on the one hand, and observations of action potentials, synaptic plasticity, oscillations, and synchronization in the physical brain, on the other (Poeppel & Embick, 2005). Thus, questions of learning and neural “implementation” loom large for syntactic approaches.
It has been known for some time now that tonal perceptions such as stability and attraction correlate strongly with the statistics of tonal sequences (Krumhansl, 1990). Listeners develop sensitivity to the statistical and sequential structures consistent with their musical experience (Castellano, Bharucha, & Krumhansl, 1984; Loui, Wessel, & Kam, 2010). Metrical percepts also correlate with the statistics of rhythmic contexts (Palmer & Krumhansl, 1990) and depend on learning (Hannon & Johnson, 2005). Both tonal cognition and rhythmic abilities develop over the first years of life (Kirschner & Tomasello, 2009; Trainor & Trehub, 1994; see Trehub & Weiss, this volume). Information-processing theories have interpreted such evidence to mean that musical structure is internalized based on a form of statistical learning in which notes are treated as abstract symbols (Krumhansl, 2000; Pearce & Wiggins, 2012; Temperley, 2007).
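Such statistical accounts are often made computationally explicit with key profiles. As a minimal sketch (major keys only; the melody is illustrative, and the profile values are Krumhansl’s (1990) C-major probe-tone ratings), a melody’s pitch-class distribution can be correlated with the profile rotated to each key, the essence of the Krumhansl-Schmuckler key-finding approach:

```python
# Correlate a melody's pitch-class distribution with a C-major
# probe-tone profile rotated to each of the 12 major keys.
import numpy as np

MAJOR_PROFILE = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                          2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def best_major_key(pitch_classes):
    counts = np.bincount(np.asarray(pitch_classes) % 12, minlength=12)
    scores = [np.corrcoef(counts, np.roll(MAJOR_PROFILE, k))[0, 1]
              for k in range(12)]
    return NAMES[int(np.argmax(scores))], max(scores)

# Opening of "Twinkle, Twinkle" in C major: C C G G A A G F F E E D D C
melody = [0, 0, 7, 7, 9, 9, 7, 5, 5, 4, 4, 2, 2, 0]
print(best_major_key(melody))   # expected: ('C', ...)
```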
However, an abstract-symbols and learning-only approach raises some perplexing questions. If notes are abstract symbols, why are small integer ratio tuning systems so pervasive across the world and so stable over time (Burns, 1999)? How did the statistics of tonal music arise in the first place, such that they appear to favor small integer frequency ratios across cultures (Large, Kim, Flaig, Bharucha, & Krumhansl, 2016)? Why are simple metric ratios found across the rhythms of the world (Savage, Brown, Sakai, & Currie, 2015)? In fact, why do so many musical features appear to be shared across the world’s cultures (Savage et al., 2015)?
What seems to be needed is a theory that takes into account both the intrinsic dynamics of the brain and body and the influences of culture-specific experience. The dynamical systems approach (Large, 2010) encompasses the intrinsic dynamics of movement coordination (Kelso, deGuzman, & Holroyd, 1990; Loehr, Large, & Palmer, 2011; Treffner & Turvey, 1993) and the intrinsic dynamic patterns that play out in the neurons and networks of the brain (Large, 2010; Large & Snyder, 2009). The theory incorporates neural plasticity, which can, in principle, explain learning of different styles (Large, Kim, Flaig, Bharucha, & Krumhansl, 2016). One key prediction is that small integer frequency ratios produce more stable resonances (Large, 2010), at both tonal and rhythmic timescales (see, e.g., Razdan & Patel, 2016). Although it is a relatively new approach, it has proven sufficiently powerful to explain fundamental aspects of musical structure perception (Large, Herrera, & Velasco, 2015; Large et al., 2016). Importantly, unlike linguistic theory, it makes predictions about physical processes, such as rhythmic synchronization, that can be directly observed in the brain and body (Large, Fink, & Kelso, 2002).
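The core prediction of entrainment is easy to convey in miniature: a phase oscillator coupled to a periodic stimulus locks to it whenever the coupling exceeds the frequency mismatch. The sketch below is a textbook forced phase oscillator, not the full canonical model used in this line of work, and all parameter values are assumptions:

```python
# A forced phase oscillator entraining to a periodic stimulus:
# dphi/dt = omega + K * sin(theta(t) - phi). When K exceeds the
# frequency mismatch, the oscillator phase-locks at a constant lag.
import numpy as np

dt = 0.001                    # integration step (s)
f_osc, f_stim = 1.9, 2.0      # oscillator / stimulus frequencies (cps)
K = 2.0                       # coupling strength (rad/s)

phi = 0.0
for n in range(int(10.0 / dt)):              # 10 seconds of Euler steps
    theta = 2 * np.pi * f_stim * n * dt      # stimulus phase
    phi += (2 * np.pi * f_osc + K * np.sin(theta - phi)) * dt

rel = (theta - phi + np.pi) % (2 * np.pi) - np.pi
print(f"relative phase after 10 s: {rel:.2f} rad")  # ~arcsin(0.2*pi/K) = 0.32
```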
The fundamental questions posed by musical experience provide a unique window onto the question of what it means to be human. Approaches that treat bodies as robots and brains as computers packed with high-tech modules contrived to overcome specific obstacles (Pinker, 1997) do not offer meaningful insights into the ancient questions of musical experience. The study of music, among other things, suggests that the mind does not work that way (Fodor, 2001). Understanding the complex interplay of brain, body, and world may require more sophisticated tools and methods (e.g., Thompson & Varela, 2001); perhaps the brain and body literally resonate to music (Gibson, 1966; Iyer, 2002). Only in confronting these core issues will music cognition be able to adequately address the question of music’s meanings, its origins, and its role in human life.
1. Consonance and dissonance are used here to refer to the perception of pleasantness or unpleasantness of isolated pitch intervals. These same terms are also used to refer to the perception of tension and resolution within a tonal context. Tonality will be discussed shortly.
I have attempted to give a brief introduction to many of the core topics in music cognition with emphasis on their historical significance. Subsequent chapters will bring the reader up to date on research in these areas, and in others that could not even be mentioned here. For the reader interested in pursuing these topics at a deeper level, this core reading list includes key books in music cognition and related areas, including some historical sources.
Bernstein, L. (1976). The unanswered question: Six talks at Harvard. Cambridge, MA: Harvard University Press.
Cooper, G., & Meyer, L. B. (1960). The rhythmic structure of music. Chicago, IL: University of Chicago Press.
Fodor, J. A. (2001). The mind doesn’t work that way: The scope and limits of computational psychology. Cambridge, MA: MIT Press.
Helmholtz, H. L. F. (1885/1954). On the sensations of tone as a physiological basis for the theory of music. New York, NY: Dover Publications.
Hunt, F. V. (1992). Origins in acoustics. Woodbury, NY: Acoustical Society of America.
Huron, D. (2006). Sweet anticipation: Music and the psychology of expectation. Cambridge, MA: MIT Press.
Kameoka, A., & Kuriyagawa, M. (1969). Consonance theory part II: Consonance of complex tones and its calculation method. The Journal of the Acoustical Society of America, 45(6), 1460–1469.
Krumhansl, C. L. (1990). Cognitive foundations of musical pitch. New York, NY: Oxford University Press.
Lerdahl, F. (2001). Tonal pitch space. New York, NY: Oxford University Press.
Lerdahl, F., & Jackendoff, R. (1983). A generative theory of tonal music. Cambridge, MA: MIT Press.
Meyer, L. B. (1956). Emotion and meaning in music. Chicago, IL: University of Chicago Press.
Patel, A. D. (2008). Music, language, and the brain. Oxford: Oxford University Press.
Pinker, S. (1997). How the mind works. New York, NY: Norton.
Plack, C. J., Fay, R. R., Oxenham, A. J., & Popper, A. N. (Eds.). (2005). Pitch: Neural coding and perception. New York, NY: Springer.
Rosch, E., Thompson, E., & Varela, F. J. (1992). The embodied mind: Cognitive science and human experience. Cambridge, MA: MIT Press.
Bharucha, J. J. (1984). Anchoring effects in music: The resolution of dissonance. Cognitive Psychology, 16, 485–518.
Burger, B., Saarikallio, S., Luck, G., Thompson, M. R., & Toiviainen, P. (2013). Relationships between perceived emotions in music and music-induced movement. Music Perception, 30(5), 517–533.
Burns, E. M. (1999). Intervals, scales, and tuning. In D. Deutsch (Ed.), The psychology of music (pp. 215–264). San Diego, CA: Academic Press.
Buzsáki, G. (2006). Rhythms of the brain. New York, NY: Oxford University Press.
Cariani, P. A., & Delgutte, B. (1996). Neural correlates of the pitch of complex tones. I. Pitch and pitch salience. Journal of Neurophysiology, 76(3), 1698–1716.
Castellano, M. A., Bharucha, J. J., & Krumhansl, C. L. (1984). Tonal hierarchies in the music of North India. Journal of Experimental Psychology: General, 113(3), 394–412.
Clayton, M., Sager, R., & Will, U. (2004). In time with the music: The concept of entrainment and its significance for ethnomusicology. ESEM CounterPoint, 1, 1–82.
Cooper, N. P., & Guinan, J. J. (2006). Efferent-mediated control of basilar membrane motion. The Journal of Physiology, 576(1), 49–54.
Cousineau, M., McDermott, J. H., & Peretz, I. (2012). The basis of musical consonance as revealed by congenital amusia. Proceedings of the National Academy of Sciences, 109(48), 19858–19863.
Cross, I. (2003). Music and evolution: Consequences and causes. Contemporary Music Review, 22(3), 79–89.
Eguíluz, V. M., Ospeck, M., Choe, Y., Hudspeth, A. J., & Magnasco, M. O. (2000). Essential nonlinearities in hearing. Physical Review Letters, 84(22), 5232.
Fitch, W. T. (2006). On the biology and evolution of music. Music Perception, 24(1), 85–88.
Formisano, E., Kim, D. S., Di Salle, F., van de Moortele, P. F., Ugurbil, K., & Goebel, R. (2003). Mirror-symmetric tonotopic maps in human primary auditory cortex. Neuron, 40(4), 859–869.
Fujioka, T., Trainor, L. J., Large, E. W., & Ross, B. (2012). Internalized timing of isochronous sounds is represented in neuromagnetic beta oscillations. The Journal of Neuroscience, 32(5), 1791–1802.
Gibson, J. J. (1966). The senses considered as perceptual systems. Boston, MA: Houghton Mifflin.
Gracyk, T., & Kania, A. (2011). The Routledge companion to philosophy and music. New York, NY: Routledge.
Hannon, E. E., & Johnson, S. P. (2005). Infants use meter to categorize rhythms and melodies: Implications for musical structure learning. Cognitive Psychology, 50(4), 354–377.
Iyer, V. (2002). Embodied mind, situated cognition, and expressive microtiming in African-American music. Music Perception, 19(3), 387–414.
Janata, P., Tomic, S. T., & Haberman, J. M. (2012). Sensorimotor coupling in music and the psychology of the groove. Journal of Experimental Psychology: General, 141(1), 54–75.
Jantzen, M. G., Large, E. W., & Magne, C. (Eds.). (2016). Overlap of neural systems for processing language and music. Lausanne: Frontiers Media.
Joris, P. X., Schreiner, C. E., & Rees, A. (2004). Neural processing of amplitude-modulated sounds. Physiological Reviews, 84(2), 541–577.
Kelso, J. A. S., deGuzman, G. C., & Holroyd, T. (1990). The self-organized phase attractive dynamics of coordination. In A. Babloyantz (Ed.), Self- organization, emerging properties, and learning (Vol. 260, pp. 41–62).
Kirschner, S., & Tomasello, M. (2009). Joint drumming: Social context facilitates synchronization in preschool children. Journal of Experimental Child Psychology, 102(3), 299–314.
Koelsch, S., Gunter, T. C., Cramon, D. Y., & Zysset, S. (2002). Bach speaks: A cortical “language-network” serves the processing of music. NeuroImage, 17(2), 956–966.
Krumhansl, C. L. (2000). Tonality induction: A statistical approach applied cross-culturally. Music Perception, 17, 461–479.
Langner, G. (1992). Periodicity coding in the auditory system. Hearing Research, 60, 115–142.
Large, E. W. (2008). Resonating to musical rhythm: Theory and experiment. In S. Grondin (Ed.), The psychology of time (pp. 189–231). Cambridge: Emerald.
Large, E. W. (2010). Neurodynamics of music. In M. R. Jones, A. N. Popper & R. R. Fay (Eds.), Springer handbook of auditory research, Vol. 36: Music perception (pp. 201–231). New York, NY: Springer.
Large, E. W., Fink, P., & Kelso, J. A. S. (2002). Tracking simple and complex sequences. Psychological Research, 66, 3–17.
Large, E. W., Herrera, J. A., & Velasco, M. J. (2015). Neural networks for beat perception in musical rhythm. Frontiers in Systems Neuroscience, 9, 159.
Large, E. W., Kim, J. C., Flaig, N., Bharucha, J., & Krumhansl, C. L. (2016). A neurodynamic account of musical tonality. Music Perception, 33(3), 319–331.
Large, E. W., & Snyder, J. S. (2009). Pulse and meter as neural resonance. The neurosciences and music III—Disorders and plasticity. Annals of the New York Academy of Sciences, 1169, 46–57.
Lee, K. M., Skoe, E., Kraus, N., & Ashley, R. (2009). Selective subcortical enhancement of musical intervals in musicians. Journal of Neuroscience, 29(18), 5832–5840.
Leman, M., & Maes, P. J. (2015). The role of embodiment in the perception of music. Empirical Musicology Review.
Lerud, K. D., Almonte, F. V., Kim, J. C., & Large, E. W. (2014). Mode-locking neurodynamics predict human auditory brainstem responses to musical intervals. Hearing Research, 308, 41–49.
Loehr, J., Large, E. W., & Palmer, C. (2011). Temporal coordination and adaptation to rate change in music performance. Journal of Experimental Psychology: Human Perception and Performance, 37(4), 1292–1309.
London, J. (2004). Hearing in time: Psychological aspects of musical meter. New York, NY: Oxford University Press.
Loui, P., Wessel, D. L., & Kam, C. L. H. (2010). Humans rapidly learn grammatical structure in a new musical scale. Music Perception, 27(5), 377–388.
Meddis, R., & O’Mard, L. P. (2006). Virtual pitch in a computational physiological model. The Journal of the Acoustical Society of America, 120(6), 3861–3869.
Palmer, C., & Krumhansl, C. L. (1990). Mental representations for musical meter. Journal of Experimental Psychology: Human Perception & Performance, 16(4), 728–741.
Pearce, M. T., & Wiggins, G. A. (2012). Auditory expectation: The information dynamics of music perception and cognition. Topics in Cognitive Science, 4(4), 625–652.
Pierce, J. R. (1983). The science of musical sound. Cambridge, MA: MIT Press.
Poeppel, D., & Embick, D. (2005). Defining the relation between linguistics and neuroscience. In A. Cutler (Ed.), Twenty-first century psycholinguistics: Four cornerstones (pp. 103–118). Mahwah, NJ: Lawrence Erlbaum Associates.
Razdan, A. S., & Patel, A. D. (2016). Rhythmic consonance and dissonance: Perceptual ratings of rhythmic analogs of pitch intervals and chords. Proceedings of the 14th International Conference on Music Perception and Cognition (pp. 807–812). Adelaide: Causal Productions.
Savage, P. E., Brown, S., Sakai, E., & Currie, T. E. (2015). Statistical universals reveal the structures and functions of human music. Proceedings of the National Academy of Sciences USA, 112(29), 8987–8992.
Schnupp, J., Nelken, I., & King, A. (2011). Auditory neuroscience. Cambridge, MA: MIT Press.
Seebeck, A. (1841). Beobachtungen über einige Bedingungen der Entstehung von Tönen. Annalen der Physik und Chemie, 53, 417–436.
Skoe, E., & Kraus, N. (2010). Auditory brain stem response to complex sounds: A tutorial. Ear and Hearing, 31(3), 302–324.
Temperley, D. (2001). The cognition of basic musical structures. Cambridge, MA: MIT Press.
Temperley, D. (2007). Music and probability. Cambridge, MA: MIT Press.
Thompson, E., & Varela, F. J. (2001). Radical embodiment: Neural dynamics and consciousness. Trends in Cognitive Science, 5(10), 418–425.
Tillmann, B., Janata, P., & Bharucha, J. J. (2003). Activation of the inferior frontal cortex in musical priming. Cognitive Brain Research, 16(2), 145–161.
Trainor, L. J., & Trehub, S. E. (1994). Key membership and implied harmony in Western tonal music: Developmental perspectives. Perception & Psychophysics, 56(2), 125–132.
Tramo, M. J., Cariani, P. A., Delgutte, B., & Braida, L. D. (2001). Neurobiological foundations for the theory of harmony in western tonal music. Annals of the New York Academy of Sciences, 930, 92–116.
Treffner, P. J., & Turvey, M. T. (1993). Resonance constraints on rhythmic movement. Journal of Experimental Psychology: Human Perception & Performance, 19, 1221–1237.
Von Békésy, G. (1960). Experiments in hearing (Vol. 8). E. G. Wever (Ed.). New York, NY: McGraw-Hill.
Wile, D., & Balaban, E. (2007). An auditory neural correlate suggests a mechanism underlying holistic pitch perception. PLoS ONE, 2(4), e369.