6
Musical Art and Brain Damage II

Performing and listening to music

Introduction

Performing musical artists, whether opera or jazz singers, pianists, violinists, or organists, all display their craft, talent, skill, experience, and musical cognition. Both expression and reception are thus exhibited. The accumulated body of experimental data suggests that total musical expression relies on the specialization of both hemispheres (Baeck, 2002b; Wieser, 2003). Currently, the general hemispheric pattern is that, while the left hemisphere specializes in the perception of timing and rhythm, the right hemisphere specializes in pitch and timbre perception. Within each hemisphere, the temporal lobes are predominantly involved with musical perception and the frontal lobes in musical output and expression. This expressive aspect of music is closely associated with language and is an important issue for discussion. Music consists of a variety of sub-components and micro-fragments; the sub-components of language, by comparison, appear closely related to each other, forming a unified entity that requires single cortical control, and they are better understood than the sub-components of music. However, the bulk of the discussion in this chapter concerns disruption to musical abilities following brain damage.

Art of music and language

In neurology and neuropsychology the attempt to distinguish between music and language stemmed from observations that patients with language deficits can sometimes show spared musical abilities (Basso, 1993; Critchley & Henson, 1977). It is not uncommon, for example, to find a few intact musical abilities in a patient experiencing even global aphasia. Occasionally patients with Broca’s aphasia can sing, and indeed sing the words that they are normally unable to utter. Conversely, patients with aphasia can also suffer from amusia. Indeed, more often than one would expect, aphasia and amusia co-occur following damage to the left hemisphere (Polk & Kertesz, 1993). The famous case reported by Dejerine known as Monsieur C., whom Dejerine diagnosed as having alexia without agraphia, was, as it turns out, a trained musician (Hanley & Kay, 2003). He sustained damage in the left hemisphere, in the parietal and occipital lobes, and in the posterior region of the corpus callosum. This patient had as much difficulty reading musical notes as he had in reading words. Yet, he was able to sing both previously familiar and new tunes as well as play the piano and write musical notes. (A discussion of musical training can be found in a subsequent section of this chapter.) The symptoms illustrate commonalities in expressive aspects of language and music (not only in trained musicians). Unlike language, however, music has some components that are specialized in the left hemisphere and others that are specialized in the right hemisphere. Importantly, recent neuroimaging studies confirm the relationship between the left auditory area, language, and music (Patel, 2010).

The principal shared commonality of music and language is temporality; that is, the sound units occur in sequential order and the order gives the sound its meaning. This is unlike the perception of a painting, drawing, or statue, where the whole stimulus is initially perceived at one time. With speech and music, the brain must not only understand the whole from the sequences but also support ongoing memory for the order of occurrence. The caveat is that the time intervals between the units of speech are wide compared to those of music. The speed with which the units of music congeal into a melody, even into a phrase, is faster than the unification of speech sounds into a meaningful whole. And the rhythmic speed of music may increase when produced by a musical instrument as opposed to the human voice box, and even more so when several instruments play together.
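
To make the timing contrast concrete, the following minimal sketch converts musical tempo into the millisecond windows within which successive notes arrive; the tempi and note values are illustrative choices, not figures from the studies cited in this chapter:

```python
# Illustrative arithmetic only: how quickly successive musical units
# arrive at a given tempo (values are not drawn from the cited studies).

def note_duration_ms(tempo_bpm: float, fraction_of_beat: float = 1.0) -> float:
    """Duration in milliseconds of a note lasting `fraction_of_beat` beats."""
    return 60_000.0 / tempo_bpm * fraction_of_beat

for bpm in (60, 120, 180):
    quarter = note_duration_ms(bpm)          # one full beat
    sixteenth = note_duration_ms(bpm, 0.25)  # a quarter of a beat
    print(f"{bpm:>3} bpm: quarter = {quarter:.0f} ms, sixteenth = {sixteenth:.0f} ms")
```

At 120 beats per minute, for instance, a sixteenth note lasts only 125 milliseconds, a window roughly comparable to or shorter than a spoken syllable, which is one way to see how quickly musical units must congeal relative to speech sounds.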

With melodies, it is difficult for most of us to know exactly when sub-components—phrases, say—begin and when they end. This is not so with speech; we know the beginning and the end clearly. Ultimately, the influence of music on the listener reflects neural computations distinct from those applied in listening to language. Moreover, the anatomical underpinnings of the parts of language are by far better understood than those of the sub-components of music.

The source of music production and language sounds provides a critical clue to the underlying neural support: the major instrument of language is speech. It is produced solely by one anatomical apparatus, the mouth and larynx. Emitted speech is constrained by the limitations imposed by the very anatomical structures that produce and support it. With music, on the other hand, this does not have to be the case: music can be produced by the limbs alone, by the mouth–lungs apparatus alone (vocal cords, larynx), by both, or by instruments. The mode of production may have sculpted the nature of music, both neurologically and phenomenologically. Further, the connections between musically related cortical regions and subcortical areas may be wider and more diffuse than those for language. This is regardless of whether, at the dawn of human brain evolution, music started off as a form of semantic communication or not. The fact that music production does not have to rely on the oral cavity alone suggests that it could expand broadly through recruitment of additional cortical areas and neuronal networks.

Amusia and the art of music

Agnosia for music is known as amusia (the loss of receptive knowledge of previously known components of music because of acquired brain damage; Cousineau, Oxenham, & Peretz, 2015). But amusia can also be a congenital disorder (Peretz, Champod, & Hyde, 2003). The amusia condition can include total auditory agnosia, in which musical sounds are not recognized as being musical. A diagnostic tool constructed to identify the components of amusia is the Montreal Battery of Evaluation of Amusia (MBEA; Baird, Walker, Biggs, & Robinson, 2014). Interestingly, there is a dissociation between recognizing melodies and identifying the emotions evoked by the music (Cousineau et al., 2015); recognition of melodies can be impaired while identification of the emotions remains intact. The emotions seem to be subserved by pathways separate from those for structural musical features.
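
As a purely hypothetical illustration of how a component-wise battery supports such dissociations, the sketch below aggregates subtest scores and flags a possible deficit. The six subtest names follow the MBEA's published components, but the averaging rule, the 0.70 cutoff, and the sample scores are invented for illustration and are not the battery's actual scoring procedure.

```python
# Hypothetical sketch of scoring a multi-subtest amusia battery.
# Subtest names follow the MBEA's components; the averaging rule,
# cutoff, and sample scores below are illustrative assumptions only.
from statistics import mean

SUBTESTS = ("scale", "contour", "interval", "rhythm", "meter", "memory")

def composite_score(scores: dict) -> float:
    """Mean proportion correct across the six subtests."""
    return mean(scores[name] for name in SUBTESTS)

def flag_possible_amusia(scores: dict, cutoff: float = 0.70) -> bool:
    """Flag when the composite falls below an assumed cutoff."""
    return composite_score(scores) < cutoff

# Invented profile: pitch-based subtests impaired, rhythm and meter
# spared -- the kind of dissociation discussed in the text.
patient = {"scale": 0.55, "contour": 0.60, "interval": 0.58,
           "rhythm": 0.90, "meter": 0.88, "memory": 0.62}
print(round(composite_score(patient), 2), flag_possible_amusia(patient))
```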

When there is gradation in agnosia for music, two major subtypes are identified. First, tone-deafness is the inability to discriminate tones on a musical scale—that is, to say whether a tone is at the low end of the scale or the high end (or somewhere in the middle). The patient reports that all tones sound the same. Second, melody deafness (amelodia) is the inability to recall a melody, either to name it or to hum it, even after clues are given, or even to sing it in the mind’s ear. There are interesting functional dissociations in this condition: patients can still identify the instrument playing the melody as well as recognize wrong notes, and yet not be able to recognize the melody itself. In some cases, previously familiar melodies sound like mere noise, particularly unpleasant noise (the sound of a screeching car, or of a hammer on a steel sheet). Conversely, it is possible for a patient to recognize melodies but not recognize wrong notes introduced into them (Basso, 1999). What all of this shows is that these components of music are represented in widely distributed functional neuronal networks in the brain.

On the whole, tone-deafness is seen most often following damage in the left hemisphere while melody deafness is found most often following damage in the right hemisphere, and the localization within each hemisphere can vary (Basso, 1999). However, exceptions have been reported as well, and in any case tone or melody deafness alone does not capture all there is to music. Attempts to relegate music perception to the right hemisphere alone are fraught with problems, given inconsistencies in the data and the heterogeneity of subjects and testing methodologies. Rather, an appealing recent suggestion, based on a study of non-musically trained neurological patients with unilateral focal damage, is that there is a great deal more inter-hemispheric integration in music perception, whereby the total experience receives selective contributions from each hemisphere (Schuppert, Münte, Wieringa, & Altenmüller, 2000). Moreover, there is neuroimaging support for activation of Broca’s area in the left hemisphere and its homologue in the right hemisphere during music-listening (see Ayotte, Peretz, Rousseau, Bard, & Bojanowski, 2000; Levitin & Menon, 2003).

Tone-deafness can also occur congenitally, with perception of pitch and melody being the most impaired (Tillmann, Nguyen, Grimault, Gosselin, & Peretz, 2011). One important study explored this condition in a group of 11 adults reportedly experiencing tone-deafness (Ayotte, Peretz, & Hyde, 2002). They were administered a battery of six tests that measured various components of music, including pitch, rhythm, melody, temporal judgment, contour, and melody memory. The overall finding was that the worst deficit was in pitch-processing, and there were impairments in memory and recognition of music, singing, and keeping time through tapping. The researchers noted that all of these impairments were restricted to music; the subjects were able to recognize speech prosody, familiar environmental sounds, and human voices, and, interestingly, 10 subjects could recognize and identify familiar songs from hearing the opening lyrics. Indeed, all subjects had been exposed to music from an early age and some had even taken music lessons. Previously, Geschwind and Fusillo (1966) reported the case of a congenitally amusic patient who could not sing, discriminate between two pitches, or keep time, yet was fluent in four foreign languages. No specific brain damage is known to be associated with these cases. (A recent study, however, implies that speaking tonal languages such as Mandarin Chinese can nevertheless be affected by the presence of congenital amusia; Tillmann, Albouy, & Caclin, 2015.)

A detailed and highly illuminating study of a single case of congenital amusia (tone-deafness), Monica, was published by Peretz et al. (2002). Monica was unable to recognize or discriminate melodies, sing, or dance, despite having been given music lessons as a young child. Sensitive music and sound tests revealed that what may lie at the root of her musical disability is a gross deficiency in perceiving changes in pitch, particularly descending pitches. This was true regardless of whether the tones played to her were pure tones or piano tones, and whether the tone duration was, say, 700 or 350 milliseconds, although slightly better performance was observed for pure-tone changes (but only when the change was rising, not falling). Her case reflects a clear-cut dissociation between language sounds and music sounds: she does not appear to have impairments in discerning speech intonations (e.g., she was able to monitor pitch changes in sentences ending in a question mark). Moreover, no brain anomaly has been found that can explain her congenital musical difficulties (see reviews of congenital amusia: Kohlmetz, Müller, Nager, Münte, & Altenmüller, 2003; Takeda, Bandou, & Nishimura, 1990). Importantly, what all of this shows is that one of the critical features of musical understanding is the decipherment of pitch, and that the rest of the dedicated neuronal music pathways in the brain are heavily dependent on normal pitch processing.
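
For readers who want a concrete sense of the stimuli involved, here is a minimal sketch of a rising versus falling pure-tone pair of the general kind used in pitch-change detection. The 440 Hz base, the one-semitone step, and the durations are illustrative assumptions, not the actual materials used with Monica.

```python
# Sketch of rising vs. falling pure-tone pairs of the general kind used
# in pitch-change detection tests. All parameter values are illustrative
# assumptions, not the actual test stimuli.
import numpy as np

SAMPLE_RATE = 44_100  # samples per second

def pure_tone(freq_hz: float, duration_ms: float) -> np.ndarray:
    """A sine wave at `freq_hz` lasting `duration_ms` milliseconds."""
    t = np.arange(int(SAMPLE_RATE * duration_ms / 1000)) / SAMPLE_RATE
    return np.sin(2 * np.pi * freq_hz * t)

def tone_pair(base_hz: float, semitones: float, duration_ms: float) -> np.ndarray:
    """Two consecutive tones; positive `semitones` rises, negative falls."""
    second_hz = base_hz * 2 ** (semitones / 12)  # equal-tempered pitch step
    return np.concatenate([pure_tone(base_hz, duration_ms),
                           pure_tone(second_hz, duration_ms)])

rising = tone_pair(440.0, +1, 350)   # A4 up one semitone, 350 ms tones
falling = tone_pair(440.0, -1, 700)  # A4 down one semitone, 700 ms tones
```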

Music localization in the brain

When neurologists initially attempted to localize music in the brain, they noticed the prevalence of amusia in left-hemisphere-damaged patients. They did not fail to observe, however, that some right-hemisphere-damaged patients suffered from musical disorders as well (Angulo-Perkins et al., 2014). At the time when Salomon Henschen carried out investigations in the 1920s, the right hemisphere was regarded as being incapable of supporting anything worthwhile (by way of human cognition), except sometimes after the left hemisphere was damaged. So it is not surprising that the few right-brain-damaged patients with amusia were not seen as providing important clues about functional brain localization. Henschen concluded, however, that both hemispheres probably play a role in music perception and production. His work was seminal since it spurred comments, criticisms, and debates (see Basso, 1993). Indeed, some views on music and hemispheric specialization point to both hemispheres rather than to only one (Baeck, 2002b).

The next important research period in the neuropsychology of music and the brain came in the early 1960s with the publications of Brenda Milner (1962) and Doreen Kimura (1964), based on work at the Montreal Neurological Institute with patients undergoing anterior temporal lobectomy for the relief of epilepsy. Milner administered the Seashore Measures of Musical Talents to these patients and a group of control subjects and found asymmetries in their deficits: the right-sided group performed particularly poorly compared to the left-sided group and the control group in discriminating between two brief musical melodies and in timbre recognition; all groups were indistinguishable in rhythm discrimination. The Seashore test provides measures for several components of music, including rhythm, timbre, tonal memory, pitch, loudness, and duration. The strongest result that Milner obtained was the poor performance on timbre by the right-sided group. As it turned out, timbre discrimination shows right hemisphere activation in several functional neuroimaging studies carried out more than 40 years afterwards (described later in this chapter). Being able to discriminate which instrument is playing may be similar to telling which person is speaking. With regard to rhythm, however, the bulk of the evidence from functional neuroimaging studies since the early 1990s points to left hemisphere specialization (Wieser, 2003). This is consistent with the long-understood left hemisphere specialization in memory for temporal order (Efron, 1963).

Melodies and the role of musical training

Numerous studies have shown neuroanatomical differences between the brains of musicians and those of non-musicians. These studies cover a range of brain regions and sub-regions, including the auditory cortex in the superior temporal gyrus, the somatosensory cortex, the motor cortex, and the frontal lobes. In some carefully designed studies, there is greater volume or cell density in the right hemisphere, whereas in others the greater size or density lies in the left hemisphere (findings on musicians’ hands are described in a subsequent section of this chapter). Even within the superior temporal gyrus itself, asymmetries have been found for sub-regions of the gyrus (Carey et al., 2015). Other studies have found clear differences in musical perception and cognition between musicians and non-musicians (Carey et al., 2015). The tremendous time investment by musicians clearly contributes to neuroanatomical alterations and to perceptual–cognitive enhancement (Zatorre, 2015).

Earlier, Kimura (1964) investigated memory for melodies through the dichotic listening paradigm, in which two different sounds are presented simultaneously, one to the left ear and the other to the right ear. This technique relies on the fact that, under competing auditory conditions, the contralateral auditory pathways dominate. In Kimura’s study, normal subjects heard one melody in one ear and simultaneously a different melody in the other ear; the question of interest was which melody would later be recognized from among a series of four melodies heard binaurally. The melodies heard in the left ear were recognized better than those heard in the right ear (Kimura, 1964). This study launched the argument that the right hemisphere specializes in musical perception, and soon afterwards several other studies confirmed Kimura’s findings. Previously, Kimura had administered a dichotic listening task to normal subjects requiring that they say out loud the name of the number that they heard. One ear received one number while simultaneously the other ear received a completely different number; the numbers heard in the right ear were named significantly more accurately than numbers heard in the left ear (Kimura, 1963a, 1963b). These results were interpreted to show the specialization of the left hemisphere in language. On the whole, Kimura’s two studies suggested that while one hemisphere, the left, was indeed specialized in language, as was already well known at that time, the other hemisphere, the right, specialized in music. Subsequent tests of musical perception revealed, however, that musical training leads to bi-hemispheric involvement.
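
Ear advantages in such studies are often summarized with a simple laterality index; the sketch below uses the commonly applied formula (R − L)/(R + L) × 100, with invented trial counts for illustration.

```python
# Laterality index of the kind commonly computed from dichotic listening
# scores: positive = right-ear (left hemisphere) advantage, negative =
# left-ear (right hemisphere) advantage. Counts below are invented.

def laterality_index(right_correct: int, left_correct: int) -> float:
    """(R - L) / (R + L) * 100; ranges from -100 to +100."""
    total = right_correct + left_correct
    return 100.0 * (right_correct - left_correct) / total if total else 0.0

print(laterality_index(right_correct=34, left_correct=22))  # ~ +21.4: right-ear advantage (e.g., digits)
print(laterality_index(right_correct=18, left_correct=27))  # -20.0: left-ear advantage (e.g., melodies)
```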

The idea that training is important in obtaining information about the neuroanatomical underpinning of music from brain-damaged patients was first suggested in 1930 by Feuchtwanger (explained by Basso, 1993). In 1974 Bever and Chiarello demonstrated the influence of musical training in a seminal experiment in which one group of subjects consisted of trained musicians (who had received extensive training in a music school for at least five years prior to the testing) and another group of non-musicians (Bever & Chiarello, 1974). The task was to listen to melodies recorded on a non-dichotic tape, detect a specific sequence of two notes, and later say whether or not a given melody had already been presented during the two-note sequence-detection stage. There were many such trials. The results indicated a clear-cut difference between the two groups: trained musicians were better able to recognize the melody heard in the right ear while non-musicians recognized the melody better when heard in the left ear. This outcome suggested opposite hemispheric involvement in musical processing in the two groups, with the left hemisphere being maximally involved in trained musicians and the right hemisphere being maximally involved in non-musicians (Bever & Chiarello, 1974). These findings were subsequently replicated in other studies, and the issue of training remains important in empirical studies of music. One study, for example, has demonstrated that trained pianists process musical notation differently from non-musicians and that, as a result of training and practice in their craft, pianists have a spatial understanding of the world that is different from that of non-musicians (Stewart, Walsh, & Frith, 2004). The current view is that music perception is influenced by the extent of musical training, and that training has effects not only on music processing itself but on other modalities as well, notably vision.

Unilateral brain damage in trained musicians

Professional musicians' instrumental playing following unilateral brain damage has not yielded a clear-cut picture of hemispheric control. As Basso (1999) summarizes, left hemisphere damage resulting in language comprehension impairments (Wernicke-type aphasia) had the following effects: a piano teacher went on to play the piano; a concert pianist in a chamber orchestra continued to play professionally; an organist continued to play the organ. On the other hand, similar hemispheric damage impaired piano-playing in a piano teacher, in a conductor and orchestra director, and in a music student playing the guitar (but, interestingly, not singing). Right hemisphere damage resulted in the inability to play the accordion in an amateur accordion player and in the inability to play the organ in an amateur organist (also in Basso, 1999).

The loss of pleasure in listening to music is more often reported after right hemisphere damage than after left hemisphere damage. But it should be stressed that there are more reported cases of amusia following left hemisphere damage than following right hemisphere damage. Because the sample sizes are unequal, the issue of pleasure from music in patients with unilateral damage remains unclear. There is pleasure and there is pleasure, of course. When musicians listen to music their pleasure is “colored” by their knowledge of the sub-components of music and those sub-components’ interplay in the whole musical structure. Experienced music listeners, those who have taken lessons or trained in listening selectively to music, derive a listening experience of a particular sort. In non-musicians and untrained listeners, the absence of direct cognitive knowledge gives rise to a different sort of experience (Blood & Zatorre, 2001). Nevertheless, neuroimaging studies show that the right orbitofrontal cortex is active in most people listening to music that they enjoy (Tramo, 2001), as are the dorsal striatum and the ventral striatum (Klebe, Zeitouni, Friberg, & Zatorre, 2013; Zarate, 2013).

Cases of trained musicians with right hemisphere damage are exceptionally rare. However, a neurological report of such a case has been published by Sparr (2002). The most striking deficit was this musician’s amelodia. When first seen at the hospital, he could not recognize melodies that he previously knew, regardless of whether they were presented from recordings, played live on a piano, or sung; nor was he able to recognize tunes when their lyrics were included. By contrast, he could hum a tune from memory and he could correctly clap his hands to the rhythm of the melodies that he could not identify. Thus, he showed a dissociation between rhythm and melody identification. At the same time, when given sheet music, he was able to discern the melody represented by the notes, categorize the style, and go on to explain which lines represented the melody; he could also explain the timing of musical instruments in a Stravinsky score. He had no problems in producing the pitch of single notes. He had problems reproducing sequences of four notes or longer, but not three-tone sequences. He did not show atonality. At the same time, he was unable to identify instruments from their sounds, and could not match the voices of famous people to their names, even when they were singing. The inability to associate sounds with their source is not unlike the timbre deficits seen in non-musicians following damage to the right hemisphere. Follow-up tests one month later revealed a 20 percent improvement in melody identification, a rate that rose to 70 percent three years later. Still, even at that time, there was a lag between melody presentation and its identification. The neurologist reporting this case notes that, throughout, the patient remained unaware of his deficits and denied having these perceptual difficulties with music. The denial suggests anosognosia (denial of illness), a condition associated with right parietal damage. It is possible, then, that despite the fact that neuroimaging revealed only right temporal lobe dysfunction, there was functional damage in the parietal lobe as well. In all, the patient’s amelodia was not accompanied by disorders in pitch, rhythm, or harmony. The critical contribution of this case is that it shows important dissociations in musical perception and understanding in a professional musician who, despite localized brain damage, went on to use intact regions on both sides of his brain to process features of music.

The neuropsychology of singing

Singing appears to be controlled by the right hemisphere more than by the left (Klebe et al., 2013; Zarate, 2013). This has been observed in some patients who, despite having aphasia caused by damage in the left hemisphere, were able to sing previously familiar songs. It is further supported by observations of left hemispherectomy patients (in whom the left hemisphere was surgically removed) whose surgery took place after the age of 8 years. In these patients language production was minimal but singing was nevertheless possible. One of these hemispherectomy patients, RS, was noted by several investigators to sing with no disruption to pitch, tone, rhythm, or any other identified feature of music (“she sang like an angel” was one evaluative remark; Gott, 1973; E. Zaidel, 1978). But it is not known whether she was able to learn new songs after the hemispherectomy. This is an important distinction: do we need two intact cerebral hemispheres to learn music? Currently, we do not have an answer to this question.

The brain’s control of singing was investigated in eight neurological patients by Gordon and Bogen (1974). The patients were being evaluated for surgery to control generalized epileptic seizures that were resistant to medication; they were administered a unilateral anesthetizing drug (sodium amytal) to the left and right hemispheres separately, a procedure also known as the Wada test, and were then asked to sing familiar melodies while the unilateral anesthesia was in effect. When the left hemisphere was anesthetized and the right was awake (and able to control the singing), patients could not immediately sing or utter any words. The singing began as soon as the patient was able to utter a single word. Once that occurred, melody-singing proceeded with clear pitch and rhythm. When the right hemisphere was anesthetized and the left was in control of the singing, the patients were able to speak (because control of language was in the left hemisphere) but their singing was impaired in seven out of eight cases. When patients succeeded in singing, there was rhythm, although it was a bit slow; pitch was poorly executed; and the singing was characterized as monotone. (The poor control over tonality confirms the observations of tonal amusia found in patients with unilateral damage in the right hemisphere.) However, there were great individual differences, ranging from attempted singing to no attempts to sing at all. And the investigators noted that, while tonality in singing was impaired, no such impairment was present for speaking. The current overall view of the brain’s control of singing is that the right hemisphere plays the greater role in rendering the melody, particularly in non-musicians, but the left plays a crucial role as well, so that, on balance, each hemisphere makes a contribution to the total production (Altenmüller, 2001).

The critical role of the frontal lobes in singing in professional musicians becomes apparent after reviewing several reports on neurological patients. In 1933 A. Jellinek described the rare case of a professional singer who lost his singing skills following removal of a glioma from the left frontal lobe (described in Benton, 1977). After surgery the patient suffered from Broca’s aphasia as well as from dyslexia, dysgraphia, and impairment in comprehension of spoken language. He became unable to sing familiar melodies and had great difficulties applying the correct pitch. He also became unable to sing the notes of a scale, could not perfectly reproduce rhythmic patterns, and completely lost the ability to read music. Another trained musician, not a professional singer, was described in 1926 and 1927 by P. Jossmann (see Benton, 1977). He had an aneurysm at the bifurcation of the right common carotid artery, which was removed surgically. After surgery he was unable to sing or whistle previously familiar tunes, and he lost the ability to read musical scores. At the same time, he retained his ability to recognize pitch (highlighting the dissociation in brain localization of musical components). Other neurological cases reported in the literature seem to point to the frontal lobes as a brain region that modulates the control of singing (Basso, 1999; Benton, 1977; Schwenkreis et al., 2007), showing that expressive components of music are under fine cortical motor control and may be dissociable from receptive circuitry that could include cortical and subcortical structures. If singing were a mere extension of an early biological form of animal communication, subcortical regions would maximally control singing. The fact that cortical regions, particularly in the frontal lobes, are critically involved suggests otherwise.

Brain representation of musicians’ hands

Imaging studies of the brain have revealed clues about neuroanatomical representations in musicians. In addition to the left planum temporale being larger in people with perfect pitch than in those with normal pitch perception (Zatorre, 2003), other musically related neuroanatomical findings have been uncovered. Adults whose musical training began earlier than age 7 have a larger region in the corpus callosum than people who were not musically trained; that region is in the trunk of the callosum (the middle section; Schlaug, Jäncke, Huang, & Steinmetz, 1995). The motor cortex is bordered on one end by the central sulcus. The depth of this sulcus is significantly greater in professional keyboard players than in non-musicians, and this is true in both the left and right hemispheres (Amunts et al., 1996). Among musicians, the depth of the sulcus is greater the earlier the age at which training began. In musicians who have played string instruments from an early age (the left hand typically presses on the strings while the right modulates the bow), the postcentral gyrus, which is the somatosensory cortex, is larger than normal in the right hemisphere (Elbert & Rockstroh, 2004; Elbert, Pantev, Wienbruch, Rockstroh, & Taub, 1995). In a separate study of violinists, both the right motor and the right somatosensory cortex were found to be larger than those on the left (Halpern, 2001). These results demonstrate alterations in brain structure and speak directly to the issue of brain plasticity.

At the same time, they do not indicate whether or not there is an alteration in neuronal networks, or, at the very least, they do not indicate the nature of the alterations. Larger cortical size could mean larger neurons with longer axons, or it could mean greater-than-normal neuronal density. Neuronal networks could thus vary in size across different individuals. The advantage of greater density would be an increased ability to control particularly fine finger movements or increased sensitivity in the fingertips. As Schwenkreis and associates (2007) point out, however, cortical asymmetry in violinists did not lead to better motor performance by musicians as compared to non-musicians in their study.

Consider also the findings involving a professional organist who suffered from temporal lobe epilepsy and the implications of his condition for the brain’s control of musical manual coordination (Wieser, 2003):

We studied the recorded organ performance of a professional musician with right temporal lobe epilepsy during a right temporal lobe seizure. While playing an organ concert (John Stanley’s Voluntary VIII, Op. 5), he suffered a complex partial seizure. Music analysis of the recorded concert performance during the seizure and comparison with other available exercise recordings and with the composition itself indicated seizure-induced variations. At the beginning of the seizure, the left hand started to become imprecise in time and deviated from the score, whereas the right hand remained faultless at this time. With increasing duration of the seizure discharge, the dissociation of both hands from the score increased, but the right hand compensated for the errors of the left hand in a musically meaningful way, that is, aiming to compensate for the seizure-induced errors of the left hand. The case illustrates untroubled musical judgment during epileptic activity in the right temporal lobe at the onset of the seizure. Whereas the temporal formation of the performance was markedly impaired, the ability of improvisation, in the sense of a “perfect musical solution” to errors of the left hand, remained intact … the left hand performs with imprecise tone lengths, whereas the right hand performs perfectly. Deviations from the notation are evident in both hands, but the right hand (directed by the “healthy” = unaffected left hemisphere) compensates for the errors of the left (affected) hand in a musically meaningful way.

(Wieser, 2003, pp. 85–86)

Music brain activation in fMRI and PET studies

Findings from neuroimaging studies have uncovered several regions that illuminate the neural substrates of music. In 2003 a review of 44 functional imaging studies (functional magnetic resonance imaging (fMRI) and positron emission tomography (PET)) in non-musicians discovered a great deal of overlap (but not perfect consistency) in regional brain activation for musical sequencing tasks and music perception (Janata & Grafton, 2003). The bilateral areas found to be active during reading and playing music involved maximal activation of the primary motor cortex, pre-motor cortex, superior parietal lobe, lateral prefrontal cortex, and cerebellum. Musical score interpretation activated the superior parietal lobe and intraparietal sulcus. In tasks where subjects were required to discriminate or detect the parts played by different instruments, or to isolate a target from a musical background, the pre-motor region in the frontal lobe was maximally active, as were the parietal lobe, cerebellum, and basal ganglia. If subjects were required to imagine in their mind’s ear a specific melody after being given a cue, maximal activation was seen in the parietal lobe, the ventrolateral prefrontal area, and the pre-motor area. These same regions appeared especially active during the perception of musical stimuli. When the inferior and superior portions of the parietal lobe were activated, there seemed to be left–right asymmetry in activation, while in other regions the asymmetry was not so consistent.

Using PET scans in non-musicians, Halpern (2001) discovered that the right temporal lobe and the right supplementary motor area in the frontal cortex were maximally active in musical imagery (imagining a tune in the head) or in perceiving music, but that the left prefrontal regions were active when the music had lyrics. Also with PET, while subjects listened to pieces of music, Platel and colleagues discovered that maximal activation in the left hemisphere was associated with familiarity, pitch, and rhythm, and maximal activity in the right hemisphere was uncovered for timbre identification (Platel et al., 1997). Specifically, familiarity of melodies recruited the left inferior frontal gyrus and the anterior region of the superior temporal gyrus. The rhythm task maximally activated Broca’s area as well as the insula (which is buried deep inside the Sylvian fissure). These findings clearly reveal that several widely spread brain regions are activated when subjects (non-musicians) listen to music, as is confirmed by numerous studies (Meister et al., 2004).

A particularly revealing study investigating brain activation upon hearing musical tones in non-musicians found maximal recruitment of several bilateral regions, but when laterality was detected it was associated most clearly (though not exclusively) with the right rostromedial prefrontal cortex (Janata et al., 2002). It was in this region of the brain that an unusual activation pattern occurred as different tones were played to the subjects:

We found that the mapping of specific keys to specific neural populations in the rostromedial prefrontal cortex is relative rather than absolute. Within a reliably recruited network, the populations of neurons that represent different regions of the tonality surface are dynamically allocated from one occasion to the next. This type of dynamic topography may be explained by the properties of tonality structures. In contrast to categories of common visual objects that differ in their spatial features, musical keys are abstract constructs that share core properties. The internal relationships among the pitches defining a key are the same in each key, thereby facilitating the transposition of musical themes from one key to another. However, the keys themselves are distributed on a torus at unique distances from one another.

(Janata et al., 2002, p. 2169)
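
The transposability that the authors describe can be expressed as simple pitch-class arithmetic. The sketch below uses standard music-theory conventions (pitch classes as integers modulo 12), not material from the study itself, to show that every major key shares the same internal interval pattern:

```python
# Standard music-theory illustration (not from Janata et al., 2002):
# the interval pattern defining a major key is invariant under
# transposition. Pitch classes are integers mod 12 (C=0, C#=1, ..., B=11).

MAJOR_STEPS = (0, 2, 4, 5, 7, 9, 11)  # semitone offsets of the major scale

def major_scale(tonic: int) -> list:
    """Pitch classes of the major scale built on `tonic` (0-11)."""
    return [(tonic + step) % 12 for step in MAJOR_STEPS]

def intervals(scale: list) -> list:
    """Successive semitone intervals, wrapping around the octave."""
    n = len(scale)
    return [(scale[(i + 1) % n] - scale[i]) % 12 for i in range(n)]

c_major = major_scale(0)  # [0, 2, 4, 5, 7, 9, 11]
e_major = major_scale(4)  # [4, 6, 8, 9, 11, 1, 3]
assert intervals(c_major) == intervals(e_major)  # identical internal structure
print(intervals(c_major))  # [2, 2, 1, 2, 2, 2, 1]
```

Any two keys thus share the same internal structure while occupying different positions in key space, which is consistent with the dynamic, relative mapping the study reports.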

Critical factors that influence such dynamic activity include rhythm and tempo, to name but two, as well as motoric cognition and control, which maximally involve the motor cortices of both hemispheres and the cerebellum (Popescu et al., 2004). The brain’s wide-ranging reaction to music underscores the widespread coordination of the neural substrates involved.

Summary

Early on, neurologists did not fail to observe that music disorders followed both left and right hemisphere damage. Attempts to distinguish between music and language stemmed from observations that patients with language deficits sometimes had spared musical abilities.

Now, with various neuroimaging techniques we know that widely distributed brain structures are activated in musical artists, and no single “music center” has thus far been identified. The general hemispheric picture is that, while the left hemisphere specializes in the perception of timing and rhythm, the right hemisphere specializes in pitch and timbre perception.

Brain-imaging studies of non-musicians and musicians alike suggest that music activates multiple cortical regions; this highlights the broad and highly distributed spectrum of neural activation in music. A spatial map of the brain’s reaction to music as a function of tempo, rhythm, context, format, contour, and other identified “primitives” of music-listening and music-performing reveals that professional musicians show somewhat different patterns of brain activation than non-musicians, especially in regions supporting musical execution.

Within each hemisphere, the temporal lobes are predominantly involved with musical perception and the frontal lobes in musical output and expression. However, previous training in music shapes memory and reactions to music. Musical training plays a crucial role in hemispheric involvement, with the left hemisphere being maximally involved in trained musicians and the right hemisphere maximally involved in non-musicians. Cases of trained musicians with right hemisphere damage are exceptionally rare, and thus have not allowed a complete understanding of right hemisphere contribution to musical composition.

Further readings

Koelsch, S. (2012). Brain and music. Oxford: Wiley-Blackwell.

Levitin, D. J. (2007). This is your brain on music: The science of a human obsession. London: Plume/Penguin.

Patel, A. D. (2010). Music, language, and the brain. Oxford: Oxford University Press.

Peretz, I., & Zatorre, R. J. (Eds.). (2003). The cognitive neuroscience of music. Oxford: Oxford University Press.