25
Performing Music

Humans, Computers, and Electronics

Elaine Chew and Andrew McPherson

Introduction

Performing music, communication through music, is a ubiquitous human activity. As tools to assist in human activities have evolved from the mechanical to the electronic, music making has not escaped the relentless tide of technological innovation. This chapter explores forms of technology-mediated performance, and the ways in which the intervention of computers and electronics has changed, and continues to change, our processes and understanding of music making. Advances in computing power and speed now allow machines to simulate aspects of human-like intelligence and behavior, so that computers can more closely and dynamically partner with humans in creative performance, improvisation, and composition. Technological innovations have also produced new instruments, and traditional instruments augmented by electronics, that open up new ways of performing, improvising, and composing.

The technologies considered are broadly divided into two categories: digital instrument systems that extend the sound-producing capabilities of the performer, and generative or intelligent systems that offer control at a higher level of musical abstraction, extending the mind of the performer and/or composer. The two categories, the first of which draws on a combination of hardware and software and the second of which is primarily software-based, are far from clear divisions. Music software is inextricably tied to electronic capabilities and not infrequently coupled with physical instruments. Many digital musical instruments embed intelligent software and take on its capabilities. The physical manifestations and constraints of an instrument necessarily impact mental representations of the music; and the desired high-level representations and handles on musical parameters inevitably shape the design of an instrument. What is clear is that the advent of electronics and computers has introduced profound shifts in musical thought, changing our understanding of the performer’s, composer’s, and listener’s roles and our ideas of what an instrument should be, what a performance can be, and the very notion of creativity. The two categories will be treated in sequence in the following sections.

Extending the Musician's Body: Electronic and Digital Instruments

Through the ages, human invention has adapted the tools at its disposal to serve artistic purposes. The rise of digital technology has led to digital musical instruments (DMIs), also known as new interfaces for musical expression (NIMEs) after the conference of the same name (Jensenius & Lyons, 2017), and traditional instruments augmented by electronics. This section will cover trends and innovations in the design and use of digital musical instruments, with a particular focus on tools which fit a traditional instrumental paradigm where physical actions and sounds are closely coupled in time and energy.1 The history of music-making with electronics—see Chadabe (1997), Miranda and Wanderley (2006), and Emmerson (2013)—is a story of technical innovation, competing artistic forces of tradition and experimentation, and creative appropriation of technology in often unexpected ways. Jordà (2004) describes the process of creating DMIs as digital lutherie: not a strict science but “a sort of craftsmanship that sometimes may produce a work of art, no less than music.”

Technical Foundations

Here we summarize common technical principles and taxonomies in digital musical instrument design before moving on to human factors in the next section. The archetypical DMI consists of sensor inputs, output parameters (usually sonic, though sometimes including modalities such as visuals or haptics), and a mapping layer (Wanderley & Depalle, 2004). The mapping layer attracts particular interest (e.g. Hunt, Wanderley, & Paradis, 2003; Magnusson, 2009): On acoustic instruments, the action-sound relationship is fixed by mechanical design, but DMIs allow arbitrary relationships to be created, including complex mappings involving stochastic or generative processes, or mapping-by-demonstration using machine learning (e.g. Fiebrink, 2011).
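To make the archetype concrete, the following minimal sketch (our own illustration, not drawn from any particular published instrument) shows a mapping layer in which two normalized sensor readings drive several synthesis parameters at once, the kind of one-to-many relationship that acoustic mechanics cannot offer but that DMI designers can specify freely.

```python
# A hypothetical sensor -> mapping -> sound-parameter chain; the scaling
# choices below are illustrative assumptions, not a prescribed design.

def map_sensors_to_synth(pressure, position):
    """One-to-many mapping: two sensor values drive four synthesis parameters."""
    pressure = max(0.0, min(1.0, pressure))   # clamp sensor readings to 0..1
    position = max(0.0, min(1.0, position))
    return {
        "amplitude":     pressure,                    # harder press -> louder
        "brightness":    0.2 + 0.8 * pressure,        # ... and a brighter timbre
        "pitch_hz":      220.0 * (2.0 ** position),   # position sweeps one octave
        "vibrato_depth": 0.05 * pressure * position,  # a coupled, nonlinear term
    }

if __name__ == "__main__":
    print(map_sensors_to_synth(pressure=0.7, position=0.5))
```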

Several taxonomies of DMIs have been proposed, with a distinction often made between instruments, which integrate the input, mapping and output into a single system, and controllers, which perform input and mapping only and leave the sound generation to an external device. The MIDI (Musical Instrument Digital Interface) standard, published in 1983, remains the dominant protocol for digital controllers, even though its keyboard-focused paradigm makes assumptions that are not valid for all music.2 Controllers can be versatile, but they lack the tactile or kinesthetic feedback of acoustic instruments (Jordà, 2004), and frequently changing mappings can be an impediment to developing expertise.
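MIDI’s note-centered assumptions are visible in its message format: a note is delimited by a three-byte note-on and a three-byte note-off, and anything the player does in between must be conveyed by separate controller messages. The byte layout below follows the MIDI 1.0 specification; the helper functions themselves are merely our illustration.

```python
def note_on(channel, note, velocity):
    """Build a 3-byte MIDI 1.0 note-on message: status byte, note number, velocity."""
    assert 0 <= channel < 16 and 0 <= note < 128 and 0 <= velocity < 128
    return bytes([0x90 | channel, note, velocity])

def note_off(channel, note, velocity=0):
    """Build the matching note-off message (status byte 0x80)."""
    assert 0 <= channel < 16 and 0 <= note < 128 and 0 <= velocity < 128
    return bytes([0x80 | channel, note, velocity])

# Middle C (note 60) on channel 1 at a moderate velocity.
print(note_on(0, 60, 100).hex())   # '903c64'
print(note_off(0, 60).hex())       # '803c00'
```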

Wanderley and Depalle (2004) classify DMIs according to their relationship to traditional musical instruments and playing techniques. Instrument-like controllers (e.g. MIDI keyboards and electronic drum pads) inherit the form and playing interface of familiar acoustic instruments while replacing the means of sound production with an electronic process. Instrument-inspired controllers borrow aspects of a familiar instrument interface but pursue a fundamentally different musical purpose. Alternate controllers do not deliberately resemble traditional instruments.

The final category in Wanderley and Depalle’s classification, extended instruments (also known as augmented instruments), comprises existing familiar instruments fitted with new sensors and actuators to extend their capabilities. The typical goal of an augmented instrument is to maintain all the capabilities and nuances of the underlying instrument while enlarging the musical vocabulary. Augmentations have been developed for most common Western instruments, including piano (McPherson, 2015), guitar (e.g. Lähdeoja, 2015), violin (Overholt, 2012) and trumpet (Thibodeau & Wanderley, 2013). Augmentations typically take two forms: sensors on the instrument that control digital audio processing or synthesis, and electromechanical actuation of the instrument’s resonating structures. Actuation is often applied to the body of the instrument through embedded loudspeakers or vibration transducers, or it can be applied to vibrating metal strings using electromagnets (Overholt, Berdahl, & Hamilton, 2011).

Human Factors

What makes a “good” instrument depends less on any particular technical specification and more on a player’s ability to make creative use of it. This section considers the relationship between player and instrument in DMI performance.

On acoustic instruments, performers commonly experience the feeling “that the musical instrument has become part of the body” (Nijs, Lesaffre, & Leman, 2009). The theory of embodied music cognition (Leman, 2008) holds that the musical instrument is a “mediation technology” between the mind and a musical environment. At expert levels, the instrument becomes transparent to the performer; the bodily operations of manipulating the instrument become automatic, so the performer’s full attention can focus on the action of creating music. In other words, the challenge of understanding performer-instrument interaction is precisely that the performer is not consciously thinking about the instrument while playing.

An open question in DMI design, aesthetically and cognitively, is the amount of control the instrument should give to the performer. It is far easier to make a computer system capable of controlling many simultaneous sonic dimensions than it is for a performer to learn to play it. Momeni and Wessel (2003) suggest that three simultaneous dimensions are the limit of what is easily controlled in real time, although many DMIs possess more degrees of freedom in practice. Adding more control dimensions, possibly even extending the instrument to control multiple media, does not necessarily improve the artistic result (Tanaka, 2000). On the contrary, the role of constraints in fostering creativity has long been noted. In one study of a simple DMI (Zappi & McPherson, 2014), a version of the instrument with two degrees of freedom resulted in less diverse usage and more negative performer feedback than an otherwise identical instrument with only a single degree of freedom.

Another important consideration is the learning curve for a new instrument. Wessel and Wright (2002) propose that the ideal instrument should have a “low entry fee with no ceiling on virtuosity,” but how one reliably achieves such an outcome remains unknown. Given the importance of extended practice in acquiring instrumental expertise, it is unlikely that shortcuts exist to performer virtuosity on a new and unfamiliar instrument. Building on existing instrumental expertise through augmented instruments or instrument-inspired controllers is one possible approach. An open challenge here is to develop strategies for maximizing musical novelty while minimizing the amount of re-learning that is required to achieve proficiency.

Cultural Considerations

A DMI’s technical design or even its relationship to the performer tells but part of the story. Broader cultural factors, including repertoire, pedagogy, and the existence of well-known virtuoso players can be crucial to the instrument’s success. For the most groundbreaking instruments, a creative tension emerges over how much to embrace or challenge established musical culture. A microcosm of such creative tensions can be found in the theremin, an early 20th-century instrument played by waving the hands in the air without touching the instrument, and its contemporary the Ondes Martenot, where pitch is controlled by either a keyboard or a sliding ring worn on the finger. Partly on account of its striking technical innovation, the theremin remains better-known in popular culture, while the Ondes Martenot maintains a regular presence in the concert hall through its championing by Olivier Messiaen and other composers.

One of the most surprising developments in popular electronic music emerged from the use of the turntable (Katz, 2010). Originally designed as a home playback device, the turntable was reimagined by hip-hop DJs as a performance tool for constructing music from repeated loops of recorded material. Many present-day digital performance tools draw on metaphors from this practice. The turntable’s emergence as a musical instrument highlights two important phenomena about technology in music. First, nearly anything can become a musical instrument in the right hands, from a washboard to a pair of spoons to an oil drum (the Caribbean steelpan). Instruments may also be used in ways their creators did not imagine, as with the growling tones and pitch bends used by jazz saxophonists, or distortion and feedback on the electric guitar. New techniques are passed from one player to the next, often through emulating role models (see Moran, and Green & Smart, this volume), such that techniques that were once unusual or even subversive quickly become widespread.

The turntable-as-instrument is also an example of how no tool or technology can be aesthetically neutral. Magnusson (2009) observes

the piano keyboard “tells us” that microtonality is of little importance . . .; the drum-sequencer that 4/4 rhythms and semiquavers are more natural than other types; and the digital audio workstation, through its affordances of copying, pasting and looping, assures us that it is perfectly normal to repeat the same short performance over and over in the same track.

(p. 171)

In this sense, observing the construction and evolution of digital musical instruments can yield insight into the artistic values and priorities of the people creating them.

Instrument or Composition?

Many DMIs maintain the familiar paradigm of acoustic instruments, where a single action produces a single sound. However, unlike acoustic instruments, some DMIs give the user control over higher-level musical structures rather than individual events. Magnusson (2009) calls such instruments “epistemic tools,” explaining that the performer can delegate part of the cognitive process to the instrument (“extensions of the mind rather than the body”). The symbolic instructions of the machine, rather than the resonating body or quality of sound production, form the musical core of these instruments. Accordingly, the performer’s interaction takes place on a symbolic rather than embodied level.

No musical instrument design can claim to be aesthetically neutral. But the aesthetic intention of the design perhaps becomes more explicit when moving to more abstract levels of musical control. Schnell and Battier (2002) discuss the idea of “composed instruments” which incorporate complex artistic processes into the instrument design. Playing such an instrument is akin to navigating a route through a particular piece, representing cooperation between composer and performer on different terms than traditional acoustic instruments. In the next section, we will consider more such composed instruments, along with other systems in which musical events are generated or shaped at a higher level of abstraction by the human musician through a computer.

Extending the Musician's Mind: Generative and Intelligent Systems

Before the advent of the first computers, when Babbage conceived of the general-purpose computing machine, Lovelace (1843) predicted that computers would one day be capable of creative intelligence, that they would be able to compose “elaborate and scientific pieces of music of any degree of complexity or extent” (p. 694). In the era of the first computers, Hiller and Isaacson (1958) composed a string quartet, the Illiac Suite, using material generated by the ILLIAC I. The same year also marked the birth of computer performance of music, when Max Mathews’ Music I program played a 17-second composition on an IBM 704.3 Thus began explorations into the digital computer as a musical instrument and as a composer (Mathews, 1963). The invention of fast computer chips and of algorithms such as FM synthesis (Chowning, 1973) made it possible to synthesize musical notes and timbres in less time than it takes to play them. Such responsiveness has enabled the design of interactive systems that can participate in live performance.
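Chowning’s technique owes its efficiency to the fact that a single sine oscillator modulating the frequency (or phase) of another produces rich, evolving spectra from only a handful of operations per sample: y(t) = A sin(2*pi*fc*t + I sin(2*pi*fm*t)). The sketch below renders a short FM tone to a WAV file; the particular carrier, modulator, and index values are arbitrary choices for illustration, not taken from Chowning’s paper.

```python
import math
import struct
import wave

def fm_tone(fc=440.0, fm=220.0, index=3.0, dur=1.0, sr=44100):
    """Two-operator FM synthesis: y(t) = sin(2*pi*fc*t + index*sin(2*pi*fm*t))."""
    samples = []
    for n in range(int(dur * sr)):
        t = n / sr
        samples.append(math.sin(2 * math.pi * fc * t +
                                index * math.sin(2 * math.pi * fm * t)))
    return samples

if __name__ == "__main__":
    sr = 44100
    data = fm_tone(sr=sr)
    with wave.open("fm_tone.wav", "w") as f:      # mono, 16-bit output file
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(sr)
        f.writeframes(b"".join(struct.pack("<h", int(32767 * 0.8 * y)) for y in data))
```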

Early notions of computational intelligence ascribed this ability only to creative systems capable of generating music. Increasingly, the ability to perform, that is, to interpret and shape scripted (or generated) or improvised music, is also recognized as musically intelligent, even creative. Intelligent systems come in many forms: from those that synthesize music from scratch to those that emulate style, from systems that generate a simple motif to those that create orchestrated pieces, from systems that can entrain to a pulse to those that can engage in ensemble dialog, from conducting systems to those that mimic performance style. This section discusses three classes of generative and intelligent systems: conductor programs, accompaniment systems, and improvisation systems. While the role of intelligent systems in composition, improvisation, and performance is surveyed elsewhere (e.g. de Mantaras & Arcos, 2002; see also Dean, this volume), the focus here is on interactive systems deployed in performance.

Conductor Programs

Machines built to control the playback of music date back to the Middle Ages, when music box-like devices were used to drive organs and harmoniums (see the review by Malinowski, 2016). Their modern descendants, known as conductor programs, extend the performer’s mind, allowing users to control expressive parameters such as tempo, timing, and dynamics.

For the discussion, we shall borrow some of Rowe’s terminology. In his seminal book on interactive music systems, Rowe (1993, Chapter 1) describes them as systems “whose behavior changes in response to musical input.” He further classifies interactive music systems along three dimensions: score-driven (aligning events to a score representation) vs. performance-driven (responding to properties of the performance); transformative (transformations applied to the input), generative (serial procedures applied to elementary material), or sequenced (playback of prerecorded fragments); and following the player paradigm (producing ensemble results) or the instrument paradigm (producing solo performances). In Rowe’s terminology, conductor programs constitute score-driven, sequenced-response, instrument paradigm systems.
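Rowe’s three dimensions can be read as independent axes, so that any interactive system occupies one point in a small three-dimensional space. The toy data structure below is our own illustration of that reading, not code from Rowe; it simply records where a conductor program sits along each axis.

```python
from dataclasses import dataclass
from enum import Enum

class Drive(Enum):
    SCORE = "score-driven"
    PERFORMANCE = "performance-driven"

class Response(Enum):
    TRANSFORMATIVE = "transformative"
    GENERATIVE = "generative"
    SEQUENCED = "sequenced"

class Paradigm(Enum):
    PLAYER = "player"
    INSTRUMENT = "instrument"

@dataclass
class InteractiveMusicSystem:
    name: str
    drive: Drive
    response: Response
    paradigm: Paradigm

# A conductor program, located in Rowe's space as described in the text.
conductor = InteractiveMusicSystem(
    "conductor program", Drive.SCORE, Response.SEQUENCED, Paradigm.INSTRUMENT)
print(conductor)
```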

Conductor programs allow performers to focus on the timing and other expressive factors without having to worry about getting the notes right. Composers may determine the pitches—the “most important musical parameter” but “the least expressive factor” (Mathews, 1991, in title)—but it is performers who shape expressive factors and control the music experience. For example, prompted by Pierre Boulez’s request for an interface to control the playback speed of tape recordings, Max Mathews invented the Radio Baton, a wireless three-dimensional controller based on technology developed by robotics engineer Bob Boie that can shape the timing and dynamics of music performance in real time. The decoupling of parameters through the use of two independent batons allows for fine expressive control, such as of individual note shapes and extraordinary time suspensions.

Traditionally, physical skill at an instrument is gained through years of practice. By removing the need for dexterity at the instrument, conductor programs allow their users to bypass these years of grind to focus early on higher-level issues of musical control and musicality. This control can be realized through known gestures or by creating new mappings. In Borchers, Lee, Samminger, and Mühlhäuser’s (2004) Personal Orchestra, the user controls the tempo, dynamics, and instrument emphasis of the video playback of a Vienna Philharmonic performance with an infrared baton, while the Air Worm by Dixon, Goebl, and Widmer (2005) uses Langner’s two-dimensional perceptual space to control tempo and loudness. To preserve expressive micro-nuances that are still difficult for machines to synthesize, the program flattens and then re-introduces tempo and loudness properties, leaving intact details such as articulation, chord asynchronies, and inter-voice dynamics. Leveraging the analogy between music and motion, Chew, François, Liu, and Yang’s (2005) Expression Synthesis Project (ESP) takes the driving metaphor for music performance literally, creating a driving interface (wheel and pedals) for controlling expressive parameters by driving along a virtual road.
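The essence of such two-dimensional control can be sketched as a mapping from a controller position to a tempo factor and a loudness level. The ranges and the exponential tempo scaling below are our own illustrative assumptions, not the published Air Worm mapping.

```python
def expression_from_xy(x, y):
    """Map a normalized 2-D controller position (x, y in 0..1) to expressive parameters.

    x -> tempo factor between 0.5x and 2x (exponential, so the midpoint is 1.0)
    y -> loudness between -30 dB and 0 dB
    """
    x = max(0.0, min(1.0, x))
    y = max(0.0, min(1.0, y))
    tempo_factor = 2.0 ** (2.0 * x - 1.0)   # 0.5 .. 2.0, centered on 1.0
    loudness_db = -30.0 + 30.0 * y          # -30 .. 0 dB
    return tempo_factor, loudness_db

# Horizontal center of the space, fairly high vertically:
# nominal tempo, moderately loud.
print(expression_from_xy(0.5, 0.7))   # (1.0, -9.0)
```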

Conductor programs’ focus on direct control over expressive nuance has an immediate impact on music pedagogy. Following the Radio Baton’s event-triggering paradigm, Schnell (2013), with the Atelier des Feuillantines, uses a chess game to enact a performance of Bach’s Goldberg Variation 18 (“Canone alla Sesta”): alternating chess moves correspond to half-bar onsets. In another game, players catching a ball trigger pizzicato chords in the orchestral accompaniment of Bach’s Concerto in F minor (second movement). Similarly, Wang’s (2014) Magic Piano uses finger taps on falling balls of light on an iPad or iPhone to trigger notes in a piece of music. These new interfaces nudge the performer’s role closer to active listening, but with relatively less physical engagement, and raise natural questions about the roles instrumental virtuosi will play in the digital age.

Accompaniment Systems

Like conductor programs, automated accompaniment systems control the playback of music, except that they do so through an instrument rather than through a digital interface. The live instrumentalist thus serves as the conductor. Automated accompaniment systems are considered intelligent because, unlike their precursor, the music-minus-one system, they can entrain to the soloist and synchronize with the live performer, as a traditional accompanist would. In Rowe’s (1993) terminology, these systems constitute score-driven, sequenced-response, player paradigm systems.

There are a number of real challenges to machines synchronizing with humans in scripted performance. In practice, note-perfect performances are rare except for the simplest pieces, and performance errors and other deviations can make the alignment of MIDI note or audio input to the score imprecise. In Vercoe’s (1984) Synthetic Performer, an instinctive listening model extracted a sense of tempo from audio and sensor input to link performer and machine. Dannenberg (1984) experimented with real-time score-matching algorithms that allowed for extra and missing notes. In Raphael’s (2010) Music Plus One (MPO) system, rehearsals allow the system to learn probable expressions, and a Kalman filter-like algorithm predicts timings for the near future. Cont’s (2008) Antescofo was created originally for Marco Stroppa’s “. . . of Silence” for saxophone and chamber electronics. Designed for synchronization and interaction between an electronic score and a live player, Antescofo allows the electronic score to embed generative sequences or transformations triggered by live events. Further systems are reviewed in Rowe (1993, Chapter 3), Cont (2004), and Dannenberg and Raphael (2006).4
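The matching problem these systems solve can be illustrated, in drastically simplified form, by an online aligner that tolerates wrong, extra, and missing notes. The published systems cited above use far more sophisticated dynamic-programming and probabilistic models; the sketch below is only a conceptual toy under those caveats.

```python
class ToyScoreFollower:
    """Greedy online matcher that tolerates extra, missing, and wrong notes."""

    def __init__(self, score_pitches, window=3):
        self.score = score_pitches   # expected MIDI pitches, in score order
        self.pos = 0                 # index of the next expected score note
        self.window = window         # how far ahead to search for a match

    def on_performed_note(self, pitch):
        # Search a small window ahead of the current position for this pitch.
        for ahead in range(self.window):
            i = self.pos + ahead
            if i < len(self.score) and self.score[i] == pitch:
                self.pos = i + 1     # score notes skipped over count as missing
                return i             # aligned score index (drives the accompaniment)
        return None                  # no match: treat as an extra or wrong note

score = [60, 62, 64, 65, 67]         # C D E F G
follower = ToyScoreFollower(score)
for played in [60, 61, 64, 67]:      # includes a wrong note (61) and omissions
    print(played, "->", follower.on_performed_note(played))
```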

Improvisation Systems

Improvisation systems generate music on the fly in collaboration with live performer(s). They represent, in Rowe’s parlance, performance-driven, player paradigm systems, and their responses can be transformative or generative. Traditional composition and improvisation both involve creating new musical material, but differ in the immediacy of the output and in the time allowed to review, reflect on, and revise decisions. When computers are involved, new material can be generated almost instantly and many options evaluated in the blink of an eye. Thus, while the capabilities may resemble those of composition, the speed with which computers produce music makes their creating more akin to improvising, blurring the line between improvisation and composition. However, because the ability to reason and make common-sense judgments still eludes even the best computer algorithms, computers still lag behind humans when it comes to the ability to reflect on and revise past decisions.

Improvising machines typically perform with one or more live musicians. In human-machine improvisation, the performance results from in-the-moment dialog between the live player(s) and the machine. Lewis’ (2000) Voyager represents one such interactive experiment. Voyager analyses the musician’s input to generate complex musical responses based on the playing as well as on rules encoding domain-specific knowledge. This is an example of an expert-system approach to intelligent music systems, in which the programmer hard-codes specific rules representing aspects of music knowledge into the system. It stands in contrast to data analytics approaches, which include machine learning, in which the system automatically derives insights from data for decision making. The logical and systematic structure of Voyager’s program leads Lewis to refer to it as a composition.

Later systems sought to emulate improvisation styles as encapsulated by note sequences, learning from human input through a data analytics approach (see also Vuust & Kringelbach, this volume). The OMax family of systems (Assayag, Bloch, Chemillier, Cont, & Dubnov, 2006) accomplishes this by reducing improvised music sequences to factor oracles, a data structure capturing repeated patterns and continuation links; traversing the resulting network recombines the patterns to generate believable sequences resembling the original input. An independent operator sets system parameters such as the sampling area of the network and the recombination rate, which controls the degree of novelty of the output. Pachet’s (2003) Continuator uses Markov models to represent style; traversing the networks of the Markov models then produces sequences having the same statistical properties as the input. Interactions with OMax and the Continuator tend to be reactive, as it is hard to predict the system’s output until it has sounded; the musician must then adapt rapidly to make the performance a success. Mimi by François, Chew, and Thurmond (2011) allows for a more reflective mode of interaction by providing visual feedback; the improviser also controls Mimi’s parameters, including when she learns, plans, plays, and starts afresh.
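The statistical intuition behind the Continuator can be conveyed with a first-order Markov model over pitches: count the transitions observed in the player’s input, then sample continuations from the resulting distribution. Pachet’s system uses richer, variable-order structures, so the sketch below is a deliberately reduced illustration rather than a reimplementation.

```python
import random
from collections import defaultdict

def train_markov(sequence):
    """Record first-order pitch-to-pitch transitions from an input phrase."""
    transitions = defaultdict(list)
    for a, b in zip(sequence, sequence[1:]):
        transitions[a].append(b)
    return transitions

def continue_phrase(transitions, start, length=8):
    """Generate a continuation sharing the input's first-order statistics."""
    out, current = [start], start
    for _ in range(length - 1):
        options = transitions.get(current)
        if not options:                          # dead end: restart from a known state
            current = random.choice(list(transitions))
        else:
            current = random.choice(options)
        out.append(current)
    return out

phrase = [60, 62, 64, 62, 60, 64, 65, 64, 62, 60]   # a short input phrase (MIDI pitches)
model = train_markov(phrase)
print(continue_phrase(model, start=phrase[-1]))
```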

Structure is widely regarded as a desirable musical trait; compositions are typically associated with form and structure, and improvisation with a lack of structure (Lewis, 2000). However, analyses of improvisations produced with Mimi showed that the system’s functions and constraints result in music following common classical forms (Schankler, Chew, & François, 2014). The creation of structure in improvised performance is further reinforced in experiments with Mimi4x (François, Schankler, & Chew, 2013), which allows users to engage directly with high-level structural improvisation, thereby taking on some of the role of the composer.

Although some improvisations have been transcribed for re-performance, the spontaneity of improvising means that most improvisations have traditionally been far removed from notation. Advances in computing technologies now allow compositions to be generated and notated on the fly for live performance. In Didkovsky’s (2004) “Zero Waste,” the pianist starts by sight-reading an initial, difficult-to-read two bars of music; each subsequent two bars are then based on the sight-reading of the previous two, like a game of “Chinese whispers.” In Hoadley’s (2012) “Calder’s Violin,” the violin part is generated and notated in real time, so that the details of the piece change with each performance. Such serendipitous and ephemeral real-time scores are starting to close the traditional gap between improvisation and composition, creating what Hajdu (2016) calls a breed of “comprovisations.”

Numerous variations on the human-machine collaborative improvisation paradigm exist. In live coding, the computer program itself is generated on the fly, and the programmer becomes both performer and composer (see Collins, McLean, Rohrhuber, & Ward, 2003; Wang & Cook, 2004). Robotic improvisation takes place when the performance of the machine-generated music is realized through an anthropomorphic robot (e.g. Weinberg & Driscoll, 2006). When, in addition, the audience is allowed to influence the generated outcomes (see Freeman, 2008), listeners also serve the function of the composer.5

Conclusions

The chapter has covered a spectrum of machine interventions in musical performance ranging from electronic and digital instruments that extend the musician’s body to generative and intelligent systems that extend the musician’s mind, showing the wealth of creative possibilities engendered by computers and electronics.

The new interaction paradigms call into question the roles of the performer, composer, improviser, and computer, and the distribution of creativity amongst the various agents involved in music making. When a live musician partners with a computer to make music, is the computer a (co-)creative agent? Is the programmer the creative agent? Is the computer a sophisticated mirror for the performer’s intentions? If some degree of intelligence is embedded in the system, is this done using an expert system approach incorporating the programmer’s music knowledge, or a data analytics approach where the computer assumes some of the roles usually associated with musical insight? If the computer is simply executing pre-coded instructions, or even if it is autonomously recognizing patterns, does that represent true creative intelligence? These are but some of the questions that have emerged in the age of musical interaction with computers.

What might the future hold? Forecasting may be hard, but one thing is for sure: Musical styles do not remain fixed over time; they respond to external forces such as the available tools for musical expression, and constantly evolve and adapt to new opportunities and situations. We may be designing digital tools to push the envelope with regard to existing genres and scenarios, but the most artistically transformative uses of digital technology may be yet to come.

Notes

1. This chapter will not examine the tools used to create electronic dance music, though this is an area of significant technical and artistic innovation; further discussion on the aesthetics of those genres can be found in Demers (2010).

2. For example, MIDI assumes that music can be divided into discrete notes, each at a particular semitone with a discrete onset and release. Continuous, independent control within each note is a persistent challenge that there have been periodic attempts to solve, the most recent of which is the Multidimensional Polyphonic Expression (MPE) extension to the MIDI standard.

3. From Max Mathews’ summary of his work in computer music for the program for the festival “Horizons in Computer Music,” held 8–9 March, 1997, at the Simon Recital Center of the School of Music, Indiana University, Bloomington, Indiana.

4. This chapter does not cover harmonization systems, which are largely composition systems although some are realized as on-the-fly systems for specific classes of music. For a review that is part of an overview of music generation systems, see Herremans, Chuan, & Chew (under review).

5. This chapter does not cover Internet performance where network delay affects both performance and composition (see Carôt, Rebelo, & Renaud, 2007) and performers can become the instrument (Hajdu, 2007).

Core Reading

Assayag, G., Bloch, G., Chemillier, M., Cont, A., & Dubnov, S. (2006). Omax brothers: A dynamic topology of agents for improvisation learning. In X. Amatriain, E. Chew, J. Foote (Eds.), Proceedings of the Workshop on Audio and Music Computing for Multimedia, Association for Computing Machinery (ACM) Multimedia (pp. 125–132). Santa Barbara, California. New York, NY: ACM.

Jensenius, A. R., & Lyons, M. (Eds.) (2017). A NIME reader—Fifteen years of new interfaces for musical expression. Berlin: Springer.

Jordà, S. (2004). Instruments and players: Some thoughts on digital lutherie. Journal of New Music Research, 33, 321–341.

Lewis, G. (2000). Too many notes: Computers, complexity and culture in Voyager. Leonardo Music Journal, 10, 33–39.

Magnusson, T. (2009). Of epistemic tools: Musical instruments as cognitive extensions. Organised Sound, 14 (2), 168–176.

Mathews, M. (1991). The Radio Baton and conductor program, or: Pitch, the most important and least expressive part of music. Computer Music Journal, 15 (4), 37–46.

Rowe, R. (1993). Interactive music systems: Machine listening and composing. Cambridge, MA: MIT Press.

Schankler, I., Chew, E., & François, A. R. J. (2014). Improvising with digital auto-scaffolding: How Mimi changes and enhances the creative process. In N. Lee (Ed.), Digital Da Vinci (pp. 99–125). Berlin: Springer Verlag.

Wanderley, M. M., & Depalle, P. (2004). Gestural control of sound synthesis. Proceedings of the Institute of Electrical and Electronics Engineers (IEEE), 92(4), 632–644.

Wessel, D., & Wright, M. (2002). Problems and prospects for intimate musical control of computers. Computer Music Journal, 26(3), 11–22.

Further References

Borchers, J., Lee, E., Samminger, W., & Mühlhäuser, M. (2004). Personal orchestra: A real-time audio/ visual system for interactive conducting. Multimedia Systems, 9(5), 458–465.

Carôt, A., Rebelo, P., & Renaud, A. (2007). Networked music performance: State of the art. In Proceedings of the Audio Engineering Society (AES) 30th International Conference: Intelligent Audio Environments (pp. 16–22). Saariselka, Finland. Audio Engineering Society.

Chadabe, J. (1997). Electric sound, the past and promise of electronic music. New York, NY: Prentice-Hall, Inc.

Chew, E., François, A. R. J., Liu, J., & Yang, A. (2005). ESP: A driving interface for expression synthesis. In Proceedings of the International Conference on New Interfaces for Musical Expression (NIME) (pp. 224–227). Vancouver, B.C.

Chowning, J. (1973). The synthesis of complex audio spectra by means of frequency modulation. Journal of the Audio Engineering Society, 21(7), 526–534.

Collins, N., McLean, A., Rohrhuber, J., & Ward, A. (2003). Live coding in laptop performance. Organised Sound, 8(3), 321–330.

Cont, A. (2004). Improvement of observation modeling for score following. PhD dissertation, Université Pierre et Marie Curie, PARIS VI, France.

Cont, A. (2008). ANTESCOFO: Anticipatory synchronization and control of interactive parameters in computer music. In Proceedings of the International Computer Music Conference (ICMC) (pp. 33–40). Belfast, Northern Ireland. International Computer Music Association.

Dannenberg, R. (1984). An on-line algorithm for real-time accompaniment. In Proceedings of the International Computer Music Conference (ICMC) (pp. 193–198). IRCAM, France. International Computer Music Association.

Dannenberg, R., & Raphael, C. (2006). Music score alignment and computer accompaniment. Communications of the Association for Computing Machinery (ACM), 49(8), 38–43.

Demers, J. (2010). Listening through the noise: The aesthetics of experimental electronic music. New York, NY: Oxford University Press.

Didkovsky, N. (2004). Recent compositions and performance instruments realized in Java Music Specification Language. In Proceedings of the International Computer Music Conference (ICMC). Miami, USA. International Computer Music Association.

Dixon, S., Goebl, W., & Widmer, G. (2005). The “air worm”: An interface for real-time manipulation of expressive music performance. In Proceedings of the International Computer Music Conference (ICMC). Barcelona, Spain. International Computer Music Association.

Emmerson, S. (2013). Living electronic music. Farnham, UK: Ashgate Publishing Ltd.

Fiebrink, R. (2011). Real-time human interaction with supervised learning algorithms for music composition and performance. PhD dissertation, Princeton University, New Jersey, USA.

François, A. R. J., Chew, E., & Thurmond, D. (2011). Performer-centered visual feedback for human-machine improvisation. Association for Computing Machinery (ACM) Computers in Entertainment, 9(3). DOI: 10.1145/2027456.2027459

François, A. R. J., Schankler, I., & Chew, E. (2013). Mimi4x: An interactive audio-visual installation for high-level structural improvisation. International Journal of Arts and Technology, 6(2), 138–151.

Freeman, J. (2008). Extreme sight-reading, mediated expression, and audience participation: Real-time music notation in live performance. Computer Music Journal, 32(3), 25–41.

Hajdu, G. (2007). Playing performers—Ideas about mediated network music performance. In Proceedings of the Music in the Global Village Conference (pp. 41–42). Budapest, Hungary.

Hajdu, G. (2016). Disposable music. Computer Music Journal, 40(1), 25–34.

Herremans, D., Chuan, C.-H., & Chew, E. (under review). A functional taxonomy of music generation systems. Association for Computing Machinery (ACM) Computing Surveys.

Hiller, L. A., & Isaacson, L. M. (1958). Musical composition with a high speed digital computer. Journal of the Audio Engineering Society, 6(3), 154–160.

Hoadley, R. (2012). Calder’s violin: Real-time notation and performance through musically expressive algorithms. In Proceedings of the International Computer Music Conference (ICMC) (pp. 188–193). Ljubljana, Slovenia. International Computer Music Association.

Hunt, A., Wanderley, M. M., & Paradis M. (2003). The importance of parameter mapping in electronic instrument design. Journal of New Music Research 32 (4), 429–440.

Katz, M. (2010). Groove music: The art and culture of the hip-hop DJ. New York, NY: Oxford University Press.

Lähdeoja, O. (2015). An augmented guitar with active acoustics. In Proceedings of the International Conference on Sound and Music Computing.

Leman, M. (2008). Embodied music cognition and mediation technology. Cambridge, MA: The MIT Press.

Lovelace, A. A. (1843). Notes by the translator of “Sketch of the Analytical Engine Invented by Charles Babbage, Esq.” by L. F. Menabrea of Turin, Officer of the Military Engineers. In R. Taylor (Ed.), Scientific Memoirs, Selected from the Foreign Academies of Science and Learned Societies and from Foreign Journals. London: Richard and John E. Taylor.

Malinowski, S. (2016). The conductor program—Computer-mediated musical performance. URL: www.musanim.com/tapper (accessed 8 August 2016).

de Mantaras, R. L., & Arcos, J. L. (2002). AI and music: From composition to expressive performance. Artificial Intelligence (AI) Magazine, 23(3), 43–57.

Mathews, M. (1963). The digital computer as a musical instrument. Science, 142, 553–557.

McPherson, A. (2015). Buttons, handles, and keys: Advances in continuous-control keyboard instruments. Computer Music Journal, 39(2), 28–46.

Miranda, E. R., & Wanderley, M. M. (2006). New digital musical instruments: Control and interaction beyond the keyboard (Vol. 21). Middleton, WI: A-R Editions.

Momeni, A., & Wessel, D. (2003). Characterizing and controlling musical material intuitively with geometric models. In Proceedings of the International Conference on New Interfaces for Musical Expression (NIME) (pp. 54–62). Montreal, Canada.

Nijs, L., Lesaffre, M., & Leman, M. (2009). The musical instrument as a natural extension of the musician. In Proceedings of the 5th Conference of Interdisciplinary Musicology (pp. 132–133). Paris: LAM-Institut Jean Le Rond d’Alembert.

Overholt, D. (2012). Advancements in violin-related human-computer interaction. International Journal of Arts and Technology, 7(2–3), 185–206.

Overholt, D., Berdahl, E., & Hamilton, R. (2011). Advancements in actuated musical instruments. Organised Sound, 16 (2), 154–165.

Pachet, F. (2003). The continuator: Musical interaction with style. Journal of New Music Research, 32, 333–341.

Raphael, C. (2010). Music Plus One and machine learning. In Proceedings of the 27th International Conference on Machine Learning (pp. 21–28). Haifa, Israel.

Schnell, N. (2013). Playing (with) sound. PhD dissertation, University of Music and Performing Arts Graz, Austria.

Schnell, N., & Battier, M. (2002). Introducing composed instruments, technical and musicological implications. In Proceedings of the International Conference on New Interfaces for Musical Expression (NIME) (pp. 1–5). Dublin, Ireland.

Tanaka, A. (2000). Musical performance practice on sensor-based instruments. In M. M. Wanderley and M. Battier (Eds.), Trends in Gestural Control of Music (pp. 389–405). Paris, France. IRCAM.

Thibodeau, J., & Wanderley, M. M. (2013). Trumpet augmentation and technological symbiosis. Computer Music Journal, 37(3), 12–25.

Vercoe, B. (1984). The synthetic performer in the context of live performance. In Proceedings of the International Computer Music Conference (ICMC) (pp. 199–200). IRCAM, France. International Computer Music Association.

Wang, G. (2014). Principles of design for computer music. In Proceedings of the International Computer Music Conference (ICMC) (pp. 391–396). Athens, Greece. International Computer Music Association.

Wang, G., & Cook, P. (2004). On-the-fly programming: Using code as an expressive musical instrument. In Proceedings of the International Conference on New Interfaces for Musical Expression (NIME) (pp. 138–143). Hamamatsu, Japan.

Weinberg, G., & Driscoll, S. (2006). Toward robotic musicianship. Computer Music Journal, 30(4), 28–45.

Zappi, V., & McPherson, A. (2014). Dimensionality and appropriation in digital musical instrument design. In Proceedings of the International Conference on New Interfaces for Musical Expression (NIME) (pp. 455–460). London, United Kingdom.