2012: Artificial Acoustic Agencies
There are these other forms of life, artificial ones, that want to come into existence. And they are using me as a vehicle for its reproduction and its implementation.
—Chris Langton, Artificial Life: An Overview (1997)
At an elementary scale of the sensual mathematics of sonic warfare, digi-sonic matter is marked by the granular texture of microsampled sound. Another question of sonic digitality and power, operating on the higher level of morphological mutation, is occupied with evolutionary algorithms and cellular automata. Computers have upgraded what it means to be both a musician and a military strategist.1 Yet the celebration of “decontrol” (setting up rule-based systems and letting them do all the work) and the simulation and modeling advantages these offer have a flip side. Picture, for a moment, a convergence between preemptive capital future-casting the desires of consumers, the acoustic intimacy of either directional audio spotlights or iPods, personalized targeting by retinal scans or implanted chips, and adaptive Muzak systems running generative, randomizable algorithms. Here the experience of the shopping mall takes on a particularly predatory disposition. Artificial acoustic agencies or audio viruses would track your movements, continuously modulating your behavior with suggestions, mood enhancements, memory triggers, and reassurances. To be effective, the algorithms of these adaptive systems would have to traverse code, hardware, and the wetware of the body, the digital and the analog. But how would this mode of sensual mathematics work?
As well as new textures that enhance sound’s sensual contagiousness, digitization has, through generative music software based on cellular automata and genetic algorithms applied to music, injected vibrations with a contagious mathematical dimension, giving them an agency all of their own to evolve, mutate, and spread. These sonic algorithms, or artificial acoustic agencies, are abstract machines—sets of rules that have become independent of their specific physical embodiments, thereby intensifying their powers of transmission, replication, and proliferation. Key musical processes are distilled to formalized equations that are generalizable and transferable. Algorithmic or generative music, whether analog or digital, claims to develop bottom-up approaches to composition. As Nyman points out, such approaches understand the context of composition and production systemically and are “concerned with actions dependent on unpredictable conditions and on variables which arise from within the musical continuity.”2 Examples from the history of experimental music can be found in the oft-cited investigations of rule-centered sonic composition processes in the exploration of randomness and chance. Think, for example, of Cage’s use of the I Ching, Terry Riley’s “In C,” Steve Reich’s “It’s Gonna Rain” and “Come Out,” Cornelius Cardew’s “The Great Learning,” Christian Wolff’s “Burdocks,” Frederic Rzewski’s “Spacecraft,” and Alvin Lucier’s “Vespers.”3
More recent approaches centering on the digital domain make use of software programs such as SuperCollider, Max/MSP, Pure Data, Reaktor, CAMUS, Vox Populi, and Harmony Seeker. In addition to the sonic simulations drawn from chaos physics, recent generative sound design projects also draw from evolutionary biology, in particular, artificial life research. These deploy mathematical algorithms to simulate the conditions and dynamics of growth, complexity, emergence, and mutation, and they apply evolution to musical parameters. Placing these experiments in digital sound design in the historical context of earlier experiments with, for example, out-of-phase tape recorders, it becomes clear, Eshun argues, that tape loops already constituted “social software organized to maximise the emergence of unanticipated musical matter.” He continues that the “ideas of additive synthesis, loop structure, iteration and duplication are pre-digital. Far from new, the loop as sonic process predates the computer by decades. Synthesis precedes digitality by centuries.”4 While generative music predates the digital, once on computers, these sonic agencies assume some of the powers of computer viruses to evolve, mutate, and spread. How do these virulent algorithmic forms function?
According to Miranda, software models for evolutionary sound generation tend to be based on engines constructed around cellular automata or genetic algorithms.5 Instead of messy biochemical labs deployed to probe the makeup of chemicals, cells, and so forth, these sonic evolutions take place in the artificial worlds of the CPU, hard disk, computer screen, and speakers. Specifically, the scientific paradigm of artificial life marks a shift from a preoccupation with the composition of matter to the systemic interactions between components out of which nature is under constant construction. Alife uses computers to simulate the functions of these interactions as patterns of information, investigating the global behaviors that arise from a multitude of local conjunctions and interactions. In addition to cellular automata and genetic algorithms, other Alife techniques for analyzing emergent complexity include adaptive games and neural networks. The application of biological patterns of information has been taken up within robotics, the social sciences, humanities, and, most pertinent here, musicology.6
The analysis of digital algorithms within the cultural domain of music is not limited to composition and creation. Recent Darwinian evolutionary musicology has attempted to simulate the conditions for the emergence and evolution of music styles as shifting ecologies of rules or conventions for music making. These ecologies, it is claimed, while sustaining their organization, are also subject to change and constant adaptation to the dynamic cultural environment. The suggestion in such studies is that the simulation of complexity usually found within biological systems may illuminate some of the more cryptic dynamics of musical systems.7 Here, music is understood as an adaptive system of sounds made use of by distributed agents (the members of some kind of collective; in this type of model, typically none of the agents would have access to the others’ knowledge except what they hear) engaged in a sonic group encounter, whether as producers or listeners. Such a system would have no global supervision. Typical applications within this musicological context attempt to map the conditions of emergence for the origin and evolution of music cultures modeled as “artificially created worlds inhabited by virtual communities of musicians and listeners. Origins and evolution are studied here in the context of the cultural conventions that may emerge under a number of constraints, for example, psychological, physiological and ecological.”8 Miranda, despite issuing a cautionary note on the limitations of using biological models for the study of cultural phenomena, suggests that the results of such simulations may be of interest to composers keen to unearth new creation techniques, and asserts that Alife should join acoustics, psychoacoustics, and artificial intelligence in the armory of the scientifically upgraded musician.9
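To make such a model concrete, consider a drastically simplified Python sketch of the kind of simulation these studies describe, in which agents exchange tunes only by hearing one another. The number of agents, the abstract scale, and the hearing-error rate below are illustrative assumptions, not the parameters of any published model:

```python
import random

# Distributed agents with no global supervision: each agent knows
# nothing of the others' repertoires except what it "hears," yet
# shared tunes can emerge from accumulated local encounters.

SCALE = list(range(8))          # eight abstract pitches

def random_tune():
    return tuple(random.choice(SCALE) for _ in range(4))

# Each agent starts with a private repertoire of one random tune.
agents = [{random_tune()} for _ in range(10)]

for _ in range(2000):
    singer, listener = random.sample(range(len(agents)), 2)
    tune = random.choice(sorted(agents[singer]))
    if random.random() < 0.1:   # imperfect hearing mutates one note
        tune = list(tune)
        tune[random.randrange(len(tune))] = random.choice(SCALE)
        tune = tuple(tune)
    agents[listener].add(tune)  # the heard tune joins the repertoire

shared = set.intersection(*agents)
print(f"{len(shared)} tune(s) now known to every agent")
```

Nothing here steers the population as a whole; whatever conventions the group ends up sharing are an artifact of repeated local encounters, which is the sense in which such systems lack global supervision.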
The two most common tools used by these technically enhanced musicians are cellular automata and genetic algorithms. Cellular automata were invented in the 1940s by John von Neumann and Stanislaw Ulam as simulations of biological self-reproduction. Such models attempted to explain how an abstract machine could construct a copy of itself automatically. Cellular automata are commonly implemented as an ordered array or grid of variables termed cells. Each component cell of this matrix can be assigned values from a limited set of integers, and each value usually corresponds to a color. On screen, a functioning cellular automaton is a mutating matrix of cells that edges forward in time at variable speed. The mutation of the pattern, while displaying some kind of global organization, is generated only through the implementation of a very limited system of rules that govern locally.
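The mechanics are easy to see in miniature. The following Python sketch implements a one-dimensional cellular automaton; the rule number, grid width, and display characters are arbitrary illustrative choices:

```python
# A minimal one-dimensional cellular automaton: each cell holds a
# value from a small set of integers ({0, 1} here), and the next
# generation is computed purely from local neighborhood rules.

RULE = 90  # any value 0-255 selects a different local rule table

def step(cells):
    """Compute the next generation from left/self/right neighbors."""
    n = len(cells)
    return [(RULE >> ((cells[(i - 1) % n] << 2)
                      | (cells[i] << 1)
                      | cells[(i + 1) % n])) & 1
            for i in range(n)]

# Seed a single live cell and watch globally organized patterns
# emerge from rules that only ever govern locally.
row = [0] * 31
row[15] = 1
for _ in range(16):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```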
Their most famous instantiation is John Conway’s Game of Life, as taken up within the domain of generative music by Brian Eno. The focus of such generative music revolves around the emergent behavior of sonic life forms arising from their local neighborhood interactions, where no global tendencies are preprogrammed into the system. In the software system CAMUS, based on Conway’s model, the emergent behaviors of cellular automata are transposed into musical notation.
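A rough sense of how such a transposition might work can be given in a few lines of Python: one generation of Conway’s Game of Life, with each live cell read off as a chord. The interval mapping below is a loose illustration inspired by descriptions of CAMUS, not the actual CAMUS specification:

```python
# One Game of Life generation plus a CAMUS-inspired reading of the
# grid: each live cell at (x, y) becomes a three-note event whose two
# stacked intervals are taken from its coordinates. The base pitch
# and interval scheme are illustrative assumptions.

def life_step(grid):
    """Apply Conway's birth/survival rules on a toroidal grid."""
    rows, cols = len(grid), len(grid[0])
    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            n = sum(grid[(r + dr) % rows][(c + dc) % cols]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            nxt[r][c] = 1 if n == 3 or (grid[r][c] and n == 2) else 0
    return nxt

BASE = 60  # middle C (MIDI note number), an arbitrary anchor

def grid_to_chords(grid):
    """Transpose live cells into chords: (base, base+x, base+x+y)."""
    return [(BASE, BASE + x, BASE + x + y)
            for y, row in enumerate(grid)
            for x, alive in enumerate(row) if alive]

# A glider evolving over a few generations, each rendered as chords.
grid = [[0] * 8 for _ in range(8)]
for x, y in [(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)]:
    grid[y][x] = 1
for gen in range(4):
    print(gen, grid_to_chords(grid))
    grid = life_step(grid)
```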
As in the case of cellular automata and artificial neural networks, models based around genetic algorithms also transpose a number of abstract models from biology, in particular the basic evolutionary biological processes identified by Darwin and updated by Dawkins.10 These algorithms are often used to obtain and test optimal design or engineering results out of a wide range of combinatorial possibilities. Simulations so derived allow evolutionary systems to be iteratively modeled in the digital domain without the inefficiency and impracticality of more concrete trial-and-error methods. But as Miranda points out, by abstracting from Darwinian processes such as natural selection based on fitness, crossover of genes, and mutation, “genetic algorithms go beyond standard combinatorial processing as they embody powerful mechanisms for targeting only potentially fruitful combinations.”11 In practice, genetic algorithms will usually be deployed iteratively (repeated until fitness tests are satisfied) on a set of binary codes that constitute the individuals in the population. Often this population of code will be randomly generated and can stand in for anything, such as musical notes. This obviously presupposes some kind of codification schema involved in transposing the evolutionary dynamic into sonic notation, which, as Miranda points out, will usually seek to adopt the smallest possible “coding alphabet.” Typically, each digit or cluster of digits will be cross-linked to a sonic quality such as pitch, or to specific preset instruments, as is typical in MIDI.
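As a hedged illustration of one such codification schema, the following Python sketch decodes randomly generated binary individuals into melodies; the four-bit gene size, the particular MIDI pitch alphabet, and the population and melody lengths are assumptions made for the example:

```python
import random

# Individuals are binary strings; every four-bit gene indexes a pitch
# in a small "coding alphabet" of MIDI note numbers.

GENE_BITS = 4
PITCH_ALPHABET = list(range(60, 76))  # sixteen MIDI pitches, C4 up

def random_individual(n_notes):
    """A randomly generated bit string encoding n_notes pitches."""
    return [random.randint(0, 1) for _ in range(n_notes * GENE_BITS)]

def decode(individual):
    """Cross-link each four-bit gene to a pitch in the alphabet."""
    melody = []
    for i in range(0, len(individual), GENE_BITS):
        gene = individual[i:i + GENE_BITS]
        melody.append(PITCH_ALPHABET[int("".join(map(str, gene)), 2)])
    return melody

population = [random_individual(8) for _ in range(20)]
print(decode(population[0]))  # e.g. [62, 75, 60, 68, 71, 63, 66, 74]
```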
This deployment consists of three fundamental operations that in evolutionary terms are known as recombination (trading information between a pair of codes to spawn offspring that combine the “parental” codes), mutation (adjusting the numerical values of bits in the code, thereby adding diversity to the population), and selection (choosing the optimal codes based on predetermined, precoded fitness criteria or on subjective or aesthetic criteria); a sketch of all three operations follows the quotation below. One example of the application of genetic algorithms in music composition is Gary Lee Nelson’s 1995 project, Sonomorphs, which used
genetic algorithms to evolve rhythmic patterns. In this case, the binary-string method is used to represent a series of equally spaced pulses whereby a note is articulated if the bit is switched on ... and rests are made if the bit is switched off. The fitness test is based on a simple summing test; if the number of bits that are on is higher than a certain threshold, then the string meets the fitness test. High threshold values lead to rhythms with very high density up to the point where nearly all the pulses are switched on. Conversely, lower threshold settings tend to produce thinner textures, leading to complete silence.12
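The following Python sketch reconstructs a Sonomorphs-style run from this description alone; the population size, mutation rate, and tournament selection scheme are assumptions made for the sketch, not Nelson’s actual parameters:

```python
import random

# Bit strings are rhythmic patterns (1 = note, 0 = rest); fitness is
# simply the number of "on" bits measured against a threshold.

PULSES, THRESHOLD = 16, 12

def fitness(pattern):
    return sum(pattern)  # the "simple summing test"

def select(population):
    """Selection: favor the denser (fitter) of two random patterns."""
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def recombine(p1, p2):
    """Recombination: one-point crossover combining parental codes."""
    cut = random.randrange(1, PULSES)
    return p1[:cut] + p2[cut:]

def mutate(pattern, rate=0.05):
    """Mutation: flip bits occasionally, adding diversity."""
    return [1 - b if random.random() < rate else b for b in pattern]

population = [[random.randint(0, 1) for _ in range(PULSES)]
              for _ in range(30)]
generation = 0
while max(fitness(p) for p in population) < THRESHOLD:
    population = [mutate(recombine(select(population), select(population)))
                  for _ in range(len(population))]
    generation += 1
best = max(population, key=fitness)
print(generation, "".join("x" if b else "." for b in best))
```

As the quotation notes, raising the threshold toward the total number of pulses drives the evolved patterns toward saturation, while lowering it thins the texture out.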
This research intersection between artificial life and evolutionary music usually culminates, when fleshed out, in prototypes of artificial acoustic agencies composed of voice synthesizers, a hearing apparatus, a memory device, and a cognitive module as host to the algorithms.13 Algorithmic patterns or sets of rules derived from processes of biological evolution are transcoded into digital information that serves as instructions for sound software. The activation of these rules may produce emergences analogous to biological phenomena such as evolution and mutation.
All of these projects hint at the unpredictable digital contagion, mutation, and proliferation of vibration through code. They sketch an initial outline of the nonhuman agency of artificial acoustic entities. Kodwo Eshun describes these algorithmic agencies as UAOs (unidentified audio objects).14 The UAO is a kind of mutant acousmatic or schizophonic vector, a contagious pulse of experience without origin. For Eshun, a UAO is “an event that disguises itself as music, using other media as a Trojan horse to infiltrate the landscape with disguised elements of timeliness and atopia.”15
But what happens when these viral audio forms leak out of the digital sound lab, beyond the quarantined spaces of sound art, and find themselves physical host bodies? Picture, for example, an unholy alliance between sonic branding and the digital sound design of generative music—a situation in which music could respond and mutate in order to preempt the movements and desires of consumers. What if these artificial sonic agencies became parasitic, feeding off your habits and quirks, always one step ahead, modulating your needs? Can this predatory urbanism of responsive, anticipatory branding environments within the surround sound of ubiquitous music media itself be preempted by an approach tuned to both the digital and analog contagiousness of sound, or audio viruses? The algorithmic contagion of generative music would be only one aspect of the sensual mathematics monitored by an audio virology. Tracking algorithms across the auditory mnemonics of populations, these unidentified audio objects can already be found infesting the sonic ecologies of capitalism.