19   François Pachet and His Computers That Improvise and Compose Songs

How do you compose a compelling song, a song that remains in your head and the heads of millions of people?

—François Pachet40

When I ask François Pachet what the future holds for him and his research, he replies, “I will stop working the day I take a taxi and hear a song composed by myself with my technology.”41

Pachet is one of the pioneers of computer music using AI, and particularly machine improvisation. He is a scientist, composer, and director of the Spotify Creator Technology Research Lab in Paris.

Pachet started as a musician, studying at the prestigious École Normale de Musique de Paris, where he specialized in the guitar. He went on to study jazz and improvisation at the Berklee College of Music in Boston. “Jazz and improvisation are like a sport,” he says. “It’s an activity that keeps your mind in shape.”42 He likens improvisation to a conversation in which one person starts a sentence and another finishes it.

Pachet’s second love was engineering, which he studied at the Université Pierre et Marie Curie in Paris, where he was an assistant professor of artificial intelligence until 1997. It was only when he moved to Sony Computer Science Laboratory, also in Paris, that he was finally able to combine his two loves and establish a team that explored AI and music.

The conundrum Pachet set out to solve was how to improvise with a computer and have it respond in the same style. For this he decided to use Markov models, based on Markov chains. To recap, a Markov model predicts the most probable next note from the current note alone (or from a short window of recent notes), ignoring everything that came earlier in the piece. Such models are at the core of Pachet’s work.
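Pachet’s own systems are far more elaborate, but the core idea can be sketched in a few lines of Python. The toy below simply counts, in a training melody, how often each note follows another, then samples the next note in proportion to those counts; the note names and the training melody are purely illustrative, not anything from Pachet’s work.

    import random
    from collections import defaultdict

    def train_markov(melody):
        """Count how often each note follows another in a training melody."""
        transitions = defaultdict(lambda: defaultdict(int))
        for current, following in zip(melody, melody[1:]):
            transitions[current][following] += 1
        return transitions

    def next_note(transitions, current, rng=random):
        """Sample the next note in proportion to how often it followed `current`."""
        candidates = transitions.get(current)
        if not candidates:  # a note we never saw: fall back to any known note
            candidates = {note: 1 for note in transitions}
        notes, counts = zip(*candidates.items())
        return rng.choices(notes, weights=counts, k=1)[0]

    # Illustrative training melody and a short generated continuation.
    melody = ["C", "D", "E", "C", "E", "G", "E", "D", "C", "D", "E", "E", "D", "C"]
    model = train_markov(melody)
    note, generated = "C", ["C"]
    for _ in range(8):
        note = next_note(model, note)
        generated.append(note)
    print(generated)

Run it a few times and the continuation changes, but it stays within the note-to-note habits of the training melody, which is all a simple first-order model of this kind can capture.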

Pachet finds Markov models simpler and easier to control than neural networks. Neural nets, he says, can’t compute fast enough for real-time improvisation, whereas Markov models, although less powerful, are more creative, particularly when a machine is improvising with a musician.43

In 2003, Pachet developed a system that can improvise with a musician. He called it the Continuator.44 The musician starts by playing a few phrases on a MIDI-equipped piano, which transmits the notes to the Continuator. Then he sits back. In a flash, the Continuator’s learning module learns the input note by note, breaks it into phrases, then sends each phrase in turn to a phrase analyzer to pull out the patterns and form the database. The machine then responds to the pianist with some phrases of its own. Thus the system instantaneously learns to play in the same style as the musician, without needing to be programmed to do so. It immediately responds to the riffs the pianist has just played with a new musical phrase, fulfilling Pachet’s view of improvisation as conversation. Sometimes the performer is amazed at the machine’s response.45
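The Continuator itself builds a more sophisticated, variable-length model of the phrases it hears; the details are in Pachet’s papers. As a rough sketch only, reusing the train_markov and next_note helpers (and the imports) from the block above, the two steps might look something like this: cut the incoming stream into phrases wherever the player pauses, accumulate the note-to-note counts across those phrases, and answer with a new phrase that picks up from the last note played. The pause threshold and the reply length are arbitrary choices for this sketch.

    def split_into_phrases(notes, gaps_before, pause=0.5):
        """Cut a stream of notes into phrases wherever the silence before a
        note exceeds `pause` seconds (an arbitrary threshold for this sketch)."""
        phrases, current = [], []
        for note, gap in zip(notes, gaps_before):
            if gap > pause and current:
                phrases.append(current)
                current = []
            current.append(note)
        if current:
            phrases.append(current)
        return phrases

    def learn(phrases):
        """Accumulate note-to-note counts across every phrase heard so far."""
        transitions = defaultdict(lambda: defaultdict(int))
        for phrase in phrases:
            for current, following in zip(phrase, phrase[1:]):
                transitions[current][following] += 1
        return transitions

    def respond(transitions, last_note, length=6):
        """Answer with a new phrase that picks up from the player's last note."""
        phrase, note = [], last_note
        for _ in range(length):
            note = next_note(transitions, note)
            phrase.append(note)
        return phrase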

Pachet built in constraints so that the Continuator could respond to unexpected riffs or chord changes that it hadn’t learned. One way of doing this was to weight the points at which the system had to choose among possible continuations, taking into account how well each option matched the piano accompaniment, rather than strictly following the Markov chain.
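How the Continuator actually balances its learned transitions against the accompaniment is Pachet’s own design; the following is only one hypothetical way to picture such a weighting, again building on the helpers above. Each candidate’s count is multiplied by a bonus when the note belongs to the chord currently being played, and the choice is then sampled from the reweighted counts. The chord set and the size of the bonus are invented for illustration.

    def harmonic_next_note(transitions, current, chord_notes, bonus=4.0, rng=random):
        """Bias the Markov choice toward notes that fit the accompaniment.
        `chord_notes` is the set of pitches in the current chord; `bonus` is an
        illustrative knob for how strongly chord tones are favored."""
        candidates = transitions.get(current) or {note: 1 for note in transitions}
        notes, weights = [], []
        for note, count in candidates.items():
            notes.append(note)
            weights.append(count * (bonus if note in chord_notes else 1.0))
        return rng.choices(notes, weights=weights, k=1)[0]

    # e.g. over a C major chord in the accompaniment:
    # harmonic_next_note(model, "D", chord_notes={"C", "E", "G"})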

He then set up a musical Turing test to assess the Continuator’s success, asking two music critics to decide when the pianist was playing and when the machine had taken over. In most cases, they couldn’t differentiate between the pianist and the machine.

The Continuator can also be programmed to accompany a pianist, selecting the most harmonious chords from a database of chord sequences.

For Pachet, however, the Continuator was a first step. He feels that deep neural networks are useful in well-defined areas such as facial recognition, medical diagnosis, and games like Go, but less successful in the artistic domain, in which problems are less well defined, such as the question of what makes a good song. The big challenge he wanted to confront was how to use AI not just to respond to human musicians but to compose music.

The Flow Machine

To do this, Pachet developed the Flow Machine. He was inspired by the American psychologist Mihaly Csikszentmihalyi’s concept of “flow,” the state of mind when a person is so completely absorbed in the activity they’re engaged in that time and place seem to fall away.46 They lose track of where they are and how much time has gone by. For it to occur, there has to be a perfect balance between the challenge of the task and the person’s skills—not too difficult and not too easy.

For Pachet, high-level creators respond to their challenges by developing their own unique style, which immediately identifies their work. The Flow Machine allows composers to confront challenges by increasing their skills so that a new style emerges.47 It makes music alongside musicians, providing them with a new way to create. The Flow Machine’s database is made of musical scores written on lead sheets, standard manuscript pages for music but with the notes digitized. Musicians can write music directly onto a blank lead sheet in the same way that one writes on a word processor and hear it played back by a computer.

The Flow Machine uses Markov models to analyze the sequences of notes and phrases in its database and identifies the patterns, then writes new music on lead sheets, following the same style. The problem, however, was to find Markov models that could handle long sequences of notes, unlike the Continuator, which could only deal with short-term sequences.

To do so required a little mathematical magic. Pachet is the magus of Markov models.48 He took the Markov chains and manipulated the mathematics to introduce limitations so he could control the structure of a score while optimizing the search for how one note succeeds the next. He called these limitations Markov constraints.49

Markov constraints identify patterns. The computer calculates the probability of certain chord progressions, melodic sequences, and rhythms and uses these probabilities to generate new plausible variations. In this way, Pachet can retain the style of a piece of music while manipulating it as a computational object. He can rearrange a piece of music in a given style, selecting the notes and harmonies to produce, say, a bossa nova orchestration of Beethoven’s “Ode to Joy” or a random pattern of notes played in the style of the Beatles.
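Pachet’s Markov constraints rest on far more general constraint-solving machinery and can impose much richer conditions than this, but the simplest case gives the flavor. The toy function below, reusing the transition table from the earlier sketches, forces a generated phrase of a given length to end on a chosen note: it first discards, position by position, every choice from which that ending could no longer be reached, and only then samples as usual.

    def constrained_phrase(transitions, start, end, length, rng=random):
        """Sample `length` notes from the Markov model, forced to begin on
        `start` and finish on `end`.  A toy version of a Markov constraint:
        rule out every note from which the required ending can no longer be
        reached, then sample forward among the choices that remain."""
        # allowed[i] = notes permitted at position i
        allowed = [set() for _ in range(length)]
        allowed[-1] = {end}
        for i in range(length - 2, -1, -1):
            allowed[i] = {note for note, nexts in transitions.items()
                          if any(n in allowed[i + 1] for n in nexts)}
        if start not in allowed[0]:
            raise ValueError("no phrase of this length can reach the required ending")
        phrase, note = [start], start
        for i in range(1, length):
            options = {n: c for n, c in transitions[note].items() if n in allowed[i]}
            notes, counts = zip(*options.items())
            note = rng.choices(notes, weights=counts, k=1)[0]
            phrase.append(note)
        return phrase

    # e.g. an eight-note phrase that must come back to C:
    # constrained_phrase(model, start="C", end="C", length=8)

Ending a phrase on a chosen note is about the simplest constraint imaginable; the same filter-then-sample idea extends, in spirit, to the chord progressions, rhythms, and larger structures described above.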

A fundamental aim is to relate creativity to style. For Pachet, style is what separates great composers, writers, and painters from everyone else—and people develop their style by imitating the style of others.50

Benoît Carré, who has written songs for Johnny Hallyday and Françoise Hardy and is the artistic director of the Flow Machine project, showed me how the Flow Machine works—a fascinating process.51 The Flow Machine has thirteen thousand songs in its database, mainly jazz, Western popular music, and Brazilian music, based on harmonized melodies that can be reduced to notes and chords. You start by choosing a tempo, a time signature, and some notes, then select a style, such as American songwriters, including songs by George Gershwin and Duke Ellington. The machine proceeds to create a melody. Once it has done so, you can edit the composition.

In September 2016, the Flow Machine unveiled its first recording, the first ever complete song composed by AI. “Daddy’s Car” is a cheerful, upbeat little ditty based on a selection of Beatles tunes, with the lyrics composed by Carré. At this stage, all the machine could do was put together the basic tune; Carré provided all the flourishes: the harmonies, the instrumentation, and the lyrics.

Pachet is fond of the phrase “stylistic cryogenics”—fixing style in a time warp.52 In the future, he muses, by using algorithms like his constrained Markov models you will be able to resurrect Bach and play in his style or create variations on Bach and Mozart that those composers would have approved of. Both fervently believed that their music ought not to be frozen in time but should reflect the era in which it was performed.

Says Pachet, “Music has to be created by humans because machines cannot discern between good and very good. Humans have intentions, while machines cannot curate what they do.” Of Project Magenta’s simple melody, he agrees that “machines can produce music of their own volition,” but, he adds: “The problem is that they do not generate very interesting stuff.”53

Eck, the head of Project Magenta, has great respect for Pachet and his work. He says that at present Pachet is doing more interesting work than he is, but that Pachet’s models “bring a lot more structure to the table.”54 Whereas Project Magenta’s neural networks learn to create music from data without being explicitly programmed, Pachet relies on elaborately engineered software.

In 2017, Pachet moved from Sony to Spotify, where he directs the Spotify Creator Technology Research Lab. He sees the mathematical part of his odyssey as nearly over and the experimental as about to begin. At the beginning of 2018, Pachet, along with the French musical collective SKYGGE led by Carré, released an album entitled Hello World. This is the first multi-artist music album composed by musicians using AI tools. A group of musicians, including recent sensations Kiesza and Stromae, descended on the lab, took control of the Flow Machine tools, and generated fifteen songs. The album’s title signifies AI stepping out of the lab to greet the world. The music is at times otherworldly, as are the lyrics. Thus one goal of AI has been achieved: scientific research has morphed effortlessly into an album of music.

One reviewer wrote, “I was worried the songs would sound too robotic, but I was pleasantly surprised.”55 Indeed they don’t. But would it matter if they did?

“For me creativity is pretty much a social thing, not an objective thing, especially in music,” says Pachet. “Society will decide whether someone is creative or not.”
