8   Blaise Agüera y Arcas Brings Together Artists and Machine Intelligence

When we do art with machines I don’t think there is a very strict boundary between what is human and what is machine. We have a cyborg nature that is fundamental to the whole enterprise of being human.

—Blaise Agüera y Arcas24

Inspired by the popularity and importance of DeepDream, Blaise Agüera y Arcas, with Mike Tyka’s help, formed Artists and Machine Intelligence (AMI) at Google in Seattle, with the goal of fusing machine intelligence with art. The program includes engineers on site who wish to pursue an interest in art and technology, and it also gives grants and residencies to artists to work with machine learning. The organizers prefer artists who already know about machine learning, but they are also prepared to pair artists with engineers.

Agüera y Arcas is currently head of Google’s Machine Intelligence Group in Seattle and is a star in Google’s firmament. He sports designer stubble and de rigueur t-shirt, and he speaks with an unassuming air on physics, computer science, AI, philosophy, history, and art. He has a keen interest in music theory, especially Bach, and is a good cello player to boot. “I don’t think of myself as a professional in anything,” he tells me.25

Like many people interested in emergent technology, Agüera y Arcas was active for a while in the demoscene, an international computer art subculture that produces demos, self-contained computer programs that power flashy audiovisual presentations, showing off programming, visual art, and musical skills.

Agüera y Arcas studied physics at Princeton, then did a master’s in applied mathematics. In 2003, he started Seadragon Software, a company that developed software enabling viewers to browse high-resolution images for use in mapping, visual books, and telemedicine. In 2006, Microsoft bought Seadragon, and Agüera y Arcas was given the position of distinguished engineer. Then, in 2013, he was headhunted and joined Google.

The current head of AMI is Kenric McDowell, who focuses on “spinning narratives around machine intelligence and how we can relate to it.”26 Like many people at Google, McDowell has an eclectic background. He began programming at the age of nine, majored in classical music composition at San Francisco State University, then switched to the Conceptual Information Arts program, in which he learned software for installations and game development. Then he changed directions again and completed an MFA in photography at Bard College in New York State. After a twelve-year stint in advertising, McDowell went to work for Google doing speculative design, which brought him back to his original dream of combining art and technology.

When DeepDream went viral, Agüera y Arcas assembled a few people at Google who he knew were interested in art and had a background in it. “Let’s do something cool and interesting,” McDowell recalls him saying.27 McDowell was in a position to play a key role because he had an MFA and had shown his work in an art exhibition in New York. He advised against stepping forward and saying, “Hey, we’re at Google and doing art.” “I was afraid that some people might get offended and not take us seriously. There’s an established dialog that exists outside the tech world,” he says. Google had to signal that it had a serious engagement with art.

First the group contacted artists they thought might want to get involved. Then, on February 26, 2016, they held an auction of artworks at the Gray Area Foundation for the Arts, a nonprofit organization supporting art and technology, in the heart of San Francisco’s Mission District. The exhibition was called DeepDream: The Art of Neural Networks. Over eight hundred people attended, including many curious geek hipsters eager to see the new art. Mike Tyka curated. Agüera y Arcas set the stage with an electrifying oration: “In addition to being a really cool art show, this is also the inaugural event of a collaboration that we are launching at Google between scientists, researchers, engineers, artists, thinkers—that we are calling ‘artists and machine intelligence.’ This is really a beginning and a seed and something that we hope is going to be going on for a long time.”28 He added that “some artists will embrace machine intelligence as a new medium or partner, while others will continue using today’s media and modes of production,” and stated his own belief that machine-generated art is the new avant-garde, one that will transform society, our understanding of the world we live in, and our place in it.

He continued: “Like the invention of applied pigments, the printing press, photography, and computers, we believe machine intelligence is an innovation that will profoundly affect art. As with these earlier innovations, it will ultimately transform society in ways that are hard to imagine from today’s vantage point; in the nearer term, it will expand our understanding of both external reality and our perceptual and cognitive processes.”29

He gave as an example Hans Holbein’s 1533 painting The Ambassadors, with its famous distorted image of the human skull—the anamorphic skull. Holbein undoubtedly used mirrors to project the skull’s image onto the canvas before tracing the outline. In his talk, Agüera y Arcas also reviewed the history of photography and the public resistance to its being a form of art, through to David Hockney’s work today and his fruitful collaboration with scientists.

“We’re witnessing a time of convergences,” he continued, “not just across disciplines, but between brains and computers; between scientists trying to understand and technologists trying to make; and between academia and industry. We don’t believe the convergence will yield a monoculture, but a vibrant hybridity.”30 In short, “We are fundamentally technological beings.”31

The exhibition of DeepDream art was a huge success. Several Silicon Valley barons attended, including Clay Bavor, the head of Google’s virtual reality project. Twenty-nine artworks were sold, including four of Tyka’s, raising almost $100,000 for the Gray Area Foundation. As Wired magazine’s Cade Metz put it, “It was also a night to reflect on the rapid and increasing rise of artificial intelligence. Technology has now reached the point where neural networks are not only driving the Google search engine, but spitting out art for which some people will pay serious money.”32

The following June, the group held a symposium called Music, Art, and Machine Intelligence. This was a joint meeting of Google’s AMI, headed by Agüera y Arcas, and Google’s Project Magenta, headed by Douglas Eck, which explores the role of machine learning in creating art, literature, and music. Twenty-nine presenters spoke to an audience of eighty about their research on machine-generated art, literature, and music. To heighten the drama, each presentation lasted precisely ten minutes.

An up-and-coming artist, Anna Ridler, attended and was bowled over and inspired by the luminaries she met at the symposium. “It was art like you’ve never seen before,” she says.33 She saw a new field opening up, “including a lot of people who thought philosophically about their fields.”

Despite its success, the event was a one-off. McDowell and his fellow curators decided that “setting up art exhibitions is not Google’s strength.”34 They decided it was better to “support other artists that have an interesting approach to technology” in a way that allowed them to use AI in their own way, rather than have Google “frame the conversation.”35 The new scheme allows for a few artists in residence, as well as financial and technical support for artists working on their own.

Memo Akten Educates a Neural Network

#DeepDream is blowing my mind.

—Memo Akten36

The first work to be snapped up at the Gray Area art exhibit was Memo Akten’s extraordinary image of GCHQ as “seen” by a machine (figure 8.1).37

Figure 8.1

All Watched Over by Machines of Loving Grace: DeepDream Edition, 2015.

Government Communications Headquarters (GCHQ) in Cheltenham, deep in the English countryside, is an all-seeing eye that scours the ether for intelligence signals on behalf of security organizations in the United Kingdom. Akten is wary of its powers. DeepDream offered an excellent way to subvert it. The finished work is, he wrote, “an artificial hallucination seeded by a satellite view from Google Maps, and reimagined through deep neural networks developed by Google.”

He wrote in a blog post, “#DeepDream is blowing my mind.”38 It is, after all, a neural network that has been trained on certain information that makes up its entire base of knowledge, which makes it a lot deeper than something that merely produces hallucinations. “When you show it something new,” like an undefined image of a cloud, “it tries to make sense of what it’s seeing in terms of what it already knows, which is”—and this is the crux of the matter—“how we make sense of the world.” Perception is in the brain, whether it be our brain or that of a deep neural network. “I find that really poetic,” says Akten.
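Akten’s description matches DeepDream’s core mechanism: start from a seed image and repeatedly nudge its pixels so that a layer of a trained network responds more strongly, so the network amplifies whatever it already “knows” in the image. Here is a minimal, purely illustrative sketch of that gradient-ascent loop, with a fixed random linear map standing in for a trained layer (a real DeepDream run uses a layer of a trained convolutional network; none of this is Google’s actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "layer": a fixed random linear map standing in for one layer of a
# trained network. Its weights encode what the network "already knows."
W = rng.normal(size=(64, 256))

def activation(img):
    """Mean squared response of the toy layer: the quantity DeepDream amplifies."""
    a = W @ img
    return float(np.mean(a ** 2))

def deepdream_step(img, lr=0.01):
    """One gradient-ascent step on the image itself.

    Gradient of mean((W @ img)^2) w.r.t. img is (2 / n_units) * W.T @ (W @ img).
    The weights stay frozen; only the pixels move.
    """
    grad = (2.0 / W.shape[0]) * (W.T @ (W @ img))
    grad /= np.abs(grad).mean() + 1e-8  # normalize the step, as DeepDream does
    return img + lr * grad

img = rng.normal(size=256)  # stand-in for the seed image (Akten's "cloud")
before = activation(img)
for _ in range(50):
    img = deepdream_step(img)
after = activation(img)
print(before < after)  # the layer's response grows as its patterns are amplified
```

The essential point the sketch preserves is that the network’s weights are never changed: DeepDream alters the input image until the trained layer “sees” more of what it was trained on.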

Memo Akten’s dress reflects his Turkish origins. With his moustache and beard, long hair, necklaces, bracelets, earrings, and brightly colored clothing, at times he takes on the aura of a pirate of the Caribbean. He began programming at the age of ten and took, as he puts it, the typical Turkish educational trajectory—into engineering.

He studied civil engineering, but never used it. Instead, he hankered to make sci-fi films, like those of Ridley Scott and Stanley Kubrick. He had no access to a camera, but he “didn’t need one,” he says. “I had computers.”39 Besides English, Turkish, and French, he also had the Pascal programming language “as a way to express [himself].”40

Then he discovered Oskar Fischinger, the great German-born nonfigurative animator whose works the Nazis condemned as “degenerate art.” Fischinger emigrated to the United States, where his talent was recognized by Paramount Pictures and Walt Disney. Akten’s interests also included the music of Steve Reich, John Cage, and Edgard Varèse.

He recalls that when he was growing up, he thought that art was just painting and sculpture. “I didn’t realize I was doing art already while messing around on my computer.”41 He is now working on a PhD at Goldsmiths, University of London, trying to create semiautonomous intelligent creative systems that musicians can collaborate with. But Akten is not someone to tie himself down to one research topic. He is interested in a deeper understanding of what happens when machines store knowledge about the data they are trained on—knowledge that is essentially a collection of numbers. “So what is that knowledge? That,” he says, “is what my actual PhD is about.”

In 2017, at Ars Electronica, the giant electronic art exhibition in Linz, Austria, he showed a piece he called Learning to See: Hello, World!42 This is one of an ongoing series of works that explore how neural networks come alive as they begin to observe the world around them through the data they are fed, much as babies do when they first move about and absorb images. Computer scientists refer to a machine awakening as a “hello, world” moment. In his piece, Akten trains a deep neural network on images from a video camera pointed at himself. As the neural network is fed more and more images, it begins to recognize a pattern—Akten’s face. Then Akten moves, and the machine has to start all over again. Soon it can recognize his face from any angle.43
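The behavior described above, a model that locks onto one pose, is thrown off when the sitter moves, and then gradually re-adapts, is the signature of online learning. A toy sketch of that dynamic, with an exponential moving average standing in for Akten’s deep network (which this does not attempt to reproduce) and synthetic vectors standing in for camera frames:

```python
import numpy as np

rng = np.random.default_rng(1)

def frame(scene):
    """A noisy 'camera frame' of a fixed underlying scene."""
    return scene + 0.1 * rng.normal(size=scene.shape)

def online_update(model, x, lr=0.05):
    """One online learning step: pull the model toward the latest frame."""
    return model + lr * (x - model)

scene_a = rng.normal(size=64)  # the sitter in pose A
scene_b = rng.normal(size=64)  # the sitter after moving (pose B)

model = np.zeros(64)
for _ in range(200):           # phase 1: the model learns pose A frame by frame
    model = online_update(model, frame(scene_a))
err_a = float(np.mean((model - scene_a) ** 2))          # small: pose A learned

err_on_move = float(np.mean((model - scene_b) ** 2))    # large: sitter moved

for _ in range(200):           # phase 2: the model re-adapts to pose B
    model = online_update(model, frame(scene_b))
err_b = float(np.mean((model - scene_b) ** 2))          # small again
```

The spike in error when the scene changes, followed by recovery as new frames stream in, is the “start all over again” moment the passage describes.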

“By saying a machine can be creative you are not anthropomorphising the machine,” Akten asserts, “but liberating it by expanding the term ‘creativity’ to go beyond humans.” Creativity is not limited to people. “I’m a biological machine,” he continues. “Humans can create art. Why not machines?”

Notes