INTRODUCTION

GIVING FORM TO THE FUTURE

Designers work at the crux of accelerating technological change. We spend so much time straining to keep up that we rarely have a moment to reflect on how we got here. How has computation brought us to this point? This collection attempts to answer these questions. Our story begins in the mid-twentieth century: the 1960s.

In 1963 computer scientist Ivan Sutherland wrote a computer program called Sketchpad (also known as Robot Draftsman), through which he introduced both the graphical user interface (GUI) and object-oriented programming, proving that not only scientists but also engineers and artists could communicate with a computer and use it as a platform for thinking and making.1 In the same year computer scientist J. C. R. Licklider, director of Behavioral Sciences Command and Control Research at the Defense Department’s Advanced Research Projects Agency (ARPA), began discussing the “intergalactic computer network,” an idea that fueled ARPA research and developed into the ARPANET, an early version of the Internet. Soon thereafter, in 1964, IBM released System/360, a new family of mainframe computers capable of meeting both commercial and scientific needs. It was the first general-use computer system. Four years later engineer and inventor Douglas Engelbart, assisted by Stewart Brand, conducted the so-called “Mother of All Demos,” in which he presented the oN-Line System, a computer hardware and software system that included early versions of such fundamental computing elements as windows, hypertext, the computer mouse, word processing, video conferencing, and a collaborative real-time editor. Although mainframe computers were still inaccessible to most artists and designers in the 1960s and ’70s, the idea of computation began to inspire visual experiments. The zeitgeist of the computer was in the air.

Two key inventions for designers—and indeed for everyone—happened in the incredibly fertile period that followed: the development of the Macintosh in 1984, the first commercially successful personal computer sold with a GUI; and the creation of the Internet, used by academia in the 1980s and adopted for widespread use in the ’90s. As we entered a new millennium, these two inventions became the defining tools of designers’ practice, not just practically but also ideologically. The personal computer brought computation to the masses, while the Internet networked both minds and information on a large scale. Since the 1960s these tools have spawned technology-oriented approaches that continue to shift the foundations of our practice toward parameters rather than solutions, an aesthetics of complexity, and a culture of hacking, sharing, and improving the status quo. Now we move toward a fresh visual language, one driven not by gears and assembly lines but by connective tissues that bind the organic and the digital together.

STRUCTURING THE DIGITAL (1960–80)

During the 1960s, programmers of mainframe computers had to clearly articulate and translate a series of logical steps into the unequivocal language of the computer. They fed these steps, the “program,” into the machine using a punch card or punched tape. Artists and designers of the same period began to experiment with this idea by breaking down the creative process into set parameters and then structuring those parameters into a series of steps to be followed by either a human being or—theoretically at the time—a computer.

Manipulating a limited number of aesthetic parameters to enact a design project was not a new idea. Earlier in the twentieth century, avant-garde artists at the Bauhaus—and advocates of the New Typography movement that followed—developed the modular grid. Widespread codification and commercial application of this concept took off after World War II as designers including Josef Müller-Brockmann, Emil Ruder, Max Bill, and later Ladislav Sutnar and Karl Gerstner began to grapple with the onslaught of information thrust at them by mid-twentieth-century society. These Swiss-style designers organized information into graphic icons, diagrams, tabbed systems, and grids that a busy twentieth-century citizen could quickly comprehend. The post–World War II industrial boom demanded that they develop such efficient systems for organizing and communicating information.

Grids, in particular, supported efficiency. Along with corresponding style guides, they allowed the designer to create new layouts by selecting from a limited number of choices rather than starting from scratch each time. This constraint sped up the process, encouraging designers to translate intuitive decisions into specific parameters such as size, weight, proximity, and tension. The result was a series of visually unified designs that could accommodate a wide variety of data.
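To make the idea concrete, here is a minimal sketch of a grid as a parametric system, written in Processing (a language discussed later in this essay). The page size, column count, gutter, and vertical rhythm are hypothetical values of our own, not drawn from any designer named above.

// A modular grid as a parametric system: every layout the program produces
// is a selection from the grid's limited choices rather than a form built
// from scratch. All numeric values here are illustrative assumptions.
int cols = 4;        // number of grid columns
int gutter = 10;     // space between columns, in pixels
float colWidth;      // derived from page width, columns, and gutters

void setup() {
  size(400, 560);    // an arbitrary page proportion
  colWidth = (width - gutter * (cols + 1)) / float(cols);
  background(255);
  noStroke();
  fill(0);
  // Each block may span only a whole number of columns at a fixed vertical
  // rhythm, so variety arrives within a visually unified framework.
  for (int row = 0; row < 8; row++) {
    int span = 1 + int(random(cols));             // 1..cols columns wide
    int startCol = int(random(cols - span + 1));  // left edge sits on a column
    float x = gutter + startCol * (colWidth + gutter);
    float y = gutter + row * 66;
    rect(x, y, span * colWidth + (span - 1) * gutter, 56);
  }
}

Each run yields a different layout, yet all runs share the same underlying structure, which is precisely the efficiency the grid promised.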

In his 1964 book Designing Programmes, Gerstner translated the resulting design parameters into a logical language that, he believed, a computer could understand and then combine and recombine to create design solutions.2 The same year Italian designer Bruno Munari organized an exhibition titled Arte Programmata for the Italian information technology company Olivetti. In the exhibition catalog Munari explained that programmed art “has as its ultimate aim the production not of a single definitive and subjective image, but of a multitude of images in continual variation.” The desired end of a project was no longer a single solution but rather a series of “mutations.”3
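Gerstner’s combinatorial logic can be sketched in a few lines of Processing, under invented assumptions: the parameters and their values below are hypothetical stand-ins, not taken from Designing Programmes. The program simply enumerates every combination, echoing his claim that a computer could combine and recombine parameters into solutions.

// Enumerating a small, invented parameter space in the spirit of Gerstner's
// programmes: 3 weights x 3 sizes x 3 alignments = 27 candidate solutions.
String[] weights = { "light", "regular", "bold" };
int[] sizes = { 12, 18, 24 };
String[] alignments = { "left", "center", "right" };

void setup() {
  int count = 0;
  // walk the full combinatorial space and print each solution
  for (String w : weights) {
    for (int s : sizes) {
      for (String a : alignments) {
        count++;
        println(count + ": " + w + " / " + s + " pt / " + a);
      }
    }
  }
}

Adding a fourth parameter multiplies the solution space again, which is why a programme, once defined, generates far more variations than a designer could comfortably draw by hand.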

Many artistic movements delved into processes of input, variation, and randomization during the 1960s and ’70s: concrete art, serial art, op art, the New Tendencies movement, conceptual art. Sol LeWitt’s Wall Drawing series is one of the most familiar examples. For each drawing, LeWitt devised a set of instructions to be followed by another human. “All of the planning and decisions are made beforehand,” he explained, “and the execution is a perfunctory affair. The idea becomes a machine that makes the art.”4 In this way, the instructions are the core of the project: the algorithm. An assistant, full of his or her own subjective intuition, completes the project by following the instructions. LeWitt builds unique iterations into his system through the subjectivity of each human participant. This focus on crafting parameters and randomizing input to produce a variety of solutions—rather than just one perfect form—privileges behaviors over static relationships of form and meaning. Such behavior-oriented systems were precursors to interactive design approaches in the 1980s, the ’90s, and beyond.
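The logic of such a wall drawing can likewise be sketched in Processing, with random() standing in for the assistant’s subjective choices. The instruction set here is invented for illustration and is not an actual LeWitt piece: “draw fifty straight lines of arbitrary length and angle, each beginning where the previous one ended.”

// The instructions are the algorithm; randomness plays the role of the
// assistant's intuition, so every execution is a unique iteration.
void setup() {
  size(500, 500);
  background(255);
  stroke(0);
  float x = width / 2.0;
  float y = height / 2.0;
  for (int i = 0; i < 50; i++) {
    float angle = random(TWO_PI);   // "subjective" choice of direction
    float len = random(20, 120);    // "subjective" choice of length
    float nx = constrain(x + cos(angle) * len, 0, width);
    float ny = constrain(y + sin(angle) * len, 0, height);
    line(x, y, nx, ny);
    x = nx;                         // the next line begins here
    y = ny;
  }
}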

Alongside these process-oriented artistic movements, the counterculture exploded in the United States during the 1960s and ’70s, questioning traditional modes of authority over such sweeping political issues as civil rights, the Vietnam War, feminism, and the environment. Proponents began to envision what society could become through social engineering. Stewart Brand’s Whole Earth Catalog, part magazine, part product catalog, was a nexus of the counterculture and technologists. The catalog advocated “access to tools” as an avenue for sustainability and individual freedom, pushing readers to hack and tinker their way beyond the reach of “the Man.”5 The appearance of the catalog and the DIY mentality it advocated fed into a broader cultural attitude toward the computer as an impetus for peer-to-peer communication, nonhierarchical power structures, freedom of information, and personal empowerment.6 These concepts took on greater significance in the subsequent decades as they became embedded in the collaborative, open-source software development culture that started to influence the creative process of many graphic designers.


P. SCOTT MAKELA AND LAURIE HAYCOCK MAKELA
Spread from Whereishere, 1998. A collaboration with writer Lewis Blackwell, Whereishere expressed the multimedia frenzy spreading through the design world in the 1990s. At the time they were writing, the Makelas were resident co-chairs of 2-D design at Cranbrook Academy of Art.

RESISTING CENTRAL PROCESSING (1980–2000)

Once personal computers entered the creative arena in the mid-1980s, artists and designers could finally get their hands on actual machines. The greater art and design scene began to embrace aesthetic complexity. Poststructuralist theories of openness and instability of meaning permeated graphic design, and the modernist focus on streamlined, objective forms wavered. New Wave in Los Angeles, the postmodern experiments led by Katherine McCoy and P. Scott and Laurie Haycock Makela at Cranbrook Academy of Art, and David Carson’s work for Ray Gun magazine saw the objective, efficient forms of modernism give way to complex, layered aesthetics that asked users to determine the message for themselves. Graphic designers began to engage with technology to construct rich visual worlds through active exchanges with users.

The year of the first Macintosh also brought the first mass-market laser printer: the HP LaserJet. Together these two tools of 1984 started to destabilize mass production and its corresponding design methodologies, which had emerged in the late 1800s and early 1900s, the decades following the Industrial Revolution, when mass production divorced design from manufacturing. Under these conditions the expense, and therefore the risk, of a project fell on the production stage. For that reason designers pored over each precise detail of a project before releasing their ideas to professional printers and manufacturers.7 The weighty expense of labor and materials pressured graphic forms into streamlined, efficient, standardized units. The early-twentieth-century mass-production model thus determined both the typical design process and the resulting aesthetic. In the 1980s, however, designers such as Sharon Poggenpohl and Muriel Cooper recognized that emerging technologies could provide an escape from these restrictions.

As director of the Visual Language Workshop at the Massachusetts Institute of Technology (MIT), Cooper urged her students to hack and tinker with production equipment—at first offset printers, later photocopiers, laser printers, and computers. What happens, she wondered, when production is put back into the hands of the designer? What happens when communication is no longer “controlled, centralized” for distribution to mass audiences?8 Cooper saw computers as a liberating force that would empower creatives to work more collaboratively and intuitively. Emerging technologies would free designers to iterate and test their work more easily, an integrated work style she considered more akin to the intuitive inquiries of the sciences. Cooper’s ideas later flourished in the work of cultural theorists such as Yochai Benkler, Henry Jenkins, and Pierre Lévy.

Both inside and outside the professional design world, the desktop publishing industry thrived during this period. Despite fears of professional redundancy, many writers and designers reveled in their ability to put together layouts on the computer and then produce them on desktop printers. Rudy VanderLans and Zuzana Licko epitomized this movement with the launch of Emigre Fonts and the popular magazine Emigre.9 Licko designed typefaces directly on the Mac for immediate application by VanderLans in the latest issue of Emigre. For a long time designers had been restricted by expensive type foundries and typesetters, so the immediacy of computer-aided production captured the imagination of type designers, in particular.

A typographic renaissance resulted, including the creation of a bevy of radical digital typefaces as well as explorations of mutating form that built upon the algorithmic approaches of the 1960s. In 1989 Just van Rossum and Erik van Blokland, collaborating as LettError, began experimenting with “programming-assisted design” and released their RandomFont typeface Beowolf. Using radical PostScript technology, they set parameters and then asked the computer to randomly vary those parameters.10 Such experiments resulted in aesthetic forms that had not been practical prior to the existence of personal computers. Complexity no longer equated with expense. Large production runs were no longer needed to justify setup costs. Laser printers joined with computation to make one-off forms economically feasible.
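The Beowolf principle can be mimicked in a short Processing sketch, though LettError worked inside PostScript itself rather than in an environment like this. The glyph outline below is an invented placeholder; the point is that the designer fixes the outline and a jitter range, and the computer varies the points every time the form is rendered.

// A RandomFont-style experiment, mimicked rather than reproduced: fixed
// outline points are displaced within a set range on every frame, so no
// two renderings of the "glyph" are identical. The outline is invented.
float[][] outline = { {50, 250}, {150, 50}, {250, 250}, {200, 250},
                      {150, 150}, {100, 250} };
float jitter = 8;   // the parameter the designer sets; the computer varies it

void setup() {
  size(300, 300);
  frameRate(2);     // slowed so each random variation is visible
}

void draw() {
  background(255);
  fill(0);
  noStroke();
  beginShape();
  for (float[] pt : outline) {
    // each point is displaced randomly within +/- jitter at render time
    vertex(pt[0] + random(-jitter, jitter), pt[1] + random(-jitter, jitter));
  }
  endShape(CLOSE);
}

Because the displacement happens at render time rather than design time, the variation is built into the form itself, which is what made such typefaces feel so unlike fixed-master type.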

Many creatives took on the mantle of designer/programmer in the 1990s. These inquisitive souls believed that if software shaped their creative process and aesthetics, then to truly pursue their creative path, they had to build their own computational tools. John Maeda, director of the MIT Media Lab Aesthetics and Computation Group (ACG) from 1996 to 2003, inspired a generation of such designers/programmers, including Casey Reas, Ben Fry, Golan Levin, Peter Cho, and Reed Kram. In 1999 Maeda released his book Design by Numbers, in which he insists that computation is a unique medium, akin to pure thought, “because it is the only medium where the material and the process for shaping the material coexist in the same entity: numbers.”11 Maeda advocates for artists’ and designers’ direct engagement with raw computation and attempts through his Design by Numbers project to make the medium more accessible.

Inspired by Maeda’s work, Casey Reas and Ben Fry went on to release Processing, an open-source language and environment, in 2001. Their language realizes the dream of a computing environment attainable by visual thinkers. Processing gave creatives access to a programming language, encouraging users to build their own tools and develop an aesthetics only possible through computation. Open-source development, which provides free access to the source code of computer programs, fed a large portion of the Processing project. Communities of artists and programmers pooled resources and knowledge to make the powerful tool freely available to all.12 The project exemplifies a twenty-first-century shift in working style from individual and small team–based creative efforts to distributed, network-based projects in which unrelated individuals work together across time and space. Such efforts bring to fruition the egalitarian “Access to Tools” concept Brand propagated with the Whole Earth Catalog and other endeavors in the previous century. The culture of software development was permeating the creative methods of the design world.13
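For readers who have not seen Processing, a minimal sketch conveys why the environment appealed to visual thinkers: a handful of lines yields a generative, cumulative drawing. This particular sketch is our own illustration, not an example by Reas and Fry.

// A few lines of Processing produce an accumulating generative drawing:
// each frame adds one translucent line whose endpoints scatter around the
// mouse, so the form emerges from simple rules plus the user's gesture.
void setup() {
  size(600, 400);
  background(255);
  stroke(0, 40);   // translucent strokes build up density over time
}

void draw() {
  line(mouseX + random(-50, 50), mouseY + random(-50, 50),
       mouseX + random(-50, 50), mouseY + random(-50, 50));
}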

ENCODING THE FUTURE (2000 TO PRESENT)

In the early 1990s, the Internet spread beyond academia and into everyday people’s lives. The personal computer morphed into a large networked mind through which creatives could think, make, collaborate, and distribute. Users commonly experienced content through active engagement online: pressing a button, scrolling down a page, uploading content, customizing interfaces. Interactivity took over.

The new millennium saw social media magnify the shareability of content. Designers built upon their discipline’s understanding of systems thinking—which had been so popular in the 1960s—to create parameters for rich, welcoming environments. Such environments—whether a website, a digital publication, a game, or an app—scaffold user experience. Behavior trumps visually appealing fixed formats. As Khoi Vinh notes in “Conversations with the Network,” “[I]n this new world designers are critical not so much for the transmission of message but for the crafting of the spaces within which those messages can be borne.”14 Monologues morph into conversations. Users actively participate in designs through a many-to-many communication model rather than passively receiving one-to-many broadcast messages.

Hugh Dubberly, co-creator of Apple’s well-known technology-forecast film of 1987, Knowledge Navigator, asserts that we are moving from a “mechanical-object ethos” to an “organic-systems ethos.” He points out that in contrast to the rigid mechanical brain of the last century, we now describe our computer networks in flexible biological terms, such as “bugs, viruses, attacks, communities, social capital, trust, identity.” The modernist design methodology of the 1900s coalesced around reducing complex, chaotic information into simple, orderly forms by forcing materials and layouts into streamlined, efficient designs of our choosing. In the current century, Dubberly emphasizes, the massive increase in computer-processing power has enabled us to look instead to biology as a model for growing complex systems out of simple elements.15

Paola Antonelli, senior curator of art and design and director of research and development at the Museum of Modern Art (MoMA), considers biomimicry and nanotechnology to be natural steps in the move toward organic, systems-based work. She explains: “Nanotechnology, in particular, offers the promise of the principle of self-assembly and self-organization that one can find in cells, molecules, and galaxies; the idea that you would need only to give the components of an object a little push for the object to come together and reorganize in different configurations.”16 We are moving beyond twentieth-century systems thinking into a period in which we frame systems that can evolve on their own. This change in process—simple to complex rather than complex to simple—is only possible through the processing power of computation and the connectivity underpinned by the Internet.

Emergent behavior, a topic long discussed in computer science circles, has become a buzzword of the design disciplines. In the 2000s, creatives including Luna Maurer, Edo Paulus, Jonathan Puckey, and Roel Wouters of the collective Conditional Design expressed their desire to produce work appropriate to the now, exhibiting a passion akin to that of the avant-garde. They build upon the work of other generative designers, including Karsten Schmidt and Michael Schmitz, to delve purposefully into processes. Through a combination of rigorous process, logic, and organic input from “nature, society, and its human interactions,” Conditional Design hopes to identify emergent patterns.17 In such work, the ideology of John Conway’s cellular automaton, the famous Game of Life, combines with algorithmic design thinking and making to physically and digitally produce artifacts of unexpected behavior.18
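Since the text leans on Conway’s Game of Life, a compact Processing version of it is worth showing: each cell counts its eight neighbors, survives with two or three, and is born with exactly three, and coherent global patterns emerge from these purely local rules. The grid dimensions, cell size, and random seeding below are arbitrary choices of ours.

// Conway's Game of Life: local counting rules, global emergent patterns.
int cell = 8;
int cols, rows;
int[][] grid;

void setup() {
  size(480, 480);
  cols = width / cell;
  rows = height / cell;
  grid = new int[cols][rows];
  for (int x = 0; x < cols; x++)
    for (int y = 0; y < rows; y++)
      grid[x][y] = random(1) < 0.25 ? 1 : 0;  // arbitrary random seed
  frameRate(10);
}

void draw() {
  background(255);
  int[][] next = new int[cols][rows];
  for (int x = 0; x < cols; x++) {
    for (int y = 0; y < rows; y++) {
      int n = neighbors(x, y);
      // the two rules: survival with 2 or 3 neighbors, birth with exactly 3
      next[x][y] = (grid[x][y] == 1) ? ((n == 2 || n == 3) ? 1 : 0)
                                     : ((n == 3) ? 1 : 0);
      if (grid[x][y] == 1) {
        fill(0);
        noStroke();
        rect(x * cell, y * cell, cell, cell);
      }
    }
  }
  grid = next;
}

int neighbors(int x, int y) {
  int count = 0;
  for (int dx = -1; dx <= 1; dx++)
    for (int dy = -1; dy <= 1; dy++) {
      if (dx == 0 && dy == 0) continue;
      int nx = (x + dx + cols) % cols;  // wrap around the edges
      int ny = (y + dy + rows) % rows;
      count += grid[nx][ny];
    }
  return count;
}

Gliders, blinkers, and still lifes appear without ever being specified, which is the sense of “emergence” that generative designers borrow from the automaton.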

The Internet of Things (IoT), also referred to as “ubiquitous” or “pervasive computing,” currently inspires fresh design directions as well. We see inklings of a world beyond the screen as the objects around us slowly come to life through networks of embedded sensors. Virtual reality pioneer Brenda Laurel envisions ubiquitous computing as a way to become more closely connected to biosystems, deepening our knowledge so that we might behave more responsibly.19 Embedding computation in the environment provides clear opportunities for engaging more fully with the human body and mind, thereby escaping from what developer Bret Victor sarcastically refers to as “pictures under glass.”20

Futurists such as Hans Moravec and Ray Kurzweil see pervasive connectivity as a step in the evolution of transhuman intelligence: the technological singularity. Kurzweil predicts that around 2045 we will be forced to merge with intelligent machines—becoming a hybrid of biological and nonbiological intelligence—to keep up with the accelerating pace of change.21 With such forecasts in mind, interaction experience designer Haakon Faste, in an essay written especially for this volume, urges designers to reexamine what it means to be human, and by doing so take a long, hard look at how our practice could affect this looming vision of a society predicated on intelligence beyond the bounds of biological evolution.

Biomimicry, nanotechnology, emergent behavior, ubiquitous computing, and the specter of the transhuman: this is the designer’s current environment of practice. There is no going back. In the face of exponential technological growth, we have changed our process. We prototype, iterate, and respond instantly to user participation. Our methodology now mimics that of software developers as we release early and often. Influenced by open-source models of collaborative making and peer-to-peer production, we hack, think, make, and improve our discipline, a discipline vibrantly embedded within, rather than set apart from, everyday life. To quote Keetra Dean Dixon, designers today “walk the line between knowing and not knowing.”22 After all, isn’t giving form to the yet-to-exist what designers do best?

The spelling and formatting of essay footnotes in this collection appear as they did in the original essays, except for some minor spelling changes for consistency. Please note that all original footnotes appear in black while additions by the author appear in red.

1 Ivan E. Sutherland, “The Ultimate Display,” Proceedings of the IFIP Conference (1965), 506–8.

2 Karl Gerstner, Designing Programmes (New York: Hastings House, 1964), 21–23.

3 Bruno Munari, Arte programmata. Arte cinetica. Opere moltiplicate. Opera aperta. (Milan: Olivetti Company, 1964).

4 Sol LeWitt, “Paragraphs on Conceptual Art,” Artforum 5, no. 10 (1967): 79–83.

5 Stewart Brand, The Updated Last Whole Earth Catalog: Access to Tools (New York: Random House, 1974).

6 See Fred Turner, From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism (Chicago: University of Chicago Press, 2006).

7 See Hugh Dubberly’s discussion of manufacturing versus software development in “Design in the Age of Biology: Shifting from a Mechanical-Object Ethos to an Organic-Systems Ethos,” Interactions 15, no. 5 (2008): 35–41.

8 Muriel Cooper, “Computers and Design,” Design Quarterly 142 (1989): 4–31.

9 To read more about Emigre, see Rudy VanderLans and Zuzana Licko, Emigre: Graphic Design into the Digital Realm (New York: Van Nostrand Reinhold, 1993).

10 Just van Rossum and Erik van Blokland, “Is Best Really Better,” Emigre 18 (1990): n.p.

11 John Maeda, Design by Numbers (Cambridge: MIT Press, 1999).

12 Open-source development was made possible by resistance to traditional twentieth-century copyright, which prevents programmers from sharing resources: activist Richard Stallman’s free software movement, founded in 1983; the copyleft movement, which began around the same period; and activist Lawrence Lessig’s Creative Commons licenses.

13 To learn more about how collaborative-making models influenced contemporary development models, see Eric S. Raymond, The Cathedral and the Bazaar, ed. Tim O’Reilly (Sebastopol, CA: O’Reilly & Associates, 1999).

14 Khoi Vinh, “Conversations with the Network,” in Talk to Me: Design and Communication Between People and Objects (New York: Museum of Modern Art, 2011), 128–31.

15 Hugh Dubberly, “Design in the Age of Biology: Shifting from a Mechanical-Object Ethos to an Organic-Systems Ethos,” Interactions 15, no. 5 (2008): 35–41.

16 Paola Antonelli, “Design and the Elastic Mind,” in Design and the Elastic Mind (New York: Museum of Modern Art, 2008), 19–24.

17 Luna Maurer, Edo Paulus, Jonathan Puckey, and Roel Wouters, “Conditional Design Manifesto,” Conditional Design, April 3, 2015, http://conditionaldesign.org/manifesto.

18 British mathematician John Horton Conway developed the cellular automaton called Game of Life in 1970. Conway’s game is often cited in discussions of emergence and self-organization.

19 Brenda Laurel, “Designed Animism,” in (Re)Searching the Digital Bauhaus (New York: Springer, 2009), 251–74.

20 Bret Victor, “A Brief Rant on the Future of Interaction Design,” Worrydream.com, April 3, 2015, http://worrydream.com/#!/ABriefRantOnTheFutureOfInteractionDesign.

21 Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology (New York: Viking, 2005). Hans Moravec, “Robots, After All,” Communications of the ACM 46, no. 10 (2003): 90–97.

22 Keetra Dean Dixon, “A Little Knowledge and Other Minor Daredeviling” (presentation, TYPO San Francisco, April 12, 2013).