4 The Participatory Turn That Never Was

Digital mass customization may well have been one of the most important architectural inventions of all time: originally intended to change the way we manufacture physical objects of daily use (teapots, tables, buildings), the technical logic and the culture of mass customization have already changed—or at least subverted, upended, and disrupted—almost every aspect of the world in which we live. Regardless of these epoch-making consequences, however, the most conspicuous avatar of the first digital turn in architecture and design was the creation of a new architectural style—the new digital style of smooth and curving, “spliny” lines and surfaces. This style, now called parametricism,1 continues to this day, with ever-increasing degrees of technical mastery and prowess: ideas and forms that twenty years ago were championed by a handful of digital pioneers today engender architectural masterpieces at a gigantic, almost planetary scale. Yet, inherent in the very same technical definition of digital mass customization is a crucial authorial conundrum, which from the start has accompanied and challenged the rise of the parametric worldview. Any parametric notation contains by definition an infinite number of variations (one for each value of a given parameter). Who is going to design them all? Who is going to choose the best among so many options? Early in the new millennium, with the rise of the participatory spirit of the so-called Web 2.0,2 many thought that collaboration and interactivity could be the answer: the customer, client, or any other stakeholder would be called on to intervene and participate in the design process, to “customize,” or co-design, the end product within the framework and limits set by the designers or administrators of the system.3 At the time of writing (2016), it already seems safe to conclude that this transition from mass customization to mass collaboration didn’t happen in architecture and design. But while the design professions mostly rejected this option, digitally driven mass customization has taken over the world.

4.1 The New Digital Science of the Many

Francis Galton was a late-Victorian polymath, half-cousin of Charles Darwin, a descendant of Quakers, and, among other things, one of the fathers of eugenics and the inventor of both the weather map and the scientific classification of fingerprints. In one of his last writings he also tried to account for a statistical quirk he had stumbled upon: in some cases, it seems possible to infer an unknown but verifiable quantity simply by asking the largest possible number of random observers to guess it, and then calculating the average of their answers. In the case studied by Galton, a group of farmers tried to estimate the weight of an ox put up for auction at a country fair, and the arithmetical mean of all answers came closer to the actual weight than any individual guess. In modern statistical terms, the accuracy of the average tends to increase with the number of opinions expressed, regardless of the expertise of, or the specific information available to, any of the observers.

Galton’s experiment suggests that, if there is a way to gather the knowledge of many, a group may end up knowing more than even the most knowledgeable of its members; and if this is true for the weight of an ox displayed at a country fair, this may also apply to more complex cases, including queries with no known answers. Possibly due to his eugenic creed (and the related belief, still shared by many, that sometimes the opinion of some should count more than the majority’s), Galton failed to remark that his theory of collective intelligence validates the most radical interpretations of the principle of universal suffrage: the more people vote, and the more votes are counted, the better the decisions a democratic community can make, even on subjects about which most voters know nothing at all. In the case of elections, the technology needed to tap the wisdom of crowds is an easy one, and relatively time-tested: each person should simply cast a vote (by a show of hands in a public assembly or, in absentia, by ballot, mail, etc.).
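Galton’s experiment is easy to restage in a few lines of code. The sketch below is a minimal simulation, not a reconstruction of his data: the 1,198-pound dressed weight is the figure reported in Galton’s 1907 note in Nature, but the number and spread of the simulated guesses are assumptions made here for illustration. The statistical content of Galton’s quirk is that the crowd’s error shrinks roughly with the square root of the number of guessers.

```python
import random
import statistics

TRUE_WEIGHT = 1198  # pounds: the dressed weight reported in Galton's 1907 note

def crowd_estimate(n_guessers, spread=150.0):
    """Average n independent guesses, each a noisy reading of the true weight."""
    guesses = [random.gauss(TRUE_WEIGHT, spread) for _ in range(n_guessers)]
    return statistics.mean(guesses)

for n in (1, 10, 100, 1000, 10000):
    estimate = crowd_estimate(n)
    print(f"{n:>5} guessers -> estimate {estimate:7.1f}, "
          f"error {abs(estimate - TRUE_WEIGHT):6.1f}")
```

No single guesser gets more accurate as the crowd grows; only the average does, which is why the expertise of any one observer matters so little.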

The classical theory of the marketplace offers another case in point, which Galton similarly failed to take into account: market prices are determined by the fluctuations of supply and demand, and they are more reliable when more market-makers can more easily and freely interact with one another. In Adam Smith’s classic formulation, this interaction (“the invisible hand of the market”) transforms the individual interests of each into the best decisions for all (in this instance, the price that will best allocate limited resources in a free market economy). To help buyers and sellers meet, the world of commerce has always been quick to adopt all kinds of new information and communication technologies, but the spirit of the game has not changed significantly since the beginning of time: traders reflect and express all the information they have in the prices they agree to, and the crying out of bids on an exchange (or any other technologically mediated version of the same) is the market’s classic tool to garner the knowledge of many—from as many sources as possible.4

For the last twenty years or so, computers and the Internet have offered unprecedented possibilities for gathering and exploiting the wisdom of crowds, and—as vividly recounted in a recent best-seller by the New Yorker journalist James Surowiecki—in more recent times Galton’s curious experiment has become a common reference for adepts of the so-called Web 2.0, or the participatory web.5 Before the economic meltdown of the fall of 2008, many also thought that the newly deregulated financial markets, enhanced by digital technologies, had attained an ideal state of almost perfect efficiency, where the frequency and speed of limitless and frictionless transactions would make market valuations more reliable than ever before.6 Later events have proven that those new digital markets were no less fallible than all previous ones, but in other instances digital technologies may have made better use of Galton’s theory. The success of Google’s search engine is famously due less to its power of information retrieval—any computer can do that—than to the way Google ranks its findings. These are now increasingly personalized (adapted to the customer’s profile, web history, or location, for example), but the original PageRank algorithm7 prioritized Google’s search results based on the sheer quantity and quality of links between Internet (HTML) pages. As in the scholarly system of footnote cross-referencing, which is said to have inspired the PageRank algorithm, these links were first established by the authors themselves, then added to by many others; hence Google’s celebrated claim to use “the collective intelligence of the Web to determine a page’s importance.”8
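The core of the original PageRank idea fits in a short script. The following is a toy sketch, not Google’s production algorithm: the four-page link graph is invented, and the damping factor of 0.85 is the value customarily cited in the PageRank literature. A page’s rank is fed by the ranks of the pages that link to it, so a link counts for more when it comes from a page that is itself well linked: the “collective intelligence of the Web” reduced to a plain power iteration.

```python
links = {          # page -> pages it links to (a made-up four-page web)
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["A", "C"],
}

def pagerank(links, damping=0.85, iterations=50):
    """Power iteration over the link graph (no dangling pages in this toy)."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            share = rank[page] / len(outgoing)   # a page splits its rank
            for target in outgoing:              # among the pages it cites
                new_rank[target] += damping * share
        rank = new_rank
    return rank

for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))
```

In this tiny web, page C comes out on top: it is cited by three of the four pages, including the well-cited A, just as a much-footnoted scholarly text accrues authority.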

Taken to the limit, the apparently banal combination of search and ranking has staggering epistemic implications, which have been discussed in the preceding chapters: in a world where all events are recorded and retrievable, the search for an exact precedent may better predict future events than an analytic calculation of consequences deduced from general causal laws, rules, or formulas. Indeed, in many cases the search for a social precedent (rather than for a material one, as seen in chapter 2, section 2.4) has already replaced the traditional reliance on the rules or laws of a discipline: for example, when we choose a linguistic expression or syntagm based on the number of its Google hits, we trust the wisdom of crowds instead of the rules of grammar and syntax. Of course, the rules of grammar and syntax are themselves born out of the authority of precedent, as for the most part they formalize and generalize the regularities embedded in the collective or literary use of a language—a process that in the case of living languages unfolds over time and continues forever (the most notable exception being the invention ex nihilo of the rules of classical Latin in the Renaissance). But today a simple Google search on an incommensurably vast corpus of textual sources can effectively short-circuit the laborious scientific process of the constitution of the rules of a language, thus making all traditional sciences of language unnecessary. Not by science, but by search, we can draw on the collective intelligence of a group, be apprised of the frequency of an event (in this instance, a linguistic occurrence within a community of speakers), and act accordingly.
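A minimal sketch of this procedure of search instead of grammar follows; the corpus and the two candidate phrasings are made-up stand-ins for the web and for a real linguistic doubt. The more frequent variant wins, and no grammatical rule is ever consulted.

```python
# A made-up corpus standing in for the indexed web.
corpus = ("different from what we expected, different from the rest, "
          "different than usual, different from the original")

# Two competing phrasings; precedent, not grammar, decides between them.
candidates = ["different from", "different than"]
counts = {phrase: corpus.count(phrase) for phrase in candidates}

print(counts)                          # {'different from': 3, 'different than': 1}
print("precedent says:", max(counts, key=counts.get))
```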

4.2 The Style of Many Hands

The making of objects appears to be, intuitively, a quintessentially participatory endeavor, because large or complex objects in particular must often be made by many people, and their design must draw on the skills of many specialists. Yet, contrary to this matter of fact, starting with early modern humanism, Western culture has built a cultural system where works of the intellect, regardless of their material complexity, are expected to be ideated by an individual author and to be the expression of just one mind.9 This applies to media objects (texts, images, music, etc., which today exist primarily as digital files) as well as to physical objects (chairs, buildings, clothing, cookies, or soft drinks), as since the rise of the modern authorial paradigm in the Renaissance it has been expected that physical objects should be designed prior to being made, all their creative value being thus ascribed to their design, recipe, or formula, which is a pure piece of information—a media object like any other.10 But the ways to solicit, collect, and marshal the opinions of many in the making of intangible informational objects, such as drawings or scripts, are not made any easier by the immateriality of the process. The collective creation of a piece of intellectual work cannot be reduced to the expression of a vote or a number to be counted or averaged, respectively—even though many of today’s “social media” are in fact reduced to doing just that. This is where many thought that today’s digital technologies could be a game changer.

Unlike documents in print, digital notations can change anytime, and every reader of a digital file can, technically, write on it or rewrite it at will: in the digital domain every consumer can be a producer.11 Moreover, unlike radio or television, the Internet is a symmetrical information technology—whoever can download a file from the Internet can, theoretically, upload a similar one. This technical state of permanent interactive variability offers unlimited possibilities for aggregating the judgment of many, as theoretically anyone can edit and add to any digital object at will. But if interventions are to be open to all, and open-ended in time, as per Galton’s model, any change may randomly introduce faults or glitches (the equivalent of a statistical deviation), which will in turn be edited out only by some subsequent intervention. The development of any such participatory object will hence inevitably be erratic and discontinuous. Fluctuations will diminish as interventions accrue, correcting one another, and the object converges toward its state of completion (the analogue of a statistical mean), which will be achieved only at infinity, when all participants have interacted with all others in all orders, and the knowledge of all has ideally merged into a single design.
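This erratic development can be mimicked in a toy simulation. In the sketch below, every edit may introduce a fault and may also repair one; the probabilities are pure assumptions, chosen so that repairs slightly outpace glitches. The fault count drifts erratically around a low equilibrium but seldom settles at zero: the object works, but never entirely, and never for good.

```python
import random

P_BREAK, P_FIX = 0.3, 0.5   # per-edit odds of adding / removing a fault (assumed)
faults = 5                  # the object starts out partly nonfunctioning
history = []

for edit in range(2000):
    if random.random() < P_BREAK:
        faults += 1                      # a glitch slips in
    if faults and random.random() < P_FIX:
        faults -= 1                      # a later intervention edits one out
    history.append(faults)

for start in range(0, 2000, 400):
    window = history[start:start + 400]
    print(f"edits {start + 1:>4}-{start + 400}: "
          f"average faults {sum(window) / len(window):.2f}")
```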

As infinity is seldom a viable proposition for human endeavors, most models of aggregatory versioning must introduce a cut-off line somewhere and maintain some form of moderation or supervision all along. Yet, no matter the varying degrees of authority that may be exerted to curb its randomness, the logic of convergence to the mean of the statistical model still defines most practical strategies deriving from it. In order to self-correct, the process must remain open to as many agents as possible and for as long as possible. This is easy for some media objects, such as a Wikipedia entry or open-source software, which may remain variable forever: just as in Adam Smith’s model of the marketplace, in the case of Wikipedia the “invisible hand” of an infinite number of minute, frictionless interactions will convert the ambition, limits, and occasional folly of many individual writers into a globally reliable, collective, and anonymous source. But that open-ended model can hardly apply to design notations, for example, as their development must necessarily stop before building can start. It also follows that at each point in time any such collaborative object will be to some extent deviant from its theoretical state of completion, hence, to some extent, faulty. Any digital object in a state of permanent development is, by definition, never finished and never stable, and will forever function only in part—hence it is destined to be, in some unpredictable way, always in part nonfunctioning.

Indeed, daily experience confirms that most digital objects seem to be in a state of permanent drift. Beta versions, or trial versions, are the rule of the digital world, not the exception—even commercial software, which after testing is marketed and sold, is constantly updated, and paying customers are often referred to users’ forums (that is, to the wisdom of crowds) to address malfunctions. This is not accidental: most things digital are permanently evolving into new versions that may not be in any way more stable or permanent than the earlier ones, as the open-ended logic of “aggregatory” design must by definition apply at every step and at all times. It may appear counterintuitive that the iron law of digital notations (at its basis a binary system that consists only of zeroes and ones, and nothing in between) should have spawned a new generation of technical systems deliberately meant to work by hit-or-miss, and a technical logic based on an intrinsic and quintessential state of ricketiness. Yet, against the dangers of unpredictable mutations, contemporary digital culture has already largely, albeit tacitly, integrated the good old precautionary principle of redundancy: most digital systems offer an often confusing variety of ways to achieve the same result—evidently hoping that glitches may not affect all of them at the same time.12 As most things today are digitally made, designed, or controlled, this evolutionary, “aggregatory” logic of permanent interactive versioning may well be one of the most pervasive technical paradigms of our age, traits of which can already be detected in every aspect of our technical environment—and, increasingly, in our social practices as well.

There was a time, not long ago, when every household in Europe and North America had a fixed telephone line. Barring major calamities, that telephone line was expected to work—and even in a major calamity, it was the last technology to fail. For some years now, synchronous voice communication has been delivered by a panoply of different means: analog fixed lines, where still extant; fixed lines that look like telephones but are in fact VoIP in disguise; GSM cell phones; 3G or above “smart” phones; DSL- or cable-connected computers; wirelessly connected computers (via Wi-Fi, local broadband, WiMax, etc.)—each using a different network and protocol to communicate. We need so many options because some of them, when called upon, will most certainly not work. Not surprisingly, in the digital domain, versioning and redundancy are far from being simple technical strategies. Given the technical conditions described above, they are fast becoming a mental and cultural attitude, almost a way of thinking.
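A minimal sketch of this redundancy principle, with invented channel names and failure rates: several unreliable channels deliver the same message and are tried in turn, in the hope that a glitch will not affect all of them at once.

```python
import random

def make_channel(name, failure_rate):
    """One unreliable way of delivering the same message."""
    def channel(message):
        if random.random() < failure_rate:
            raise ConnectionError(f"{name} is down")
        return f"{name} delivered: {message}"
    return channel

channels = [make_channel("landline", 0.6),   # failure rates are made up
            make_channel("VoIP", 0.4),
            make_channel("cell", 0.3)]

def deliver(message):
    for channel in channels:                 # try each option in turn
        try:
            return channel(message)
        except ConnectionError:
            continue                         # glitch: fall through to the next
    # with these odds, all three fail together about 7% of the time
    raise RuntimeError("all channels failed at once")

print(deliver("hello"))
```

No single channel is trusted; the reliability of the whole comes from the unreliability of the many, which is the precautionary principle of redundancy in miniature.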

The logic of the “aggregatory” mode of composition posits that each agent be allowed to edit at will, but anecdotal evidence, corroborated by a significant body of texts and software created by interactive accrual, suggests that each new edit tends to add new data more readily than it erases the old, except to the extent that the erasure is functionally indispensable. As a result, obsolete data are often carried over and neglected, rather than deleted; in software writing, whole chunks of script are simply shunted aside but left in place, so that they may eventually be retrieved. Some authors seem to apply the same principle, more or less deliberately, to texts—and not only for electronic publication. In the humanistic and modern authorial tradition most objects of design (whether physical or media objects) bear the imprint of an intelligence that has organized them with rigor and economy, and the precise fitting of their parts is often the most visible sign of an invisible logic at work, followed by the author who conceived the object and nurtured its development. According to a famous essay by Alexandre Koyré, first published in 1948, precision is the hallmark of modernity in all aspects of life and science.13 But collaborative design environments seem to follow a different logic. Particularly in the digital model of open-ended aggregation, the effectiveness of the result is achieved not by dint of authorial precision, but through approximation, redundancy, and endless participatory revisions.
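An illustrative fragment of this practice of shunting, with made-up functions and values: the superseded chunk is commented out rather than deleted, so that it may eventually be retrieved.

```python
# --- superseded chunk, shunted aside rather than deleted, so it can be
# --- retrieved later if the replacement misbehaves:
# def compute_discount(price):
#     if price > 100:
#         return price * 0.9
#     return price

def compute_discount(price):
    """Patched replacement; the threshold and rate are made-up values."""
    rate = 0.9 if price > 100 else 1.0
    return price * rate

print(compute_discount(120.0))   # 108.0
```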

Objects that are made by many hands show the signs of many styles. Many Wikipedia entries, for example, are extraordinarily informative, and most open-source software works well, or no worse than any equivalent commercial software. But regardless of their use value, the textual writing of most Wikipedia entries, as well as the mathematical writing of most open-source software, is redundant, circuitous, and fragmentary. In contemporary software parlance, a quick fix that will debug or update a piece of script is commonly called a “patch,” and patchiness is fast becoming a common attribute of all that is composed digitally: patches of script are added when and where needed, and so long as the resulting text does what it was meant to do, no one cares to rub off the edges or to smooth out its texture any further. Sometimes, as in the case of software scripting, the patchiness of the script will be seen only by specialists; sometimes the end product itself may appear patched up, as most Wikipedia entries do. The typical Wikipedia entry is written by many people, each intervening individually, and by no one in particular.14

4.3 Building: Digital Agencies and Their Styles

This new, “aggregatory” way of digital making may equally affect buildings, insofar as digital technologies are implied in their design, realization, or both.15 Indeed, digitally designed architecture is even more prone to participatory modes of agency, as from its very beginning the theory of digital parametricism has posited a distinction in principle between the design of some general features of an object and that of some of its ancillary, variable aspects: digital mass customization (as defined by Gilles Deleuze’s and Bernard Cache’s theory of the objectile) implies a model of layered authorship, or “split agency,” where the primary author designs a generic (parametric) object, and one or more secondary authors, or interactors, adjust and adapt some variable aspects of the original notation at will. This authorial model has accompanied digital design theory from its beginnings in the early 1990s and has prompted several forms of hybrid agency, some more open to collaboration, some less so.16
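The split-agency model lends itself to a simple illustration in code. The sketch below is a loose paraphrase, not Deleuze’s or Cache’s notation, and all names and ranges are invented: the primary author scripts a generic parametric profile and fixes its limits; a secondary author, the interactor, picks parameter values within those limits to instantiate one variant among the notation’s infinitely many.

```python
import math

class Objectile:
    """A generic object: a family of profiles, not any one profile."""

    def __init__(self, amplitude_range=(0.5, 3.0), waves_range=(1, 8)):
        self.amplitude_range = amplitude_range   # limits set by the primary author
        self.waves_range = waves_range

    def instantiate(self, amplitude, waves, points=9):
        """One variant, fixed by an interactor within the designer's limits."""
        lo, hi = self.amplitude_range
        wlo, whi = self.waves_range
        if not (lo <= amplitude <= hi and wlo <= waves <= whi):
            raise ValueError("parameters outside the designer's limits")
        return [round(amplitude * math.sin(waves * math.pi * t / (points - 1)), 3)
                for t in range(points)]

generic = Objectile()                  # the primary author's contribution
variant = generic.instantiate(2.0, 3)  # a secondary author's choice
print(variant)
```

The generic object is never built as such; only its instantiations are, which is precisely the authorial conundrum of the parametric model: someone other than the primary author must eventually fix the parameters.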

Participatory agencies are particularly prominent in a family of software known as BIM, or Building Information Modeling, which for the most part was developed independently of the more tectonically oriented CAD/CAM of the 1990s. The spirit of BIM posits that all technical agents participating in design and construction should collaborate using a shareable information model throughout all stages of a project, and that design decisions should be agreed upon by all parties (clients, designers, and contractors).17 In such instances, architectural authorship might take the form of some consensual “leadership,” curiously resembling the organization of labor that prevailed in late-medieval building sites before the rise of Alberti’s modern authorial paradigm.18 Architects frequently blame BIM for its bureaucratic bias, and the technology was indeed developed primarily for managerial, not design, purposes. The participatory logic of BIM also differs from the digital “aggregatory” model in some essential aspects. Participation in BIM-based design is by invitation only, and invited participants are limited to technical agents—even though one could imagine a variety of other parties interested in the development of a building, including end users, communities, and even citizens. Moreover, contrary to the principle of aleatoric accrual from independent contributors in the digital, open-ended model, BIM decision making is based on a negotiated consensus among few parties, a practice that may remind some of the more traditional modes of “design by committee,” which few architects ever cherished—even the less Ayn Randian of them.

According to a proverb still popular among the design professions, “A camel is a horse designed by a committee.” Apparently implying that camels do not look good and that committees are not good designers, the proverb is also curiously attributed to Sir Alec Issigonis, the idiosyncratic engineer who conceived one of the most remarkable camels in the history of the automobile, the 1959 Mini. In fact, on closer scrutiny, it seems that committees are more likely to endorse consensual, generic solutions than a specific, and possibly brilliant, but unconventional idea, such as a camel or the Mini (the former strange-looking if compared to a horse, the latter if compared to British cars of 1959, yet both more suitable than their standard mainstream counterparts for traveling in the desert or facing fuel shortages in London after the Suez Canal crisis, respectively). BIM design processes, as currently envisioned, encourage technical agents to come to some form of middle-ground compromise at all stages of design and invite expert feedback and social collaboration, but for the same reason they may also favor team leadership over competence, or safe and bland solutions to the detriment of riskier but innovative ones—including unusual, bold, or assertive formal solutions, shapes, and styles. This regression toward a consensual mean may be the most significant contribution of BIM technologies to architectural visuality—a leveling effect that may apply to the making of the most iconic, monumental buildings (Gehry Technologies has developed its own BIM platform, which it also sells to third parties) as well as to countless nondescript utilitarian buildings, where BIM technologies are already employed by the construction industry—without fanfare but with significant time and cost savings.

The digital model of open participation by accrual does not work that way, and in architecture it might lead to very different visual consequences. Following the time-tested example of open-source software, architectural authorship might be succeeded by some form of mentoring or supervision, where one agency or person initiates the design process, then monitors, prods and curbs, and occasionally censors the interventions of others. The social, technical, and theoretical implications of this way of design by curatorship are vast and mostly unheeded.19 Its effects on designed objects, and on our designed environment at large, could be equally momentous. Regardless of the amount of supervision that must be brought to bear at some point, either by human or by machinic intervention, physical objects designed by participatory aggregation will most likely show some signs of the approximation, redundancy, patchiness, and disjointedness that are the hallmark of all that is designed by many hands. And there are reasons to assume that most designers today would not like that—much as they may not be happy to relinquish the legacy of the authorial privileges the design professions have so laboriously struggled to acquire over time. After all, it took centuries, starting with Renaissance humanism, to establish modern architecture as an authorial, allographic, notational art of design, which medieval and classical architecture never were; and which non-Western architecture—that is, architecture outside of the influence of Western humanism—is often not to this day. In the course of the twentieth century architects were remarkably successful in adapting the Albertian, humanistic idea of design to the industrial mode of production, and they managed to impose that authorial, notational way of making as the dominant model of the architectural profession throughout the world. Many of today’s designers may just not be inclined to give that up without a fight.

Of course, disjunctions and fragmentation are not unknown in contemporary design. Aggregation was a distinctive stylistic trait of architectural deconstructivism. Deconstructivist theories in architecture, particularly through the well-known Derrida-Deleuze connection, were not only the precedent but also the direct cause, and almost the midwife, of the digital revolution in architecture as we know it. Traces of the fractured disjunctions of deconstructivism can be found in much of the digitally inspired architecture that followed in the 1990s and beyond—in the works of Zaha Hadid, Wolf Prix, Frank Gehry, or Peter Eisenman himself. Yet this purely visual analogy, or even the historical continuity it portends, may be misleading. The paratactical disjointedness of Eisenman’s deconstructivist formalism of the late 1980s was quintessentially authorial. It followed from and was based on the excruciatingly detailed and finicky execution of the design of one mind, in the purest Albertian notational tradition.20 Deconstructivists may aim at designing and notating complexity, or even at representing or interpreting indeterminacy; but few designers trained in the Western tradition would aim at, or even consider, letting complexity just happen. In fact, most of the design strategies discussed in this book are just that: design strategies—strategies for staying in charge.

With different nuances, and with some exceptions—of which adaptive fabrication and hard-core morphogenetic design may be the most conspicuous—the second digital turn in architecture is largely about finding new ways to design, not about finding ways to not design. Not designing may be an ideological statement or a profession of faith, but it is unlikely to ever become a well-paid profession; while some digitally intelligent designers pride themselves on using open-source software, few or none author open-ended design—architectural notations that others could modify at will.21 As will be seen in chapter 5, the ideas of permanent variability, parametric mass customization, and digitally driven mass collaboration that designers test-drove during the age of the first digital turn are now spreading in all areas of contemporary society, economy, and politics. Designers do not need to follow that trend because they started it, and they already know what it is about. As it happens, right now, they are busy inventing something else.

Notes

A version of this essay was published as “Digital Style,” Log 23 (Fall 2011): 41–52.