2 The Second Digital Turn

The collection, transmission, and processing of data have been laborious and expensive operations since the beginning of civilization. Writing, print, and other media technologies have made information more easily available, more reliable, and cheaper over time. Yet, until a few years ago, the culture and economics of data were strangled by a permanent, timeless, and apparently inevitable scarcity of supply: we always needed more data than we had. Today, for the first time, we seem to have more data than we need. So much so, that often we do not know what to do with them, and we struggle to come to terms with our unprecedented, unexpected, and almost miraculous data opulence. As always, institutions, corporations, and societies, the cultural inertia of which seems to grow more than proportionally to the number of active members, have been slow to adapt. Individuals, on the contrary, never had much choice. Most Westerners of my generation (the last of the baby boomers) were brought up in the terminal days of a centuries-old small-data environment. They laboriously learned to cope with its constraints and to manage the endless tools, tricks, and trades developed over time to make the best of the scant data they had. Then, all of a sudden, this data-poor environment just crumbled and fell apart—a fall as unexpected as that of the Berlin Wall, and almost coeval with it. As of the early 1990s, digital technologies introduced a new culture and a new economics of data that have already changed most of our ways of making, and are now poised to change most of our ways of thinking.

My first memorable clash with oversized data came, significantly, in or around May 1968, and it was due to an accident unexplained to this day. I was then in primary school, and on a Wednesday afternoon our teacher sent us home with what appeared from the start to be a quirky homework assignment: a single division problem, but between two very big numbers. As we had no school on Thursdays (a tradition that preceded the modern trend for longer weekends), homework on Wednesday tended to be high-octane, so as to keep us busy for one full day. That one did. Of the two numbers we had to tackle, the dividend struck the eye first, as it appeared monstrously out of scale; the divisor was probably just three or four digits, but this is where our computational ordeal started. It soon turned out that the iterative manual procedure we knew to perform the operation, where the division was computed using Hindu-Arabic integers on what was in fact a virtual abacus drawn on paper, became very unwieldy in the case of divisors larger than a couple of digits.

This method, I learned much later, was more or less still the same one that Luca Pacioli first set forth in 1494.1 I do not doubt that early modern abacists would have known how to run it with numbers in any format, but we didn’t; besides, I have reason to suspect that our divisor might have been, perversely, a prime number (but as we did not know fractions in fourth grade, that would not have made any difference). So on Thursday morning, after some perplexity, I tried to circumvent the issue with some leaps of creative reckoning, and as none came to any good, during lunch break—which at the time still implied a full meal at home for everyone working in town—I threw in the towel, and asked my father. He looked at the numbers with even more bemused perplexity, mumbled something I was not meant to hear, and told me to call him back in his office later in the afternoon. Did he not have a miraculous instrument in his breast pocket, a slide rule that I had seen him use to calculate all sorts of stuff, including a forecast for a soccer match? It would be of no use in this instance, my father answered, because using the slide rule he could only get to an approximate result, and my teacher evidently expected a real number—all digits of it, and a leftover to boot.

So I waited and called his office in the afternoon. I dictated the numbers and I heard them punched into an Olivetti electromechanical Divisumma calculator, at the time a fixture on most office desks in Europe. I knew those machines well—I often played with them when the secretaries were not there. Under their hood, beautifully streamlined by Marcello Nizzoli, a panoply of levers, rods, gears, and cogwheels noisily performed additions, subtractions, multiplications, and, on the latest models, even divisions. But divisions remained the trickiest task: the machine had to work on them at length, and the bigger the numbers, the longer the time and labor needed for number crunching. After some minutes of loud clanging there was a magic hiatus when the machine appeared to stand still, and then a bell rang, and the result was printed in black and red ink on a paper ticker. That day, however, that liberating sound never came, as the dividend in my homework was, I was told, a few digits longer than the machine could take. I could get an approximate result, which once again was likely not what I should bring to school on Friday morning. Then my father stepped in again: are you not at school with young X, the son of the bank director, he asked—call him, for they likely have better machines over there. And so I did, and young X told me he had indeed called his father, and he was waiting to hear back. I thought then, as I do now, that he was lying.

Early on Friday morning, while waiting in front of the school gates, some of us tried to compare results. Among those who had some to show, all results were widely different. Young X gloated and giggled and did not participate in the discussion. He went into politics in the 1980s, and to jail in the 1990s, one of the first local victims of the now famous “Mani Pulite” (clean hands) judicial investigation. Back then, however, when the bell rang and the teacher came into the classroom, all we wanted to know was the right result. The teacher stood up from his desk and looked around, somewhat ruffled and dazed, holding a batch of handwritten notes. Then, before he could utter a word, he fainted in front of us all, and fell to the floor. Medics were called, and we were sent to the courtyard. When we were let back in a few hours later, an old retired teacher, hastily recalled, told us jokes and stories for the rest of the day. We finished the school year with a substitute teacher; our titular teacher never came back, and nobody knows what his lot was after that day. There were unconfirmed rumors in town that he had started a new life in Argentina. And to this day, I cannot figure out why on his last day as a schoolteacher he would give us an assignment that evidently outstripped our, and probably his own, arithmetical skills—but also far exceeded the practical computational limits of all tools we could have used for that task. Hindu-Arabic integers still worked well around 1968, precisely because nobody tried to use them to tackle such unlikely magnitudes, which seldom occurred in daily life, or, indeed, in most technical or financial trades.

Hindu-Arabic numerals were a major leap forward for Europe when they were adopted (first, by merchants) in the fifteenth century. Operations with Hindu-Arabic numerals—then called algorism, from the Latinized name of the Baghdad-based scientist who first imported them from India to the Arabic world early in the ninth century2—worked so much better than all other tools for quantification then in use in Europe that they would soon replace them all (Roman numerals were the first to go, but algebra and calculus would soon phase out entire swaths of Euclidean geometry, too). Number-based precision was a major factor in the scientific revolution, and early modern scientists were in turn so successful in their pursuit of precision that they soon outgrew the computational power of the Hindu-Arabic integers at their disposal. Microscopes and telescopes, in particular, opened the door to a world of very big and very small numbers, which, as I learned on that memorable day in May 1968, could present insurmountable barriers to number-based, manual reckoning. Thus early modern algorism soon went through two major upgrades, first with the invention of the decimal point (1585), then of logarithms (1614).3

A masterpiece of mathematical ingenuity, logarithms are one of the most effective data-compression technologies of all time. By translating big numbers into small numbers, and, crucially, by converting the multiplication (or division) of two big numbers into the addition (or subtraction) of two small numbers, they made many previously impervious arithmetical calculations much faster and less error-prone. Laplace, Napoleon’s favorite mathematician, famously said that logarithms, by “reducing to a few hours the labor of many months, doubled the life of the astronomer.”4 Laplace went on to claim that, alongside the practical advantages they offer, logarithms are even more praiseworthy for being a pure invention of the mathematical mind, unfettered by any material or manual craft; but in that panegyric Laplace was curiously blind to a crucial, indeed critical, technical detail: logarithms can serve some practical purpose only when paired with logarithmic tables, where the conversion of integers into logarithms, and the reverse, are laboriously precalculated. Logarithmic tables, in turn, would be useless unless printed. If each column of a logarithmic table—an apparently meaningless list of millions of minutely scripted decimal numbers—had to be copied by hand, the result would be too labor-intensive to be affordable and too error-prone to be reliable. Besides, errors would inevitably accrue over time, from one copy to the next, as in all chains of scribal transmission.5
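
The principle can be stated in one line, and a worked example with round numbers shows how a user of printed tables would have proceeded (the figures here are chosen purely for illustration):

```latex
\[
\log_{10}(ab) \;=\; \log_{10} a + \log_{10} b
\]
% To multiply 2,000 by 30,000 with a table of common logarithms:
\[
\log_{10} 2000 = 3.3010, \qquad \log_{10} 30000 = 4.4771, \qquad 3.3010 + 4.4771 = 7.7781,
\]
\[
10^{\,7.7781} \approx 6.000 \times 10^{7} = 60{,}000{,}000 .
\]
```

A few table lookups and one addition by hand thus replace a long multiplication; this is the saving of labor that Laplace praised.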

The efficacy of logarithms is thus dependent on what today we would call economies of scale in supply-chain management: many repetitive calculations that no end user would perform in person are ingeniously centralized, executed once and for all by a team of specialists, double-checked, then distributed in ready-made form to each customer through print. By copying numbers from the printed tables, each user acquires in fact a prefabricated, mass-produced modular item of calculation, which he imports and incorporates into his own computational work. Economies of scale are proportional to the number of identical items produced: in this instance, the profit for all rises if the same tables can be printed and handed out to many customers. Failing that, logarithms would not have any reason to exist—except as a mathematical oddity, much as the binary notation was before the invention of electric computers.6 Logarithms are the quintessential mathematics of the age of printing; a data-compression technology of the mechanical age.

9976_002_fig_001.jpg

2.1 Henry Briggs, Arithmetica Logarithmica, sive Logarithmorum chiliades triginta (London: William Jones, 1624), one page from the tables.

As it happens, my own encounter with logarithms in high school, circa 1973, coincided with my first acquaintance with a handheld electronic calculator, which was probably a 1971 Brother ProCal. Although it was, in theory, possible to hold it in hand, that machine was the size of a brick and almost as heavy; it did nothing beyond the four operations and, the display being limited to eight digits, it could not compete with the logarithmic tables I was then learning to use. As a result, I dutifully kept practicing with logarithms for all of the three or so months that our high school curriculum still devoted to that baroque (literally) cultural technology.7 I did not know then that only three or four years later, during my second year in college, I would buy (at a price then approximately equal to that of ten pizzas) a Texas Instruments TI-30, which included trigonometric, exponential, and logarithmic functions. That miraculous machine wiped out, overnight, most of the chores that had encumbered our math and engineering classes.

Just like divisions in the old electromechanical Olivettis, the exponential and logarithmic functions stretched the performance of that calculator to its limits: most operations were calculated instantaneously, but sometimes, when dealing with very large numbers, the machine hesitated; there was a momentary lag, and the red display blinked for a few instants before the result was shown—a delay we generously interpreted as a machinic analog, and almost an empathic reenactment, of the suffering and effort intrinsic to human computing. That quirk aside, the calculator worked magnificently, and the first moment of cultural awareness of a digital revolution in the making—the first ceci tuera cela epiphany: in this case, of the computer killing the book—came when we realized that we would no longer need any trigonometric or logarithmic tables in print, as all logarithms, for example, could be instantaneously calculated by the machine at all times.

But that was only the start, as we soon realized that the logarithm function itself must have been put on the machine’s keyboard as a practical joke, or a trap for fools. Why would anyone use logarithms to convert big numbers into small numbers before feeding those numbers back into the machine, when the same machine could work with any numbers of any size almost at the same speed? Was there any evidence that processing fewer big numbers, rather than many smaller ones, would wear out the machine, damage it, or drain its 9V battery? Did the machine have a limited computational life span, and die after performing a given number of operations? That was certainly the case for the electromechanical Olivettis: their cogs and rods did wear out, and had to be kept greased, maintained, and replaced after a certain number of cycles, like all mechanical tools. But there were no moving parts in the solid-state TI-30, nor did its processor make any noise when at work. Aside from the blinking screen, which indicated some degree of machine stress in certain conditions, one was inclined to conclude that, by and large, using the machine in full would not cost more than using it sparingly. If so, the use of logarithms and antilogarithms to compress and decompress numbers before and after use, so to speak, would only add time and errors. Logarithms, as Laplace had pointed out, had indeed extended the life of astronomers—and, we could add today, of many nineteenth- and twentieth-century engineers; but do logarithms extend the life of a solid-state processor? Given the power of even the cheapest of today’s computing tools, the cost of electronic number crunching is, in most cases, not significantly affected by relatively small variations in the size of the numbers we type in, and for most applications occurring at normal technical scales there is no need to laboriously process the numbers we use in order to shave a few digits off any of them. Not surprisingly, today’s engineers, and even astronomers, no longer use logarithms. Computers have de-invented logarithms; logarithms are a technology for data compression that digital computers simply do not need any more.

Today’s computers are not, in essence, different from those of 1978: they still use binary numbers and electric impulses to process them. But they are far more powerful, faster, and cheaper due to steady technical progress in computing hardware. Moore’s Law, first formulated in 1965 and later revised, observed that the number of transistors that can be packed onto an integrated circuit doubles roughly every two years, a growth rate that has been more or less maintained to this day. Most measures of speed, capacity, and even price in electronic computing, whether directly determined by the performance of a silicon chip or not, have moved on a similar scale. Yet this oddly regular pace of quantitative advancement seems to have recently crossed some crucial threshold, as proven by the popular currency and universal appeal of the idea of Big Data, which has been mooted in all kinds of contexts and peddled for all kinds of purposes since the late 2000s.

The term “Big Data” originally referred simply to our technical capacity to collect, store, and process increasing amounts of data at decreasing costs, and this meaning still stands, regardless of hype, media improprieties, or semantic confusion.8 There is nothing inherently revolutionary in this idea, which per se could equally refer to the advantages of writing over orality, of print over scribal transmission, or to each incremental technical improvement of digital technologies for at least the last fifty years. What is new, however, and specific to our time, is the widespread and growing realization that today’s economy of data may be unprecedented in history. This is what has brought Big Data front and center; and there may be more to this than media spin. For this cultural perception may well be the main divide between the first digital revolution, in the 1990s, and the second, in the 2010s. In the 1990s the old logic of small data, laboriously crafted through so many centuries of limited data supply, still applied to computing in full. Computers were already processing data faster and more effectively than any other technology in history, but data were still seen as expensive ware, to be used as parsimoniously as possible. Now, for the first time ever, data are seen as abundant and cheap, and, more important, as becoming more abundant and cheaper every day.

If this trend continues, one may reasonably project the evolution of this cost curve asymptotically into the future and conclude that, at some point, an almost infinite amount of data will be recorded, transmitted, and retrieved at almost no cost. Evidently, a state of zero-cost recording and retrieval will always be impossible; yet this is the tendency as we perceive it—this is where today’s technology seems to be heading. If this is so, then we must also come to the inevitable conclusion that many technologies of data compression still in use will at some point become unnecessary, as the cost of compressing and decompressing the data (at times losing some in the process) will be greater than the cost of keeping all raw data in their pristine state for a very long time, or even forever. If we say digital data-compression technologies, we immediately think of JPEG or MP3. But as the example of logarithms suggests, many cultural technologies that humankind has developed over time, from the alphabet to the most sophisticated information retrieval systems, should be seen as primarily, or accidentally, developed in order to cope with what was, until recently, an inevitable material condition of data penury affecting all people at all times and in all places—a chronic shortage of data, which today, all of a sudden, has just ceased to be.

2.1 Data-Compression Technologies We Don’t Need Anymore

The list of cultural technologies being made obsolete by today’s new data affluence is already a long one. To some extent, all of the most successful cultural technologies in history must have been at their start, either by design or by chance, data-compression technologies. As information processing was always expensive and technically challenging, only media that could offer good data-compression rates—that is, that could trim the size of messages without losing too much content—could find a market, take root, and thrive. The alphabet, an old and effective compression technology for sound recording, is a case in point.

The alphabetic notation converts the infinite variations of sounds produced by the human voice (and sometimes a few other sounds and noises too) into a limited number of signs, which can be easily recorded and transmitted across space and time.9 Anyone literate in a given alphabetical language can record the sound of someone else’s voice through writing (taking dictation, for example), as well as reproduce a sound that was notated by someone else without having heard the voice of the original speaker, and regardless of meaning—even when that sound has no meaning at all (supercalifragilisticexpialidocious, for example). This strategy worked well for centuries, and it still allows us to read transcripts of the living voices of famous people we never heard and who never wrote a line, such as Homer, Socrates, or Jesus Christ. In time, the alphabet was adapted to silent writing and silent reading, and its original function as a voice recorder was lost, but its advantages as a data-compression technology were not. The Latin alphabet, for example, records all sounds of the human voice, hence most conveyable thoughts of the human mind, using a combination of fewer than thirty signs. At some point these signs began to be mass-produced in metal, which allowed for lines of type, and entire books, to be reproduced from mechanical matrixes that in turn were made from the combination of a very limited number of interchangeable parts: the alphabet was the software that made the hardware of print from moveable type possible.10 Moveable type could be reproduced as standard metal type pieces precisely because the pieces were limited in number, thus allowing for economies of scale in the mass production of each; printing with moveable type would never have caught on if the number of characters had been in the thousands (as in Eastern ideogrammatic languages).11

In the early days of computing, when the iron law of small data still ruled, the leanness of the alphabetical notation still offered considerable advantages: the first American Standard Code for Information Interchange (ASCII), back in the early 1960s, allowed for 128 slots—far more than the Latin alphabet needed, even taking into account a separate slot for capital letters, and one for each punctuation or diacritic mark; the same code was later extended to eight bits, or 256 characters, and further adapted into regional variations, thus allowing for such tidbits as the euro sign (code 128) or a Cross of Lorraine (code 135), both in the eight-bit extension known as Windows-1252, which is often conflated with ISO Latin-1. Most keyboards today can directly generate many more signs above and beyond the double set they show, inherited from mechanical typewriters (minuscule and majuscule, or lowercase and uppercase, which originally referred to the cases where compositors stored the respective fonts).

That, however, is the past. For some years now the Gmail online interface has supported, alongside letters and numbers, a list of pictorial icons, known by the Japanese term emojis (pictograms) to distinguish them from the old emoticons (icons representing facial expressions, such as :-) or :-( , obtained through the combination of typographical signs from a standard keyboard). Unlike emoticons, emojis are ready-made pictorial images, but like letters of the alphabet in the ASCII code, each emoji is transmitted across platforms as a coded number, so emoji number 1F638 (Unicode) will show up as something similar to a grinning cat, regardless of platform or operating system—just as alphabet letter number 65 (ASCII) will show up as something recognizable as a capital A in all fonts, software, and graphic interfaces. In technical terms, each emoji is hence functionally similar to a letter of the alphabet, except emojis do not convey sounds but ideas (and emojical ideas are conveyed through icons, not symbols). There is, however, another purely quantitative difference: the letters of the Latin alphabet, born and bred in times of small data, numbered around thirty. The number of emojis in my Gmail interface, at the time of writing, is more than seven hundred and counting; my Android-based Samsung phone already offers almost 1,000 (for use in text messages as well as email); the Japanese smartphone application Line lists 46,000 (although most do not correspond to any standard code, hence they may not work outside of that company’s proprietary application).
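
A few lines of Python, offered purely as an illustration, make the equivalence concrete: the capital A and the grinning cat are both just numbers on the wire, and only the receiving platform decides how to draw them.

```python
# Letters and emojis alike travel as agreed-upon code numbers.
for char in ["A", "\U0001F638"]:        # capital A; "grinning cat face with smiling eyes"
    codepoint = ord(char)               # the number assigned to this sign
    print(char, hex(codepoint), char.encode("utf-8"))

# 'A' is code point 0x41 (decimal 65, as in ASCII) and travels as a single byte;
# the cat is 0x1F638 (Unicode), which UTF-8 spells out as four bytes rather than one.
```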

There was a time when we thought (and many feared) that electronic communications would replace print and that the digital would kill the book. The jury on that may still be out, but the frontier has already shifted: today’s digital tools are phasing out alphabetical script and reviving ideogrammatic writing in its stead. For many centuries, the small-data logic of the alphabet offered its (mostly Western) users overwhelming competitive advantages: with the alphabetical notation came print with moveable type, then the typewriter. We used to think that one cannot easily type from a keyboard with thousands of keys. Today, many people—particularly young people, and not only in the Far East—are cheerfully doing just that. The alphabet is a data-compression technology that today’s digital tools no longer need. With the decline of the alphabet, one of the arguments often cited by Whig historiography to account for the technical supremacy of the West in modern and contemporary history is gone too—never mind if the argument was ever sound, it simply does not apply in today’s postmechanical and data-opulent environment.12 When information technology can process almost any amount of data instantaneously and at the same cost, a notational system limited to a combination of thirty signs may seem an unreasonable constraint—a solution to a problem that no longer exists.

But the logic of Big Data will not stop there, and today’s revenge of the data-opulent ideogrammatic writing against its ascetic alphabetic antagonist may in turn be short-lived. Even keyboards with thousands of virtual keys, and search tools to navigate them, may soon be replaced by speech recognition technologies. In many simple cases (searches on keywords, multiple-choice questions) the keyboard has already been replaced by voice commands, and in some cases by gesture recognition: the conversion from voice or gesture to writing is still carried out by the machine, somewhere—at least for the time being—but unbeknownst to the end-user, and without human intervention. Indeed, the keyboard (whether real or virtual, simulated by screens or tactile tools) is no longer the only human–machine interface, and soon it may no longer be the principal one. And when all script is phased out, digital tools will have gone full circle: after demolishing—in an orderly chronological regression from the more recent to the earliest—all small-data cultural technologies invented over time, humankind will be restored to something akin to the auroral primacy of gesture and the word. Digitally enhanced orality and gesture will be different from the ancestral ones, however, because voice and motion can now be recorded, notated, transmitted, processed, and searched, at will and at once, making all cultural technologies developed over time to handle one or the other of these specific tasks equally unnecessary.

2.2 Don’t Sort: Search

On April 1, 2004, Google rolled out a free, advertising-supported e-mail service. We now know that was not one of the April Fools’ Day pranks for which Google was, and still is, famous: twelve years after its invitation-only launch as a beta (test) version, Gmail offers fifteen GB of free storage to almost one billion users around the world. When Gmail started, however, Google was still a one-product company, known primarily for the almost miraculous efficacy of its search algorithm; so it stands to reason that Gmail was released touting full-text searchability as its main asset. Gmail’s original logo, next to the beta version disclaimer, featured the now famous tagline: “Search don’t sort. Use Google search to find the exact message” (emphasis in the original). Gmail remained, nominally, in beta version until 2009, which is also when the “search don’t sort” motto appears to have been removed from the banner on its portal.13 We can only assume that by that time most Gmail users had gotten used to searching instead of sorting, even though they were never really coerced into doing so; Gmail has always offered some discreet sorting tools (called “labels” rather than “folders”), and more have been introduced recently. Why, then, would Gmail users search rather than sort, and what does the philosophy of searching instead of sorting betoken, imply, and portend?

In the early days of personal computing, email programs downloaded messages from a central server to the user’s machine, where messages were stored in resident folders. As often happens in the history of technical change, the new technology at first simulated the older one it was replacing, and most users simply carried over their sorting habits from their physical offices to their new virtual desktop: the interfaces of early email services reproduced a traditional filing cabinet, with folders or drawers, where users would sort and keep their documents following an order they knew, so they knew where to look for them when needed. Electronic folders could be labeled, just like paper ones, and inside each folder items were automatically sorted chronologically or alphabetically, by sender or by the title in the subject line. But early personal computers offered limited searching tools, if any, so finding a document after putting it in a folder was a time-consuming and hit-or-miss affair, much dependent on memory or other idiosyncratic factors. When the number of email messages started to surge, well beyond the traditional limits of scribal or even mechanical letter writing, manual sorting struggled to cope. Then came Gmail.

Gmail made the full text of all email messages, sent or received, searchable for words and numbers using a simplified Google syntax: to this day, search results are not ranked by relevance, but can be filtered by domain or chronology. Alphanumerical Gmail searches can only be performed online, hence users are encouraged to use Gmail primarily as a web-based service. At the beginning Google also prompted users never to delete messages, and to this day it is not known precisely what happens to messages after they are permanently deleted, or how long they survive on the company’s proprietary backup systems—an open-endedness that many find philosophically and legally disturbing.14 Anything not permanently deleted, however, is searchable, and the “search, don’t sort” logic suggests that an automated full-text search for words or numbers on a whole corpus of sources, left raw and unsorted, is a more powerful retrieval tool than the traditional manual process of first sorting items by topic and then looking for them in their respective folders.

I cannot find any scientific enquiry to validate that assumption, but plenty of anecdotal evidence does. “Search don’t sort” certainly works for me. Yielding to the new technical logic of the tool, I diligently phased out most folders over time (while still using a handful of Gmail “labels,” occasionally and for specific projects). Now, when I need to find a message, I do not look for it in a given place (or folder), but search for a combination of words, numbers, dates, and people, which I remember as more or less related to the matter at hand. While I wish at times that the search would offer a more advanced syntax, most of the time it works—and it certainly works better than my own sorting would, even if I had adopted a perfect taxonomy from the start, and unremittingly and consistently complied with it for the last ten years. Thus, over time, I have become well advanced in the new digital art of finding stuff without knowing where it is.

From the beginning of time, humankind has conceived and honed classifications for two main reasons: as a way to find or make some order in the world—an idea often cherished by philosophers, theologians, and thinkers and which some see as a universal human yearning—and, more simply, to assuage the basic human need to put things in certain places, so we know where they are when we need them. We used to think that sorting saves time. It did; but it doesn’t any more, because Google searches (in this instance, Gmail searches) now work faster and better. So taxonomies, at least in their more practical, utilitarian mode—as an information retrieval tool—are now useless. And of course computers do not have queries on the meaning of life, so they do not need taxonomies to make sense of the world, either—as we do, or did.

Taxonomies are as old as Western philosophy: one of the most influential dates back to Aristotle, who divided all names (or all things, the jury is still out) into ten categories, known in the Middle Ages as predicamenta.15 Aristotle’s classifications do not seem to have directly influenced the first encyclopedists (Varro, Pliny, or Isidorus), but with Medieval Scholasticism the division of names into classes merged with another Aristotelian tenet, the diairetic structure of universals and singulars, genus and species, which today we more often interpret in terms of set theory: classes are constituted by all items that have something in common, with smaller sets sharing more predicates, all the way to the individual, or set of one (which is where, in modern terms, maximum definition meets minimum extension).16 With classifications construed as cascades of divisions and subdivisions, encyclopedias (which originally, and etymologically, referred to a circle, or cycle, of knowledge) started to look less like circles and more like trees, with nested subdivisions represented as branches and sub-branches.

The first “tree of knowledge” is attributed to the Majorcan philosopher and Catalan writer Ramon Llull (Raymond Lull, or Lullus, late thirteenth century), also known as a precursor of Leibniz’s ars combinatoria.17 Branching taxonomies flourished in the sixteenth century, both as a philosophical instrument and as an indexing tool for librarians: the French logician and pedagogist Pierre de la Ramée (Petrus Ramus, which, incidentally, means “branch” in Latin) thought that every argument, subject, or idea could be visualized as an arborescent diagram of definitions and subdivisions, and he started to implement a universal program of pedagogical reform where every discipline was represented as a tree, endlessly divided into branches. Ramus also thought that the same treelike structure should apply to every discourse on every subject, in poetry and in prose, in writing as well as in speech. This may seem a bit extreme today,18 and Ramus’s “universal method” (Methodus Unica) may have proven less than popular among some students and staff back then: Ramus himself was stabbed and beheaded during the first night of the Saint Bartholomew’s massacre, his body thrown out of the window of his room at the Collège de Presles and dumped into the Seine. Almost at the same time, very similar treelike diagrams were used by the Swiss physician, botanist, and bibliographer Conrad Gessner, who published several versions of a general catalog of all printed books he knew (and some manuscripts to boot), some ordered alphabetically and some by subject.19 By the mid-sixteenth century, with book production surging due to print, librarians and book dealers might have been facing the first big data crisis in history, as—much like today—many of their traditional tools and practices were being overwhelmed by technological change. To help keep track of this unprecedented wave of authors and titles, “instantly taken up, multiplied and spread far and wide by printing as by a superhuman war machine,”20 Gessner devised a universal method of classification by treelike subdivisions pegged to a numeric ordering system—an index key which could also have served as a call number, although it does not seem it ever was. That could hardly have happened in the south of Europe anyway, as all of Gessner’s books, particularly his bibliographies, were immediately put in the Index of Prohibited Books by the Roman Inquisition.21

One generation before, another early visionary of the mechanization of knowledge found other ways to cope with the growing inadequacy of the information retrieval tools of his day. Giulio Camillo’s famous “Memory Theatre” was a filing cabinet where all extant writings by Cicero were sorted by subject and further subdivided by size; Camillo’s idea was that modern writers could compose all sorts of new texts—indeed, all possible texts—by cutting and pasting suitable snippets excerpted from this systematic archive of Cicero’s perfect language and universal wisdom. Camillo designed some ingenious data retrieval tools for that purpose, but as none of these really worked he resorted to occultism and divination, suggesting that his Ciceronian machine would only work in full for the initiated. Camillo was also known during his lifetime as an accomplished magician and a hypnotizer of lions.22

9976_002_fig_002.jpg

2.2

9976_002_fig_003.jpg

2.3 Arborescent diagrams (figures 2.2 and 2.3) showing the classification of the art of logic in Pierre de la Ramée’s (Ramus’s) Dialectica, from Ramus’s Professio Regia, published and edited posthumously by Johann Thomas Freige, where similar diagrams are also used to classify the history and timeline of Cicero’s main life events (here, 2.3, outlining Cicero’s praetorian election at the age of 40). Petrus Ramus (Pierre de la Ramée), Professio Regia, Hoc est Septem artes liberales, in Regia Cathedra, per ipsum Parisiis apodictico docendi genere propositae, et per Ioan. Thomam Freigium in tabulas perpetuas … relatae (Basle: Sebastian Henricpetri, 1576).

9976_002_fig_004.jpg

2.4 Conrad (Konrad) Gessner (Gesner), diagram showing part of the classification of the science of theology, from Partitiones theologicae: pandectarum universalium Conradi Gesneri liber ultimus (Zurich: Froschauer, 1549). © Zentralbibliothek Zürich (shelf mark: 5.13, 3).

Thus it will be seen that our dominant sorting tool, the alphabetical classification, was somewhat of a late bloomer: early dictionaries in print were often ordered thematically, not alphabetically (i.e., they were in fact thesauri), and Ramus’s and Gessner’s arborescent classifications survive in all the bibliographic tools still in use today, such as the Universal Decimal Classification (UDC), Library of Congress, or Dewey classification systems. Paul Otlet’s UDC was also the springboard for the project of the Répertoire Bibliographique Universel—an open-ended index of everything, which Otlet kept expanding until 1934, when it reached fifteen million index cards, and never served any practical purpose.23 A few years ago Google announced a partnership with the Mundaneum, the museum in Mons, Belgium, dedicated to the legacy of Otlet’s life project,24 and there may be some logic in the fact that Google, which has already changed the world with digital search engines, should celebrate the memory of the man who brought mechanical sorting technologies to their all-time, perfectly useless zenith.

In 1530, when Camillo needed a suitable taxonomy to sort his archive of Cicero’s words, phrases, and arguments, he looked high and low for a classification scheme that would be, in his words, “discrete and capacious as needed, and furthermore able to keep the mind awakened and to imprint itself into memory.”25 Taxonomies are made by and for the human mind, so they should be intelligible and memorable. From today’s big data perspective, however, it is easy to see that classifications also function by structuring and formalizing a random stock of information, providing speedier access to all data in an inventory by way of index keys. Thus, by the logic it provides, an arborescent structure is easier to manage, use, and memorize than the raw aggregate of data it refers to; but the same structure also functions as an orientation tool that allows end-users to calculate the coordinates of each item in stock. So, for example, in the Library of Congress classification, N is the general call number for all the fine arts, NA for architecture, NA 190–1555 for architectural history, and NA 510–589 for the history of architecture in the Renaissance (etc.); if we know how the code works, we can walk straight to the shelf where the subject we are looking for is kept, without having to read the titles of all the books on all shelves. And, even if we do not often think of it that way, the coordinates of each word in a dictionary are calculated by running each letter in that word against a very simple code—the alphabetical sequence. Thus the word “abacus,” for example, has coordinates 1,2,1,3,21,19; and those coordinates, applied sequentially, lead us straight to that word in the dictionary without having to read any other. Thanks to alphabetic sorting, instead of remembering the order of every word in a dictionary, we need only remember the order of the alphabet itself.
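
The arithmetic of those coordinates is trivial, which is exactly the point; a two-line sketch in Python (purely illustrative) computes them by ranking each letter against the alphabet:

```python
# Rank each letter of a word against the alphabetical sequence (a=1, b=2, ...).
def alphabetical_coordinates(word):
    return [ord(letter) - ord("a") + 1 for letter in word.lower()]

print(alphabetical_coordinates("abacus"))   # [1, 2, 1, 3, 21, 19]
```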

But computers do not work that way. To search for the word “abacus” in a corpus of textual data, computers will scan the whole corpus looking for a given sequence of forty-eight 0s and 1s, and stop whenever that sequence shows up—regardless of how that corpus may or may not have been sorted. Computers search, they don’t sort. More and more often, so do we.26 Gmail is training us to leave our documents unsorted because computers search faster than we sort, but this principle need not be limited to media objects; it can be extended to physical objects of all kinds, which can be tagged and tracked using Radio Frequency Identification (RFID). This may apply to random junk in a garage, to books in a library, or to the full inventory of Amazon.com. Indeed, all kinds of items in Amazon warehouses, including books, are not sorted by subject or category, but only based on the frequency of sale, following an order that would be meaningless to a human mind.27 Using the same technical logic, in our houses we could keep potatoes and socks and books and fish in the same drawers, or indeed anywhere. We would not need to remember where we put things, or where things are, because whenever we need them we could simply Google them in physical space—and see the result on a screen, on a wristwatch, or through augmented reality eyewear. Let’s go one step further: the same Google logic need not be limited to space—it may also apply to time.
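
By way of contrast, a brute-force scan of the kind just described needs no ordering at all; the sketch below (in Python, with an invented toy corpus) simply slides the six bytes of "abacus" along an unsorted sequence until they match:

```python
# Searching, not sorting: a full-text scan over a raw, unsorted corpus.
corpus = b"socks potatoes abacus fish books"   # deliberately unsorted
pattern = "abacus".encode("ascii")             # six bytes, i.e. forty-eight bits

hits = [i for i in range(len(corpus) - len(pattern) + 1)
        if corpus[i:i + len(pattern)] == pattern]
print(hits)   # [15]: found without any prior ordering of the corpus
```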

9976_002_fig_005.jpg

2.5 Andreas Gursky, Amazon (2016). © Andreas Gursky. Courtesy: Sprüth Magers Berlin London / DACS 2016.

2.3 The End of Modern Science

Many sciences, philosophies, and systems of thought look at the past to try and predict the future, because humans themselves learn from experience—and according to some, from experience only. Be that as it may, modern experimental science is based on the assumption that events that tend to repeat themselves with a certain regularity may be predicted. For example, if every time we drop a stone it falls to the ground, we expect that the next time we drop a similar stone in similar conditions it will fall in a similar way, and from the regularities we observe in the repetition of this process we extrapolate rules that will help us predict other comparable incidents before they happen. This is the way modern science (inductive, inferential, experimental science) has worked so far, with great success; the rules this science arrives at are typically expressed as mathematical formulas.

Yet, once again, our data-rich present prompts us to look at our data-starved past from a different vantage point. In terms of pure quantitative data analysis, the history of the modern scientific method (and, indeed, of modern science as a whole) appears as little more than a succession of computational strategies strenuously developed over time to maximize the amount of information we could collect, record, and transmit for scientific (i.e., predictive) purposes. Evidently, the best way to learn from experience would be to keep track of each experiment in its totality. As nobody knows how to define the totality of an event, limits must always be set so the collection of data can stop at some point. However, given the drastic limitations in the amount of data we could afford to keep and process in the past, scientists had to be picky, and turning necessity into virtue, they learned to focus on only a few privileged features for each experiment, discarding all others. Thus over time we learned to extrapolate, generalize, infer, or induce formal patterns, and we began to record and transmit condensed and simplified interpretations of the facts we observed, instead of longer, fuller descriptions of the facts themselves. Theories tend to be shorter than the description of the events they refer to,28 and indeed syllogisms, then equations, then mathematical functions, were, and still are, very effective technologies for data compression. They compress a long inventory of events that happened in the past into a very short script, generally in the format of a causal relationship, from which we can deduce many more events of the same kind, if similarly structured—past or future indifferently.

Galileo, one of the founding fathers of modern science, also laid the foundations of modern mechanics, statics, and modern structural engineering. In his last book, Two New Sciences, which had to be smuggled out of Tuscany and printed in the Dutch Republic to escape censorship, Galileo famously described a number of experiments he had made to study how beams break under load.29 But today we need not repeat any of his experiments, or any other experiment, to determine how standard beams will break under most standard loads because, generalizing from Galileo’s experiments and from many more that followed, we have obtained a handful of laws that all engineers study in school: a few, clean lines of mathematical script, easy to grasp and commit to memory. These formulas, or rules, derive from regularities detected in many actual beams that broke in the past and predict how most beams will break in similar conditions in the future.
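
One such line, for example (the standard engineer's bending formula, in modern notation rather than Galileo's own), condenses an entire family of broken beams into a single causal statement:

```latex
\[
\sigma_{\max} \;=\; \frac{M\,c}{I}
\]
% M: bending moment produced by the load; c: distance from the neutral axis
% to the outermost fiber; I: moment of inertia of the beam's cross-section.
% The beam is predicted to fail when \sigma_{\max} exceeds the strength of the material.
```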

9976_002_fig_006.jpg

2.6 Galileo Galilei, experiments prefiguring Leonhard Euler’s (1707–1783) moment of inertia (2.6), and on the deflection of a beam supported at both ends and cantilevered (2.7), from Discorsi e Dimostrazioni Matematiche intorno à due nuove scienze, attinenti alla mecanica e i movimenti locali (Leiden: Elzevir, 1638).

9976_002_fig_007.jpg

2.7

However, let’s imagine, again, that we live in an ideal big data world, where we can collect an almost infinite amount of data, keep it forever, and search it at will, at almost no cost. Given such unlikely conditions, we could assume that every one of those experiments, or more generally, the experiential breaking of every beam that ever broke, could be notated, measured, and recorded. In that case, for most future events we are trying to predict, we could expect to find and retrieve a precedent, and the account of that past event would allow us to predict a forthcoming one—without any mathematical formula, function, or calculation. The spirit of the new science of searching, if there is one, is probably quite a simple one, and it reads like this: whatever happened before, if it has been recorded, and if it can be retrieved, will simply happen again, whenever the same conditions reoccur. This is not different from what Galileo and Newton thought. But Galileo and Newton did not have Big Data; in fact, they often had to work from very few data indeed. Today, to the contrary, instead of calculating predictions based on mathematical laws and formulas, the way they did, we can simply search for a precedent for the case we are trying to predict, retrieve it from the almost infinite, universal archive of all relevant precedents that ever took place, and replicate it.30 When this happens, search can replace the method of modern science in its entirety. This may sound like a joke—or a colossal exaggeration. Quite to the contrary, in pure epistemological terms, the new science of search is little more than a marginal quantitative upgrade in the conceptual tools at our disposal.

Albeit ostensibly predictive, modern science does not really predict the future—it never did. The scientific formulas we use to deduce and calculate events before they happen are inferred and extrapolated from a number of experiments already made, and the only thing those formulas can really do is to refer back to, and in fact retrieve, the experiments from which they derive. We may well imagine that some of those events, or some patterns derived from them, will repeat themselves in the future, because they have repeated themselves so many times in the past; but that we do by a leap of faith: the only certainty we have is in the evidence that has already been served. Thus science is always a retrieval, never a prediction. The old science of small data retrieved events that had been recorded in an abbreviated, condensed, compressed, or truncated form due to the limits in the old technology of data recording and transmission. Today’s new science of Big Data has removed many of those limits, so instead of retrieving pale shadows we can now retrieve vivid 3-D avatars of the original facts (or something as close to that as the amount of available data allows). While the old science of small data used scientific formulas to deduce (but in fact to retrieve) a skinny, pitiful handful of numbers, the new science of Big Data can search and retrieve full-bodied, hi-res, high-definition precedents almost in their entirety.31 Either way, we bring evidence back from the past in the hope this may help us predict future events; but while in the old days we used the human science of sorting for that purpose, today we can use the computational art of searching instead. Furthermore, in today’s computational environment, the precedent we dig up need not be a real one; it may equally have been simulated on purpose. This is what avant-garde designers have been doing for the last few years, thanks to the power of today’s digital computation—only in some slightly different ways, and sometimes without saying it in so many words.
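
Reduced to its bare computational bones, prediction by precedent is nothing more than a lookup; the sketch below (in Python, with an invented archive and a crude measure of similarity) retrieves the closest recorded case instead of applying any formula:

```python
# Prediction as retrieval: find the most similar recorded precedent
# and assume the same outcome will follow under similar conditions.
def predict_by_precedent(archive, new_case):
    """archive: list of (conditions, outcome) pairs; new_case: a tuple of conditions."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    conditions, outcome = min(archive, key=lambda record: distance(record[0], new_case))
    return outcome

# Toy archive of beams on file: (span in m, depth in cm) -> load at failure (kN).
archive = [((2.0, 10.0), 4.1), ((3.0, 12.0), 3.2), ((2.1, 10.0), 4.0)]
print(predict_by_precedent(archive, (2.08, 10.0)))   # 4.0: the closest precedent on file
```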

2.4 The New Science of Form-Searching

9976_002_fig_008.jpg

2.8 Alisa Andrasek, Wonderlab, AD Research Cluster 1, B-Pro M.Arch Architectural Design, The Bartlett UCL, Liquid (2016). Robotic extrusion of filaments on a vectorial template derived from computational fluid dynamics. Tutors: Alisa Andrasek, Daghan Cam, Andy Lomas. Robotics: Feng Zhou. Students: Zhuoxing Gu, Tianyuan Xie, Bingyang Su, Anqi Zheng. © Alisa Andrasek, AD Research Cluster 1, The Bartlett UCL.

9976_002_fig_009.jpg

2.9 Gilles Retsin, Manuel Jimenez-Garcia, AD Research Cluster 4, B-Pro M.Arch Architectural Design, The Bartlett UCL, CurVoxels, 3D Printed Chair (2015). A robotically extruded chair combining a curved toolpath with a voxel-based data structure. Students: Hyunchul Kwon, Amreen Kaleel, Xiaolin Yi.

9976_002_fig_010.jpg

2.10 Gilles Retsin and SoftKill Design, Protohouse, Collection Centre Pompidou (2012).

Structural optimization (by iterative removal and addition of material) aimed at obtaining minimal volume and uniform stress throughout a complex architectural envelope. SoftKill design team: Nicholette Chan, Gilles Retsin, Sophia Tang, Aaron Silver. Developed at the Architectural Association Design Research Lab in London.

Recent technical developments in the extrusion and robotic winding and weaving of very thin filaments have prompted exciting and promising experiments—at the Institute for Computational Design (ICD) of the University of Stuttgart, at the Bartlett (University College London), and elsewhere.32 In 2014, Achim Menges and Jan Knippers published a groundbreaking technical article describing their use of fiber-reinforced polymers in the thin shell of the experimental ICD/ITKE Research Pavilion they built in 2012.33 Structural calculations for the pavilion had to take into account the complex geometry of the shell, as well as the density and direction of each bundle and layer of carbon and glass fibers wound in it. The authors began with a geometrical and material layout inspired by biological models; then they simulated the structural behavior of this first model using standard computational finite element analysis (FEA), a mathematical method for the calculation of deformation and stresses within a continuous structure.34 Based on the results of this first simulation, some aspects of the design were tentatively tweaked, altering both the geometry of the shell and the internal layout of the fibers. The FEA simulation was then rerun on this second model, and so on, and repeated (iterated) many times over until the authors were pleased with the results.
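
The loop just described is easy to render schematically; the sketch below (in Python, with stand-in function names that are mine, not the team's) captures its logic of simulate, judge, tweak, and run again:

```python
# A schematic rendering of heuristic optimization by iterated simulation.
# simulate, is_good_enough, and tweak are placeholders for the designers'
# actual modeling, FEA, and judgment steps, which are not specified here.
def design_by_iteration(initial_model, simulate, tweak, is_good_enough, max_rounds=1000):
    model = initial_model
    for _ in range(max_rounds):
        results = simulate(model)        # e.g. one finite element analysis run
        if is_good_enough(results):      # the designers' own stopping criterion
            return model
        model = tweak(model, results)    # adjust geometry and fiber layout
    return model                         # best attempt if no round satisfied us
```

Each pass through the loop corresponds to one virtual model made and, more often than not, broken on the screen.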

9976_002_fig_011.jpg

2.11 ICD Institute for Computational Design (Prof. Achim Menges). ITKE Institute of Building Structures and Structural Design (Prof. Jan Knippers), ICD/ITKE Research Pavilion 2012, Robotic filament winding of carbon/glass fiber structure, University of Stuttgart, 2012. © ICD/ITKE University of Stuttgart.

9976_002_fig_012.jpg

2.12 ICD Institute for Computational Design (Prof. Achim Menges). ITKE Institute of Building Structures and Structural Design (Prof. Jan Knippers). ICD/ITKE Research Pavilion 2012, Exterior view of pavilion, University of Stuttgart, 2012. © ICD/ITKE University of Stuttgart.

In this process of heuristic (not mathematical) optimization,35 every simulated model that was tried and discarded corresponded to a physical model that a traditional artisan would have made, tested, and likely broken in real life. Using digital simulations of structural performance, however, today we can make and break on the screen in a few hours more full-size trials than a traditional craftsman would have made and broken in a lifetime. Artisans of pre-industrial times (as well as the ideal artisan of all times recently romanticized by Richard Sennett)36 were not engineers; hence they did not use mathematics to predict the behavior of the structures they made. When they had talent they learned intuitively, by trial and error, by making and breaking as many samples as possible. So do we today, using iterative digital simulations. We may or may not intuit some pattern, regularity, or logic inherent or embedded in the structure we are tweaking—but that is irrelevant. By making and breaking (in simulation) a huge number of variations, at some point we shall find one that does not break, and that will be the good one.

Inspired by Frei Otto’s method of physical form-finding—which Menges was the first to implement digitally and to translate into computational terms—this heuristic design process is functionally equivalent to the big data, search-based alternative to modern science mentioned earlier. Whenever digital tools allow us to collect, record, and process huge troves of data, information retrieval (the search for a precedent) is more effective than the traditional, deductive application of scientific formulas or any other law of causation. The 2012 ICD/ITKE Research Pavilion being an experiment without precedent, no corpus of previous, comparable instances was available for search and retrieval. In the absence of any such historical archive, however, Menges, Knippers, and their team could avail themselves of the immense power of digital simulation to create on the spot, virtually, just the archive they would have needed. They may not have seen it this way, but by simulation and iteration they generated a vast and partly random corpus of many very similar structures that all failed under certain conditions; and they chose and ultimately replicated one that did not.37 This is a far cry from how a modern engineer would have designed that structure—which is one reason why no modern engineer could have designed it.

A modern engineer would have started with a set of formulas establishing causal relationships between loads, forms, and stresses in the structure. Typically used to calculate the resistance of a structure after it has been designed, these formulas can also drive and inspire our first, intuitive design of it. This is because causal laws make sense, somehow: by the causality they express, they interpret and provide some understanding of the physical phenomena they describe. Indeed, in the classical scheme of things, causality is seen as a primary law of nature; so the laws of mechanics, for example, are held to spell out in mathematical terms the way beams, cantilevers, pillars, arches, or vaults function in reality, and the formulas of structural engineering have a “meaning” which is held to be true to nature. Indeed, this meaningfulness, and the structural theories from which it derives, are visible in all masterpieces of modern structural engineering, from Eiffel’s tower to Nervi’s vaults. If we look at these modern structures we understand the basic structural principles their designers had in mind when they first sketched them.

That does not apply to our current way of designing by form-finding, or, as we should perhaps say, to better demarcate the nature of today’s process from that of its physical precursor, computational form-searching. The power of Big Data applied to information retrieval, simulation, and optimization makes the formulaic data compression at the core of modern structural engineering as obsolete as the Yellow Pages—or as the logarithmic tables mentioned above. Gilles Deleuze famously disparaged the abstract determinism of modern science, to which he opposed the heuristic lore of artisan “nomad sciences.”38 Once again, Deleuze’s view of our pre-mechanical past doubles as an eerily cogent anticipation of our digital future. Through computational form-searching we can already design new structures of unimaginable complexity. But precisely because it is unimaginable, this posthuman complexity defies interpretation and transcends the small-data logic of causality and determinism we have invented over time to simplify nature and convert it into reassuring, transparent, human-friendly causal models. Why does one such unimaginably complex structure work, while the thousands of very similar ones we just ran through FEA simulations do not? Who knows. But the point is that it works. And if that is the case, then we must come to the almost inevitable conclusion that the new science of search may soon replace the method of experimental science in its entirety, simply because simulation and search can solve problems that the formalistic approach of modern science could never tackle. Computers can search faster than humans can sort.

Digitally intelligent designers may be more enthusiastic or more outspoken than other early adopters, but the new science of search has already pervaded, in spirit if not in letter, many of today’s data-driven cultural technologies, and traces of the same quantitative, heuristic use of data are evident, in some muted, embryonic way, in other branches of the natural sciences, such as weather forecasting.39 And sure enough, some historians of science have already started to investigate the matter—with much perplexity and reservation, as one would expect, since the postmodern science of big data and computation marks a major shift in the history of the scientific method.40 As mentioned above, mathematical abstractions such as the laws of mechanics or of gravitation, for example, or any other grand theory of causation, are not only practical tools of prediction, but also, and perhaps first and foremost, ways for the human mind to make sense of the world. But then, if abstraction and formalization (that is, most of classical and modern science, in the Aristotelian and Galilean tradition) are also seen as contingent and time-specific data-compression technologies, one could argue that in the absence of the technical need to compress data in that particular way, the human mind can find many other ways to relate to, or interpret, nature. Epics, myth, religion, and magic offer vivid historical examples of alternative, nonscientific methods, and no one can prove that the human mind is, or ever was, hard-wired for modern experimental science. Many postmodern philosophers, for example, would strongly object to that notion. And as so many alternatives to modern science existed in the past, one could argue that plenty of new ones may be equally possible in the future.

The mere technical logic of the new science of search runs counter to core postulates and assumptions of modern science. Additional evidence of an even deeper rift between the two methods is easy to gather. Western science used to apply causality to bigger and bigger groups, or sets, or classes of events—and the bigger the group, the more powerful, the more elegant, the more universal the laws that applied to it. Science, as we knew it, tended to universal laws—laws that bear on as many different cases as possible. The new science of data is just the opposite: using information retrieval and the search for precedent, data-driven prediction works best when the sets it refers to are the smallest. Indeed, searches are most effective when they can target and retrieve a specific and individual case—the one we are looking for.41 In that, too, the new science of data represents a complete reversal of the classical (Aristotelian, Scholastic, and early modern) scientific tradition, which held that individual events cannot be the object of science.42

In social science and in economics, this novel data-driven granularity means that instead of referring to generic groups, social and economic metrics can and will increasingly relate to individual cases. This presages a brave new world where standards and averages will no longer be either required or warranted: fixed prices, for example, which were introduced during the Industrial Revolution in order to standardize retail transactions, have already ceased to exist, as each retail transaction in a digital marketplace today is an algorithmically customized one-off, delivered at zero processing costs, or almost.43 Likewise, the cost of medical insurance, calculated as it still is on the basis of actuarial and statistical averages, could become irrelevant, because it may be possible to predict, at the granular level, that some individuals will never have medical expenses, hence they will never need any medical insurance, and some will have too many medical expenses, hence no one will ever sell them any medical insurance. The individual that is the object of this new science of granular prediction will no longer be a statistical abstraction—it will become each of us, individually. This may be problematic from a philosophical and religious point of view, as it challenges traditional ideas of determinism and free will; but in more practical terms, it is also incompatible with most principles of a liberal society and of a market economy in the traditional, modern sense of both terms.

Natural sciences, however, offer quite a different picture. As recent works by Neri Oxman and others have shown, we can now design and fabricate materials with variable properties at minuscule, almost molecular scales; and we can detect, quantify, and take into account the infinite, minute, and accidental variations embedded in all natural materials—a capriciousness that made natural materials unsuitable for industrial use, and almost eliminated them from modern industrial design.44 Artisans of pre-industrial times did not have much choice: they had to make do with whatever natural materials they could find. For example, when Alpine farmers had to roof a new chalet, they looked high and low (literally) for a tree that would be a good fit for the ridge piece; sometimes the shape of the roof would be tweaked to match the quirks of the best available trunk. And cabinetmakers could (and the extant few still can) skillfully work around almost any irregularity they find in a plank of timber and make the most out of it. But industrial mass production follows a different logic. To be used in an assembly line, or pieced together by unskilled workers, timber must be processed and converted into a homogeneous material compliant with industrial standards—as plywood, for example, which is a factory-made industrial product, although derived from wood. Artisan masons of old (and few survive in the so-called industrialized countries) knew very well how to make concrete on site the smart way, making it stronger, for example, in the angles and corner walls (more cement), cheaper in some infill (more rubble), thinner and more polished next to some openings (more sand), etc. But for engineers, concrete had to be dumb, homogeneous, and standard, the same all over, because that was the only material they could design with the notational and mathematical tools at their disposal. Even assuming an engineer could calculate the structural behavior of variable property concrete (concrete with different performances in different parts of the same structure), until recently there was no practical way to produce those variations to specifications, either by hand or mechanically. After all, reinforced concrete is only an elementary, two-property material, yet it took several decades to learn a consistent, reliable way to design, calculate, fabricate, and deliver it.

In theory, and increasingly in practice, digital design and fabrication tools are eliminating many of the constraints that came with the rise of industrial standards. X-ray log scanning, for example, is already used in forestry: trees are scanned prior to felling, and the cutting of the boards is customized for each trunk to minimize waste. The scan is discarded by the sawmill after the planks are sold, but there is no reason not to envisage a full design-to-delivery workflow, in this case extended to include the natural production of the source material—from the forest to the end-product, perhaps from the day the tree is planted (which would once again curiously emulate ancestral practices of our pre-industrial past).45 Each tree could then be felled for a specific task: a perfect one-to-one match of supply and demand that would generate economies without the need for scale—which is what digital technologies typically do when they are used the right way. Likewise, variable property materials can now be designed and fabricated at previously unimaginable levels of resolution, including concrete, which can be extruded and laid by nozzles on robotic arms, so each volumetric unit of material can be made different from all others. This is what artisanal concrete always was—which always scared engineers to death, because they could not design and calculate that. Today we can.

2.13 Alisa Andrasek, Wonderlab, AD Research Cluster 1, B-Pro M.Arch Architectural Design, The Bartlett UCL, Morphocyte (2016). Variable property materials designed and fabricated through the simulation of the biological process of cellular division. Tutors: Alisa Andrasek, Daghan Cam, Andy Lomas. Robotics: Feng Zhou. Projects/Students: Zuardin Akbar, Yuwei Jing, Ayham Kabbani, Leonidas Leonidou. © Alisa Andrasek, AD Research Cluster 1, The Bartlett UCL.

Much as modern science tended to more and more general laws bearing on the largest possible sets of events, modern technology tended to the mass production of standardized materials that were designed to be, as much as possible, homogeneous and isotropic. Industrial standards were meant to generate economies of scale, but also, and crucially, homogeneous materials could be described and modeled using elegant mathematical tools such as differential equations and calculus.46 Calculus is a mathematics of continuity, which abhors singularities: it is perfect, for example, for quantifying the elastic deformation of any homogeneous chunk of continuous matter. That is why the modern science of engineering can calculate the stress and deformations of the Eiffel Tower, which is made of iron, but until recently the same science could not calculate the resistance of a ten-foot-high brick-and-mortar garden wall.

To the contrary, using digital simulation and data-driven form-searching, we can now model the structural behavior of each individual part in a hypercomplex, irregular, and discontinuous 3-D mesh. And using digital tools, we can fabricate any heteroclite mess precisely to specs, on time and on budget: robots will see to that. Industrial materials were standardized so they could be calculated and mass-produced. Today we can calculate and fabricate variations at all scales, and compose with unlimited variations as needed or as found in nature. Used this way, the new science of granular prediction does not constrain but liberates, and almost animates, inorganic matter.47 And far from being a mere, albeit powerful, inspirational metaphor—which it has been since the start of the digital turn in architecture—vitalism is already, in many cases, an actual and perfectly functional strategy, underlying or already embedded in many experiments, tendencies, and trends that populate today’s computational design.

Indeed, alongside the traditional, positivistic approach to the digital design and fabrication of variable property materials—which would push the resolution of predictive models and design notations to the highest level of granular detail compatible with the task, materials, and technology at hand—another, quite different option appears to be increasingly viable. As Menges has shown, in some cases the easiest way to cope with unwieldy or quirky materials is to devolve some capacity for adaptive improvisation to the last stage of robotic manufacturing, above and beyond traditional margins of tolerance.48 Given the ease and speed of data collection by ubiquitous sensors during all phases of production, robotic manufacturing can already include some reactive, autonomous skills. Regardless of all practical considerations, this approach also reflects a certain idea of the physical world and of the nature of matter: if inorganic matter is alive (as some believe it is, regardless of etymology), then its behavior is also to some extent unpredictable or indeterminable, and the only way to deal with the inherent capriciousness of such “living” materials is to react to their whims and volitions on the spot and on the fly. This would once again vindicate the well-known, pervasive analogy between computational fabrication and the “smooth” tooling of traditional craftsmanship: no artisan would X-ray a piece of timber before working on it, but all good artisans would know how to make the best of whatever they find in it when they start carving it. Likewise, many expert dentists would refrain from advanced digital scanning of a tooth they must treat, and would rather keep drilling it, tentatively, until their hapless patient screams. A first and obvious technological upgrade of this truculently heuristic method would be to have the dentist 3-D scan the tooth to the highest possible resolution, then calculate the best path for the drill on the model, before surgery starts—thus turning the dentist into an engineer, and in fact into a designer, as the whole surgery would be designed in full and in advance on a digital model of the operating theater. This would be the approach of modern structural design, and of design in general, as it has been known since the Renaissance: design is a predictive tool; it models something before it happens. Yet while many patients today would undoubtedly like their dentists to behave like designers, many avant-garde designers seem to prefer their robots to behave like old-school dentists—stopping when the material screams, so to speak. And, oddly, this trial-and-error approach to adaptive and reactive robotic fabrication is already yielding more promising and more practical industrial applications than the traditional scan-and-design approach. Whether we like it or not, the future of robotics may be closer to popular quackery than to industrial engineering.
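The difference between the scan-then-design approach and this reactive mode of fabrication can be suggested with a toy feedback loop. All functions below (read_sensor, planned_step, adjust_path) are hypothetical placeholders, not the interface of any actual robotic system; the sketch only illustrates the principle of adjusting each step from live sensor data rather than computing the whole toolpath in advance.

```python
# Sketch of a reactive fabrication loop: instead of deriving the entire toolpath
# from a high-resolution scan, the machine adjusts each step on the fly from
# live sensor feedback. All functions are hypothetical placeholders.

def read_sensor(step):
    # Pretend measurement of material resistance at the current step.
    return 0.4 + 0.3 * ((step * 37) % 10) / 10.0

def planned_step(step):
    # The nominal toolpath increment the design notation would prescribe.
    return 1.0

def adjust_path(nominal, resistance, threshold=0.6):
    # React to the material's "whims": slow down where resistance is high.
    return nominal * 0.5 if resistance > threshold else nominal

position = 0.0
for step in range(10):
    resistance = read_sensor(step)
    position += adjust_path(planned_step(step), resistance)
    print(f"step {step}: resistance={resistance:.2f}, position={position:.2f}")
```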

2.14 Jenny E. Sabin, PolyBrick (2015–16). Polybrick 1—Unfired PolyBricks featuring 3-D printed high-fire clay body. Principal investigator: Jenny E. Sabin. Design research team: Martin Miller, Nicholas Cassab, Jingyang Liu Leo, David Rosenwasser. ©Sabin Design Lab, Cornell University. Image courtesy: Cooper-Hewitt Design Museum.

2.5 Spline Making, or the Conquest of Free Form

All tools modify the gestures of their users, and in the design professions this feedback often leaves a visible trace: when these traces become consistent and pervasive across objects, technologies, cultures, people, and places, they coalesce into the style of an age and express the spirit of a time. The second digital style, the style of a data-affluent society and of a nouveau data-rich technology, is the style of the late 2010s. And, as often happens in the history of technology, a good way to assess what is distinctive in the things we make, and in the way they look, is to look at the tools we have stopped using and take stock of the things we have just stopped making. Nothing shows the small-data logic of the first digital age in architecture better than the history of its most distinctive and recognizable trait, the spline-based curve.

As we now know, the first digital style in the 1990s turned out to be one of curves—or, as designers like to say, of “spliny” curves, in reference to the mathematics of continuous curves, or splines—which was one of the novelties of early CAD/CAM, computer graphics, and animation software. Yet one would be hard pressed to find any overarching or long-lasting reason to explain why computers should be primarily—or indeed, almost exclusively—used to make sinuous lines and curving surfaces, as they were in the 1990s. In fact, the theory of digital mass customization, as it was known back then, would suggest just the opposite.49 Starting from the early 1990s, the pioneers of digitally intelligent architecture argued that variability is the main distinctive feature of all things digital: within given technical limits, digitally mass-customized objects, all individually different, should cost no more than standardized and mass-produced ones, all identical. As computers and robots do not articulate aesthetic preferences, using CAD/CAM technologies we should be able to design and make boxes as well as blobs, as need be, at the same unit cost. Digital curvilinearity began to emerge as a theoretical trope in the mid to late 1990s, when it was seen as a side effect of sorts of the so-called Deleuze connection in architecture, in particular through the influence of Deleuze’s book, The Fold: Leibniz and the Baroque.50 But the theory of the objectile (better known today as digital parametricism), as outlined by Deleuze and Bernard Cache in that book, spoke for digital variability as a general tenet of nonstandard mass production, unrelated to any specific visual form. Deleuze’s “fold” itself was indeed a mathematical curve, which Deleuze related to continuous functions and to Leibniz’s invention of differential calculus; but the early digital avant-garde preferred to interpret even the Deleuzian “fold” as an angular crease, in the tradition of Peter Eisenman’s deconstructivism (and Eisenman himself was central to this part of the story).51 In short, nothing predestined the first wave of digitally intelligent designers to become streamliners. Nothing, that is, except the ease of use of the new spline-modeling software that became available in the early 1990s.

Spline modelers are those magical instruments, now embedded in most software for computer graphics and computer-aided design, that can script free-form curves of any kind, and translate every random cluster of points, every doodle or uncertain stroke of a pencil, into perfectly smooth and curving lines. This apparently inexorable program of universal polishing of the man-made environment (which is in fact a contingent side effect of the mathematical tools at our disposal for its notation and representation) derives from a complicated genealogy of mechanical, mathematical, and computational developments, each offering a particular take on the matter: sometimes aimed at finding the smoothest line through some arbitrary points; sometimes at the design of a randomly continuous curve; sometimes at its approximation through recursive subdivisions, and so forth.52

The most important component of today’s curve-generating software derives from studies carried out in the late 1950s and early 1960s by two French scientists, Pierre Bézier, an engineer by training, and mathematician Paul de Casteljau, working for the Renault and Citroën carmakers, respectively. A few years apart, de Casteljau and Bézier found two different ways to write down the parametric notation of a general, free-form, continuous curve (or of more such curves joined together). The two methods eventually merged, and although it is now known that de Casteljau’s work came first, Bézier was the first to publish his findings (as of 1966). As a result these curves are known to this day as Bézier curves; recent literature distinguishes between Bézier’s curves and the de Casteljau algorithm still used to calculate them.53 Neither Bézier nor de Casteljau used the term “spline,” and there is evidence that Bézier perceived his own mathematical breakthroughs as conceptually independent from the mechanical and mathematical tradition of spline making.54 The term “spline” derives from the technical lexicon of shipbuilding, where it described slats of wood that were bent and nailed to the timber frame of the hull.55 The slats had to join those structural points in the smoothest possible way to avoid turbulence in the streamline (the line of contact between the water stream and the hull) and to limit drag. Later on, similar operations were performed by hand in other branches of engineering for similar aerodynamic reasons; the term equally refers to flexible rubber strips that technical draftsmen used until recently for drawing smooth curves between fixed points (drafting or draughting splines; or lofting splines when executed in full size). A spline is thus the smoothest line joining a number of fixed points, but there seems to have been no scientific definition of it before 1946, when the mathematician Isaac Jacob Schoenberg used the term to designate a new function he invented to calculate interpolations in statistical analysis. Basis Splines, or B-Splines, as these mathematical functions were then called, were eventually upgraded to include Bézier curves, and further generalized under the name of NURBS (for Non-Uniform Rational B-Splines): NURBS are today the most common notation for free-form curves in all branches of digital design and manufacturing.56 Recursive subdivisions, another method of curve generation, can approximate and parameterize existing shapes and volumes as found, and for that reason, subdivision-based CAD software is widely used by the animation industry.57 When animation software first became widely available, in the late 1990s, designers often saw subdivisions as an alternative, more “naturalistic” approach to free-form, unbounded by the engineering constraints of early CAD/CAM software; the mathematical premises of subdivision algorithms are also very different from those of splines and Bézier curves. Yet for the last ten years or so, “Subs,” or “Sub-Ds,” as students sometimes call them, have been mostly used as different means to the same end—namely, to generate smooth parametric curves and surfaces. Regardless of the mathematics used to notate them, or of the software used to draw them, splines, NURBS, Bézier curves, and Subs generate high-tech, sleek, streamlined images and objects from which every sign of human intervention—the wavy, uncertain trace of the gesture of the human hand and its analog, variable tools (angles, junctions, gaps)—has been removed.
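De Casteljau’s algorithm, mentioned above as the procedure still used to evaluate Bézier curves, reduces to repeated linear interpolation of the control points. A minimal Python sketch follows; the control points are arbitrary examples, not drawn from any real design.

```python
def lerp(p, q, t):
    # Linear interpolation between two 2-D points.
    return ((1 - t) * p[0] + t * q[0], (1 - t) * p[1] + t * q[1])

def de_casteljau(control_points, t):
    # Repeatedly interpolate adjacent points until one point remains:
    # that point lies on the Bézier curve defined by the control points.
    points = list(control_points)
    while len(points) > 1:
        points = [lerp(points[i], points[i + 1], t) for i in range(len(points) - 1)]
    return points[0]

# A cubic Bézier curve defined by four (arbitrary) control points,
# sampled at a few parameter values.
ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 3.0), (4.0, 0.0)]
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(t, de_casteljau(ctrl, t))
```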

The original purpose and task of Bézier’s and de Casteljau’s math was, indeed, to eliminate precisely that kind of manual approximation from the design process. As Bézier and de Casteljau both recount in their memoirs, the mathematical notation of curves and surfaces was meant to increase precision, reduce tolerances in fabrication, and save the time and cost of making and breaking physical models and mock-ups.58 We can see why carmakers in the 1950s and 1960s—particularly French ones—would have been interested in that technology: streamlining (aerodynamics) was then popular in car design, but the molds to cast dies for the metal presses had to be individually handmade by artisan model-makers, as were all the prototypes before production; the final design of a car, for example, was not a set of drawings but a 3-D master model made in hardwood, from which blueprints and their measurements would be derived as and when needed. The famously aerodynamic Citroën DS was designed in Paris in the years immediately preceding de Casteljau’s studies—and entirely by hand, which is probably one reason why de Casteljau, a young mathematician then just back from his military service in Algeria, was hired by a research department at Citroën called “détermination mathématique des carrosseries” (mathematical determination of bodywork), headed by a Mr. Vercelli, about whom nothing more is known.59

Bézier’s and de Casteljau’s research, at the start, appears to have been primarily motivated by the mathematical ambition to translate general free-form curves into equations. Digital fabrication must have seemed a less urgent prospect back then: Bézier’s first experiments in numerically controlled milling machines were abandoned in 1960, and the first computer-driven drafting machines he built from scratch in the years that followed must have seemed so unpromising, in commercial terms, that the Régie Renault, then state owned, allowed Bézier’s team to publish a series of scholarly papers where the new technology was described in full.60 Thus Bézier’s research was at the basis of the CAD/CAM system that Renault kept developing and later adopted, called UNISURF, but also of competing proprietary technologies developed by other companies, such as the aircraft maker Dassault. On the other side of town, de Casteljau’s team was bound to secrecy for longer. De Casteljau recently wrote that, due to a prolonged strike of wood modelers, the templates for the Citroën GS were the first to be produced entirely by machines, but it is not clear to what extent the bodywork itself was designed on screens rather than in clay and wood: the GS started production in 1970, and its design was developed throughout the 1960s.61

Of course, this was not an exclusively Gallic story. At the same time as de Casteljau’s and Bézier’s studies on free-form curves, research on B-Splines was being carried out at MIT, at Boeing, at the British Aircraft Corporation, and particularly by Carl de Boor at General Motors, which developed its own CAD/CAM system in the 1960s; in 1981 Boeing was among the first adopters (or perhaps the inventor) of NURBS, which in the same year were endorsed by the Initial Graphics Exchange Specification (IGES), a standard maintained by a consortium of industry and government bodies.62 Yet Bézier often recounted that, as late as 1971, after one of his presentations, a manager at Renault objected, “If your system were that good, the Americans would already have invented it!”63 In 1969, Robin Forrest, whose work on Bézier’s curves would contribute to the generalization of B-Splines to NURBS, had to travel from Cambridge, England, to the GM Research Laboratories in Detroit to be shown “the crazy way Renault designs surfaces.”64 And in July 1991, when the office of Frank Gehry in Los Angeles started to look for a suitable CAD/CAM technology for the design of a building in the streamlined shape of a fish (which would become the Barcelona Fish, still prominently floating over Barcelona’s Olympic Marina), through a succession of phone calls and the intercession of Bill Mitchell at MIT and of Rick Smith at IBM, Gehry’s office was eventually referred to Dassault’s headquarters in Paris. The role of Dassault’s CATIA software in contemporary architecture is, to this day, better known than its industrial history before and after its architectural reincarnation.65

Few of today’s designers would have become keen spline-makers if they had had to make each spline by hand, bending slats of wood, or if they had had to slog through all of Bézier’s math with paper, pencils, and a slide rule. This is one reason why free-form curves were seldom built in the past, except when absolutely indispensable, as in boats or planes, or in times of great curvilinear exuberance, such as the baroque, or the Space Age in the 1950s and 1960s. But, as it happens, in one of those only apparently serendipitous encounters that mark the history of technological change, mathematics and technology here kept crossing paths. As computers became smaller and cheaper, CAD software migrated from corporate mainframe computers to workstations to desktop personal computers; AutoCAD, the first CAD tool designed for MS-DOS, was released in 1982.66 As of the early 1990s, affordable commercial software for computer-aided design started to include powerful spline modelers that made Bézier’s math easily accessible through graphical user interfaces: control points and vectors that anyone could easily edit, move, and drag on the screen with the click of a mouse. This game turned out to be faster and more intuitive than the mathematics on which it was based, and digital designers started to play it with gusto. Form-Z, the most influential of these early packages, was developed at Ohio State University, apparently with the complicity of Peter Eisenman, and released in 1991.67

Bézier’s curves, B-Splines, and NURBS are pure mathematical objects, based for the most part on differential calculus; their smoothness, which we perceive as a visual and tactile quality when splines are built, is a quantifiable entity throughout the design and production process, defined by one or more derivatives of the function that describes the curve. Mathematical objects do not belong to the phenomenological world we inhabit; designers using spline modelers “model” reality by converting it into a stripped-down mathematical script, and the continuous lines and uniform surfaces they draw or make in physical reality are ultimately only a discrete, material approximation of the mathematical functions they use to notate them and computers then use to calculate as many points belonging to them as needed.68 Of course, not every digitally intelligent designer in the 1990s was a pure spline-maker: Greg Lynn and Bernard Cache explicitly claimed to use calculus as a primary tool of design,69 while Frank Gehry (for example) famously used computers to scan, measure, notate, and build the irregular, nongeometric three-dimensional shapes of his handmade maquettes.70 From its inauguration in 1997, Gehry’s Guggenheim Bilbao (designed from 1991 to 1994) was hailed as a global icon of the new digitally driven architectural style, but as most digital designers at the time used curve-generating software to even out, somehow, all final lines and surfaces, the divide between free-form shapes, subdivisions, and mathematical splines is often a tenuous one, and not easy to perceive: regardless of the process (based on mathematics from the start, as in the case of Cache and Lynn, or derived at least in part from natural accidents, as in the case of Gehry), what one sees if one just looks is, simply, a landscape of sweeping, spliny curves. This is the visual aspect that was mostly noted at the time, and defined the style for which the first digital age became famous, for better or worse.

Fast-forward to 2016. The Internet boom famously went bust in 2001, but the spline-dominated, curvilinear style now often associated with the “irrational exuberance” of the digital 1990s lived on, still successfully practiced by some of the early pioneers and handed down to a new generation of younger digital designers. With technical progress, many visionary predictions from the digital 1990s are now becoming a reality. Big, streamlined surfaces can now be built at more affordable prices, and recent projects by Zaha Hadid and others deploy this sinuous language at ever bigger and bolder scales, with a level of technical and formal virtuosity that would have been unimaginable only a few years ago. Today this style is often called “parametricism,”71 and at its core lie mathematical splines and NURBS, its most distinctive notational and technical tools. Mathematical splines, in turn, are based on analytic geometry and calculus: Newton’s and Leibniz’s calculus is, to this day, the best instrument to describe continuous lines that are characterized by variations, and variations of variations. The mathematics of free-form is thus the culmination of a historical process that started with Descartes, Leibniz, and Newton: baroque mathematics found a way to use equations to replace the drawings of some simple geometrical figures—straight lines and conics; using basically the same tools, with only marginal upgrades, today we can notate any curve whatsoever. Bézier, so to speak, finished the job that Descartes and Leibniz had started. Yet once again, if we look at the history of mathematics in terms of pure quantitative data analysis, it is hard not to see calculus—just like logarithms, another great invention of baroque mathematics—as another, but even more astounding small-data technology: perhaps the ultimate small-data technology of modern science.

2.15 ICD Institute for Computational Design (Prof. Achim Menges). ITKE Institute of Building Structures and Structural Design (Prof. Jan Knippers). ICD/ITKE Research Pavilion 2014–15, sensor driven real-time robot control of cyber physical fiber placement system, University of Stuttgart, 2015. © ICD/ITKE University of Stuttgart.

2.6 From Calculus to Computation: The Rise and Fall of the Curve

Consider the mathematical notation of any continuous line in the well-known format y = f(x). How many points does any such notation include and describe? Plenty: an infinite number of them. In practice, that script contains all the points we would ever need in order to draw or produce that line at all possible scales. But let’s assume, again, to the limit and per absurdum, that we can have access to unlimited, zero-cost data storage and processing power. In that case, freed from the need to skimp on data, we could easily do away with any synthetic mathematical notation and record instead an inordinately long, dumb log: the list of the positions in space (x-, y-, z- coordinates) of as many points of that line as necessary. The resulting cluster of points would not appear to follow any rule or pattern, nor would it need to, so long as each point is duly identified, tagged, and registered—ready for use, so to speak. This is exactly the kind of stuff humans don’t like, but computers do well. That mathematical script (the y = f(x) functional notation) is a compact, economical, small-data shorthand we use to replace what is in fact an extraordinarily long list of numbers. As the list itself would be too long for our small-data skills, we convert it into a very short formula. That formula is much easier for us to manipulate, process, and remember than the list itself would be; just like any of the modern classification systems discussed above, an equation or function also, miraculously, allows us to retrieve all the events it refers to (in this instance, to recalculate the coordinates of all the points it indexes), whenever needed. Thus an equation compresses, almost miraculously, an infinite number of points into a short and serviceable alphanumerical script.
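The contrast between the two notations can be made concrete with a trivial example (an arbitrary parabola, chosen only for illustration): the same curve stored once as a short formula and once as an explicit log of sampled coordinates.

```python
# Small-data notation: a whole (infinite) set of points compressed into one formula.
f = lambda x: 0.5 * x ** 2 - 3 * x + 1   # y = f(x)

# Big-data notation: an explicit, "dumb" log of coordinates, one entry per point.
# Here only 10,001 samples; in principle the list could grow without bound.
xs = [i / 1000 for i in range(10_001)]          # x from 0.0 to 10.0
point_log = [(x, f(x)) for x in xs]

print(len("0.5*x**2 - 3*x + 1"))   # size of the formula, in characters
print(len(point_log))              # size of the log, in recorded points
```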

That works very well for us, which is why we still celebrate the names of Descartes, Leibniz, and Newton and we still study analytic geometry and calculus at school—in fact, this is why all the math we study at school, starting from the age of six, is ultimately aimed at getting there—at mastering differential calculus more or less by the time we reach adulthood. But computers do not work that way. When fed a raw, unstructured list, log, or inventory of any size, computers can just keep it as it comes—in its pristine sequence or any other, even if random or haphazard. Computers can search and retrieve each item in any list, regardless of the way that list is or is not ordered, because this is what computers do best: unlike us, and just like Gmail, computers can search without any prior sorting.72 That list may be unimaginably long, but computers don’t care; and the computer’s big data logic makes perfect economic sense, if data cost nothing. A list where raw data are kept unsorted does not make any sense to us, but computers do not care about that either. We humans need to sort (organize, classify, formalize, order, structure) a list to make it usable (so we can retrieve the items in it) and to make it meaningful (so we can organize ideas and things by hierarchies or orders of causation). Computers are not in the business of finding meanings and can use any huge, messy, untreated, and unprocessed random inventory just fine: they can search without sorting; hence they can predict without understanding. And, apparently, in many cases computers can already predict that way better than we can in our own, traditional, small-data way—which was that of modern science.
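A toy illustration of the point, with the obvious caveat that real search engines are vastly more elaborate: a list of records kept in arbitrary order can still be searched directly, item by item or through a hash index, without any prior sorting or classification. The records below are invented.

```python
# An unsorted, unstructured "inventory": its order is arbitrary and irrelevant.
inventory = [
    {"id": "b7", "kind": "beam", "length": 4.2},
    {"id": "a3", "kind": "arch", "span": 9.0},
    {"id": "z1", "kind": "vault", "span": 12.5},
    {"id": "c9", "kind": "beam", "length": 2.8},
]

# Search without sorting: scan the raw list for the item we want.
hit = next(item for item in inventory if item["id"] == "z1")
print(hit)

# Or build a hash index, which retrieves items directly, again without
# imposing any order, hierarchy, or classification on the data.
by_id = {item["id"]: item for item in inventory}
print(by_id["c9"])
```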

The calculus-based spline is a quintessential small-data tool. As a design figure, splines are space-age technology; they belong with the Beatles and flared jeans. Let’s be honest: if we only look at forms, or style, and we forget about the technology, the digital spline of the 1990s was a revival. Computers made streamlining cheaper and better, easier to design and make—but streamlining was certainly not a new idea, and in the last decade of the twentieth century the spline was certainly not a new form. By providing computational tools and graphical user interfaces for Bézier’s math, digital spline modelers gave streamlining a new lease on life. That was a very good idea twenty years ago, when computers, processing power, and data were expensive. In that context, it made perfect sense to use computers to emulate and replicate the small-data logic of modern mathematics—in a sense, to make computers imitate us. But in today’s big data environment that mimetic effort is no longer necessary—indeed, it is no longer warranted. Computers can work better by following their own logic. We can make computers sort before searching, the way we do. But computers already achieve much better results when we let them search without sorting, the way we don’t and can’t do.

Revolutions in manufacturing tend to happen first at a small scale, and scaling up may sometimes be late in coming. In this instance, digital splines revolutionized graphic design one decade before they changed the history of world architecture. It all started with laser printing. Mechanical printers can only print from a limited library of built-in metal fonts: think of a typewriter. The interchangeable typeballs or daisy wheels in the electric typewriters of the 1960s and 1970s allowed typists to switch between a few styles of fonts, but replacing the typeballs or wheels took time.73 Laser printers, by contrast, can print all kinds of fonts (and indeed any rasterized image), seamlessly and from the same machine; so, when affordable laser printers became available, in the mid-1980s, word processors started to upgrade their libraries of fonts, adding new styles and sizes.74 Following the traditions of typography, each font (for example, Times) should then have been designed anew for each size (8p, 10p, 12p …), and each glyph digitized as a rasterized map of pixels; but this would have created graphic files far too big for the limited memories and processors of early personal computers. The solution came with Adobe’s PostScript software, first released in 1984.75 PostScript notated the design of each glyph mathematically, as a combination of straight lines and Bézier curves. The advantage was that the same script would fit all sizes, because the same formula would generate the same drawing (say, the lowercase a in Times font) at every scale the available software and hardware would support, on the screen as well as in print, and each of these scalable signs would look the same (hence the acronym, then so popular, WYSIWYG, for “what you see [on the screen] is what you get [in print]”). Thus, thanks to the math of free-form curves (Bézier’s, etc.), a vast and unwieldy graphic library was compressed into a file so small that it could run on most PCs of the time (which incidentally also led to the desktop publishing revolution of the late 1980s and 1990s).
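The economy of that strategy can be suggested in a few lines: a glyph outline stored as Bézier control points scales to any size simply by scaling the control points and re-evaluating the curve, whereas a bitmap stored at one size must be stored again for every other size. The “glyph” below is a made-up, single-segment outline, not an actual PostScript font.

```python
def bezier_point(ctrl, t):
    # Evaluate one point of a Bézier curve by repeated linear interpolation.
    pts = list(ctrl)
    while len(pts) > 1:
        pts = [((1 - t) * p[0] + t * q[0], (1 - t) * p[1] + t * q[1])
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# A made-up "glyph" outline: one cubic Bézier segment stored as 4 control points.
glyph = [(0, 0), (2, 8), (6, 8), (8, 0)]

def render(ctrl, size, samples=8):
    # Scale the control points, then re-evaluate: the same script fits every size.
    scaled = [(x * size, y * size) for x, y in ctrl]
    return [bezier_point(scaled, i / (samples - 1)) for i in range(samples)]

print(render(glyph, size=1))    # outline at size 1
print(render(glyph, size=12))   # outline at size 12, from the same four control points
```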

Once again, however, that entire strategy would be unwarranted in today’s computational environment. Processors and storage devices are now so cheap and powerful that huge graphic libraries of all kinds can be kept and processed almost everywhere without the need for any sophisticated compression technologies. For example: if we wanted to, we could redesign each glyph to allow for design variations specific to each size of a font (as was the case in traditional typography), and we could do that for an inordinate number of different fonts. That would create a very large inventory of bitmaps—but again, today that would hardly be a problem. Of course, no matter how big that inventory, the number of available bitmaps would always be limited, and as bitmaps are not scalable, chances are that sooner or later someone would fail to find a font in the exact size or resolution needed. Using splines, that would never happen: that is indeed the great notational and, in a sense, ontological advantage of the mathematics of continuity over all arithmetics of discreteness, old and new alike; an advantage which, in this instance, we would deliberately relinquish. Would that matter? Today, we might Google the font we need and find out that it already exists somewhere. Someone could be tasked with adding the missing parts of the design when the time comes. Or we might let a few pixels show in all of their coarse discreteness for a while, until someone, or something, interpolates, or fills the gaps. This is what some designers of the second digital age are now doing.

2.7 Excessive Resolution

In different ways, today’s digital avant-garde has already started to use Big Data and computation to engage, somehow, the messy discreteness of nature as it is, in its pristine, raw state—without the mediation or the shortcut of elegant, streamlined mathematical notations. The messy point-clouds and volumetric units of design and calculation that result from these processes are today increasingly shown in their apparently disjointed and fragmentary state; and the style resulting from this mode of composition is often called voxelization, or voxelation. The Computational Chair Design Studies by Philippe Morel of EZCT Architecture & Design were among the earliest demonstrations of this approach,76 and the 2013 ArchiLab exhibition in Orléans, France, revealed this new formal landscape at a glance (see, for example, works by Alisa Andrasek and Jose Sanchez, Marcos Cruz and Marjan Colletti, Andrew Kudless, David Ruy and Karel Klein, Jenny Sabin, and Daniel Widrig).77 Subdivision-based programs, originally used to simulate continuous curves and surfaces, are today often tweaked to achieve the opposite effect, and segments or patches are left big enough for the surface to look rough or angular. Discreteness also underlies the finite element method (seen above),78 which is now embedded in most software for structural design and which represents in many ways an early example of “agnostic” science,79 where the prediction of structural behavior is separated from causal interpretations.

2.16 Zaha Hadid Architects, Heydar Aliyev Centre, Baku, Azerbaijan (2007–12). Photo: © Hufton + Crow.

2.17 Philippe Morel / EZCT Architecture & Design Research, Studies on Optimization: Computational Chair Design Using Genetic Algorithms (with Hatem Hamda and Marc Schoenauer) (2004). The version “T1-M 860” is obtained through the optimization of 860 generations (86,000 Finite Elements Analysis–based evaluations). EZCT Architecture & Design Research © 2004. Photo: Ilse Leenders.

2.18 Alisa Andrasek and Jose Sanchez, BLOOM, a crowdsourced garden, urban toy, and social game, developed for the London Olympics (2012). Visitors were invited to change the layout of the initial pavilion and to add or combine new pieces for seating or other purposes. © Alisa Andrasek and Jose Sanchez.

More examples could follow, but the spirit of the game is the same: in all such instances, designers use the power of today’s computation to notate reality as it appears at any chosen scale, without converting it into simplified and scalable mathematical formulas or laws. The inherent discreteness of nature (which, after all, is not made of dimensionless Euclidean points or of continuous mathematical lines but of distinct chunks of matter, all the way down to molecules, atoms, electrons, etc.) is then captured and, ideally, kept as it comes, or in practice as close to its material structure as needed, with all of the apparent randomness and irregularity that will inevitably show at each scale of resolution. Evidently, the abstract continuity of the spline does not exist in nature: we can write down splines as mathematical formulas and imagine them as a seamless flow of Euclidean points, but in physical reality we can only make most of them by discrete pieces, by pixels or voxels—which can only be as small as the finest resolution supported by the display, printer, or physical interface we are using.80 The manufacturing tool which best interpreted the spirit of continuity of the age of spline making was the CNC milling machine, a legacy subtractive fabrication technology that, using computer-controlled drills, could at its best simulate the sweeping, smooth, and continuous gestures of the hand of a skilled craftsman—a sculptor, but also a baker, or a wax modeler.81 Not surprisingly, the CNC milling machine was the iconic tool of the 1990s, and there was a time when every school of architecture in the world had, or wanted, one. Today the 3-D printer has taken its place: an additive fabrication technology, where each voxel must be individually designed, calculated, and made.

In its present form, the distinction between subtractive and additive making goes back to Leon Battista Alberti’s De Statua (composed in Latin at some point between 1435 and 1472), and to Michelangelo’s Letter to Benedetto Varchi of 1549 (see chapter 3). Not surprisingly, Michelangelo thought that only the effort of taking matter away from a block of solid stone (per forza di levare) was worthy of the name of sculpture. In more recent times, subtractive fabrication was the first to go digital: numerically controlled milling machines have been in use since the early 1950s, first driven by punched cards and tapes, then by electronic computers (hence the term CNC, for Computer Numerically Controlled). Three-dimensional printing and additive fabrication came later: stereolithography, invented in 1984, was known and marginally available in the 1990s, but cheap and versatile 3-D printers (including desktop 3-D printers) were launched only in 2008–09,82 and the technology has moved to the forefront of digital design and culture in the last few years.83 At the time of writing, 3-D printing is as influential for a new generation of digital makers as 3-D milling was at the turn of the millennium. This is ostensibly due to a number of developments in the technologies of manufacturing, but the two processes are also based on contrary, indeed incompatible, informational logics.

CNC milling can be as data rich as one wants it to be, or the machine allows, but in the absence of any signal (that is, in the case of zero-data input) digital subtractive technologies will still work—and deliver a plain, solid chunk of matter: in most cases, releasing the original surface in its pristine material state, unmarked, and without any denting, milling, or amputation. On the contrary, in additive technologies each voxel must be individually printed; in the absence of signal, additive fabrication delivers nothing at all. Hence digital milling can make do with few data, or even with no data—and indeed designers using digital subtractive technologies apply data to inform matter only as and where needed, whereas each 3-D printed voxel requires a certain amount of data and of machine time.

Furthermore, as each voxel is individually printed, and 3-D printing does not involve any reusable cast, mold, stamp, or die, there is no need, and no incentive, to make any voxel-generated volume identical to any other, regardless of scale or size. Mechanical printing technologies are matrix based, and any matrix, once made, must be used as many times as possible to amortize its cost. But standardization does not deliver any economy of scale in a digital design and fabrication workflow: just as twenty years ago we learned that we could laser print one hundred different pages, or one hundred identical copies of the same page, at the same unit cost, today we can 3-D print any given volume of a given material at the same volumetric cost, based on the number of voxels that compose it (that is, on resolution), not on geometry or configuration (that is, regardless of where each printed voxel will be relative to all others in the same volume).84 An economist would say that the marginal production cost of a voxel is always the same, no matter how many we print—and irrespective of how they will be assembled. Thus 3-D printing brings the logic of digital mass customization from the macro scale of product design to the micro scale of the production of physical matter—and at previously unimaginable levels of complexity and granularity: recent 3-D printers can create objects with variable densities and in multiple materials.
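A back-of-the-envelope version of the economist’s point, with invented unit costs: the price of a 3-D print depends on how many voxels are deposited (and on the machine time they take), not on how they are arranged.

```python
# Invented figures, for illustration only.
COST_PER_VOXEL = 0.000001   # material + machine time per deposited voxel, in "units"

def print_cost(voxels_deposited):
    # Marginal cost is constant: total cost depends only on the count of voxels,
    # not on their geometry or configuration.
    return voxels_deposited * COST_PER_VOXEL

side = 1000                                # a 1000 x 1000 x 1000 voxel bounding volume
solid_block = side ** 3                    # every voxel printed
hollow_grotto = int(solid_block * 0.35)    # same bounding volume, 65% left as void

print(print_cost(solid_block))     # the plain solid block costs more...
print(print_cost(hollow_grotto))   # ...than the intricate, partly hollow print
```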

These simple technical truisms have remarkable consequences. Let’s consider a seminal example of monumental 3-D printing, Michael Hansmeyer and Benjamin Dillenburger’s now famous digital grotto, commissioned by Frac Orléans and shown there in the summer of 2013.85 In spite of and against all appearances, the grotto was not carved from a block (in the subtractive way); it was printed from dust—that is, almost from nothing—in the additive way. As a result, the grotto we see, including all of its intricate details, was faster and cheaper to make than a plain full block of that size (if printed at the same resolution), simply because the void inside the grotto was not printed. If we had wanted the plain full block, we should have kept printing—and would have kept spending. Likewise, any onlooker familiar with the traditional (manual or mechanical, or even early digital) ways of making may instinctively assume that the astoundingly intricate detailing of the grotto must have cost even more labor and money—that is, labor and money that would have been saved had this detailing not been added. Not so: as all 260 million surfaces in this 30-billion voxel space had to be individually 3-D printed, the technical cost of delivering the same number of voxels in regular rows, so as to create plain, flat, and regular surfaces, would have been exactly the same.

2.19 Daniel Widrig, Degenerate Chair, Frac Centre Collection (2012). Author’s rendering.

2.20 Michael Hansmeyer and Benjamin Dillenburger, Digital Grotesque (2013). Test assembly (2.20) and detail (2.21).

This is a rather counterintuitive result, as we tend to think that decoration, or ornament, is expensive, and the more decoration we want, the more we have to pay for it. But in the case of Hansmeyer and Dillenburger’s grotto, the details and ornament we see inside it, oddly, made it cheaper. Of course this is only true in theory, if we disregard the time and cost of designing each voxel one by one—an operation that appears to have required some drastic design shortcuts, interpolations, and simplifications.86 All the same, the difference in technical and theoretical terms is striking, even revolutionary. Since the beginning of modern times, indeed since Leon Battista Alberti, Western architecture has developed a systematic theory of ornament as supplement: something which is added on top of an object or a building, and which can be taken away if necessary.87 But for this very reason, puritans, Taylorists, and modernists of all sorts have always condemned ornament as waste, superfluity, and, in Adolf Loos’s famous slogan, a crime—labor and capital thrown out the window, money that should have been better spent in some other way.

It now appears that the technical and cultural premises of all this are simply not true anymore. In the age of Big Data and 3-D printing, decoration is no longer an addition; ornament is no longer a supplemental expense; hence the very same terms of decoration and ornament, predicated as they are on the traditional Western notion of ornament as supplement and superfluity, do not apply, and perhaps we should simply discard these terms, together with the meanings they still convey.88 This opens a Pandora’s box of theoretical issues, which in turn undermine some core aesthetic and architectural principles of both the classical and modernist traditions.

2.8 The New Frontier of Alienation, and Beyond

As several critics have pointed out, the opulent detailing shown in recent 3-D printed pieces appears to require an almost inhuman—or posthuman—level of information management: no one can notate 30 billion voxels one by one without the intervention of a more or less intelligent tool for the machinic or automatic control of some design features, and these big-data works display a kind of design intelligence that is already closer to that of the machine than to ours.89 Others have also pointed out that the visual and tactile density evoked by these pieces is more than human senses can perceive, but that allegation is probably unfounded: no matter how much design and detailing we can now put into our work, its resolution is bound to compare poorly with anything that already exists in nature—in organic nature, and even in inorganic nature.90 Admittedly, the pieces we fabricate can now be designed at much smaller scales than the standard items of modern industrial mass production used to be (say, a steel I-beam); but in that too we are simply, little by little, getting closer to nature—not such an uncommon outcome among the mimetic arts.

Similar arguments have also been recently invoked to construe a theory of today’s digital style as a style of realism, excessive realism, or digital hyperrealism. In an interesting and quirky book just published by the polymath designer and theoretician Michael Young, the richness in detail and figural sumptuosity that are increasingly apparent in some works of today’s digital avant-garde are interpreted as an “estrangement” device, as defined by Viktor Shklovsky in 1917 (sometimes called “distancing effect” or “alienation effect,” and today better known as Bertolt Brecht’s hallmark performing art technique).91 In this instance, the distancing effect appears to derive from mostly technical factors, involving streaks and strands of Heidegger’s theory of Unzuhandenheit (the unhandiness of the technical object), better known today among digital theoreticians through the recent interpretations of Bruno Latour and Graham Harman: some technical objects that we do not even notice when they function become perceivable and hugely meaningful when they fail, or when some symptomatic anomalies suggest the possibility of an imminent failure.92 Michael Young cites as an example a 1999 work by the artist Jeff Wall, where what looks like a normal photograph is in fact a subtle montage of indexical incongruities acting in the background, almost surreptitiously, as an “intensifier of aesthetic attention,” and suggesting that while all seems right in the picture, something, somewhere, must be quite wrong.93 According to Michael Young, this unhandiness is a stylistic feature of today’s digital avant-garde: the overwhelming richness of digitally created detail induces feelings of discomfort, or estrangement, similar to the “weird realism” that Harman famously attributes to the horror fiction writer H. P. Lovecraft (a cyberpunk and cyberpulp cult writer, as well as the subject of the first published essay by the acclaimed misanthropic writer Michel Houellebecq).94 Assuredly, excessive resolution is a diacritical trait of the second digital style, and it has reasons to appear “weird.” Excessive resolution is the outward and visible sign of an inward and invisible excess of data: a reminder of a technical logic we may master and unleash, but that we can neither replicate, emulate, nor even simply comprehend with our mind. Again, this is not entirely unprecedented: just as the Industrial Revolution created prosthetic extensions that multiplied the strength of our natural bodies, the digital revolution is now creating prosthetic extensions that multiply the strength of our natural intelligence; but just as mechanical machines did not abide by—indeed, they often subverted—the organic logic of our bodies, digital machines now do not abide by—indeed, they often subvert—the organic logic of our minds. Thus, just as industrial products embodied an artificial technical logic that ran counter to that of natural hand-making (and many did not like that back then), computational products now embody an artificial logic that runs counter to that of natural, organic intelligence—the mode of thinking of our mind, as expressed by the method of modern science (and many today do not like that).
This may be one reason why the emergence of some inchoate form of artificial intelligence in technology and in the arts already warrants a more than robust amount of natural discomfort. The feeling of “alienation,” which originally, in Marx’s critique of the Industrial Revolution, meant the industrial separation of the hands of the makers from the tools of production,95 may just as well be applied today to the ongoing postindustrial separation of the minds of the thinkers from the tools of computation.

2.21 Michael Hansmeyer and Benjamin Dillenburger, Digital Grotesque (2013). Test assembly and detail.

2.22 Christian Kerez, Incidental Space (figures 2.22 and 2.23). Installation, Swiss Pavilion, 15th Venice Biennale of Architecture (2016). Neither designed nor scripted, Kerez’s walk-in grotto was the high-resolution, calligraphic transcription, 42 times enlarged, of a cavity originally produced by a random accident inside a container the size of a shoebox. As in Frank Gehry’s pioneering use of digital scanners to notate the shapes of irregular volumes in his handcrafted models, digital tools for design and production are turned into a seamless, universal 3-D pantograph, capable of capturing any accident of nature, of notating it at any level of geometrical resolution, and of replicating it at any scale of material fabrication. © Christian Kerez Zürich AG. Photos: Oliver Dubuis.

2.23

2.24 Marjan Colletti, Plantolith (2013), 250-kilogram silica-sand 3-D print. Photo: Marjan Colletti.

2.25 Quaquaversal Centrepiece at the Spring-Summer 2016 Iris van Herpen ready-to-wear collection (Musée d’Histoire de la Médecine, Paris, October 8, 2015). Three robotic arms from the University of Innsbruck’s REX|LAB, dressed up as fable-like creatures, manipulate and 3-D-print actress Gwendoline Christie’s dress. Design: Iris van Herpen, Jolan van der Wiel, Marjan Colletti + REX|LAB. Photo: Marjan Colletti.

2.26 Young and Ayata, Still Life with Lobster, Silver Jug, Large Berkenmeyer Fruit Bowl, Violin, Books, and Sinew Object After Pieter Claesz, 1641 (2014). Digital rendering and photomontage. Team: Emmanuel Osorno.

2.27 Young and Ayata, Base Flowers, Volume Gallery, Chicago (2015). Multimaterial 3-D print, resin, full color sandstone. Team: Sina Ozbudun, Isidoro Michan.

2.28 Alisa Andrasek, Wonderlab, AD Research Cluster 1, B-Pro M.Arch Architectural Design, The Bartlett UCL, Gossamer Skin (2016). Building skin based on environmental data (light, temperature, and acoustics) and robotically 3-D printed.

Tutors: Alisa Andrasek, Daghan Cam, Andy Lomas. Robotics: Feng Zhou. Projects/Students: Supanut Bunjaratravee, WeiWen Cui, Manrong Liang, Xiao Lu, Zefeng Shi. © Alisa Andrasek, AD Research Cluster 1, The Bartlett UCL.

The critique of the “weird realism” of some contemporary digital avant-garde appears to have been inspired by a recent philosophical movement known as speculative realism, or object-oriented ontology. Yet, in spite of a spate of recent publications on the matter, the two trends do not appear to have much more in common than the name—and a very general one at that. In the winter of 2015 several contributors to the journal Log tried to ascertain what “an object oriented architecture would look like,” arguing that the philosophy of speculative realism relates to a new design sensibility based on “joints, gaps, … misalignments, and patchiness”;96 but this simply describes one of the core traits of the second digital style, on which no philosopher of that school (or of any other) has ventured any opinion. However, another aspect of speculative realism may perhaps more deeply resonate with some concerns, ambitions, and expectations of today’s computational design. Regardless of what it originally meant, which is irrelevant in this context, the speculative realists’ notion of a “flat ontology” is often cited to endorse the view that minerals, vegetables, animals, and humans, as well as technical objects, can and should be seen as ontologically equal, all endowed with the same quantum of free will.97

While the idea that a stone, a cat, a parsnip, and a vacuum cleaner may decide to go out together in the evening for a drink may appear to defy common sense—or at least common experience—vitalism, animism, and spiritism have a long and distinguished tradition in the West (as sciences, beliefs, and crafts, as well as in witchcraft, sorcery, and magic). The so-called postmodern sciences of indeterminacy and nonlinearity always had a strong spiritualistic component (alongside a more positivistic, hard-science one); these doctrines and beliefs have always been a powerful source of inspiration for digital designers, particularly in the 1990s, when many thought that computers, and the Internet, would vindicate a long-standing, nondeterministic view of science and of nature. Likewise, the theories of emergence and of self-organizing systems, which have played an equally powerful role in the history of digitally intelligent architecture, always lent themselves to vitalistic interpretations—alongside more practical, instrumental ones.98 In the end, although few would admit it verbatim, the very same notion of artificial intelligence—of an inorganic life created by humans—could vindicate many time-honored assumptions of white and black magic, and fulfill some of its objectives. Without going to such extremes, a respectable, scholarly, university-grade philosophical school claiming that all matter is equally alive—including inorganic matter, which designers craft and mold and animate, in a metaphorical sense—is likely to appeal to contemporary designers and theoreticians who may actually believe in the animation of the inorganic, as some do. Likewise, today’s computer-based science of data, dealing as it does with previously unimaginable degrees of complexity, may appear as the next avatar of choice for the timeless human ambition to reach for something beyond the grasp of the human mind. As argued above, today’s computation is certainly out of kilter with the methods of modern science and the processes of our mind, both hardwired for small data. But does the new science of data warrant, encourage, or even just admit of any continuing belief in the indeterminacy (if not the animation) of the natural and social phenomena it describes?

Stephen Wolfram received a PhD in theoretical physics at the age of twenty, and two years later, in 1981, he was one of the first recipients of a MacArthur “genius grant.” In 1986 he started to develop Mathematica, a new scientific software package based on the operations and functions of modern mathematics.99 In a sense, Mathematica (at the time of this writing, a global industry standard for all kinds of applications, and not only in science and technology) allows computers to imitate and reenact the modern science of mathematics in its entirety—including notations, symbols, formalisms, and logic. Then Wolfram had another idea: he thought that, rather than making computers imitate us, we would be better off letting them work in their own way. He turned to cellular automata, a discrete mathematical model that had been known since the 1940s and had gained popularity in some postmodern circles in the 1970s. In 2002 Wolfram published a 1,280-page book, A New Kind of Science, claiming that, using cellular automata, machines can simulate events that modern mathematics cannot calculate, and modern science cannot predict.100

Cellular automata are rules or algorithms for very simple operations that computers can easily repeat an extraordinary number of times. Ostensibly, this is the opposite of human logic: as human operations are slow and human time is limited, we generally prefer to go the other way, and human science takes a lot of time to develop, hone, and refine a few very general laws that, when put to task, can easily lead to calculable results. Computers, having no knack for abstraction, prefer to repeat the same dumb operation almost ad infinitum (which they can do very fast) until something happens. By letting computers do just that, Wolfram’s “new kind of science” can already predict complex natural phenomena (such as the growth of crystals, the formation of snowflakes, the propagation of cracks, or turbulence in fluid flow) that modern science has traditionally seen as indeterminable or incalculable. When the right initial rules are intuited and let loose, computers will replicate those unpredictable events and play them out just as they unfold in nature. Wolfram never explained how to intuit the right rules—a process that may not be very dissimilar from that of arriving at the right causal laws in modern inferential science. As simulated tests are much faster than real ones, however, trial and error in this instance may be a viable heuristic method, and indeed Wolfram’s new scientific method is very similar to the process of computational simulation already current in structural engineering: when structures are too complex to calculate or even to understand using the rules and laws of classical mechanics, engineers generate a vast number of almost random variations and try them out in simulation (in fact, they break them on the screen), until they find one that is good enough, or does not break. Likewise, if one lets a machine try plenty of them out for as long as it takes, at some point one of Wolfram’s cellular automata may strike the right sequence and replicate a complex natural process in full. Which cellular automaton will do the magic? No one can tell at the start. And why did that cellular automaton work, and not another? No one can tell at the end.

2.29 Stephen Wolfram, cellular automaton number 30, at 10, 25, and 250 steps, from Wolfram, A New Kind of Science (Champaign, IL: Wolfram Media, 2002), 27–29 (redesigned by A. Vougia). © Stephen Wolfram, LLC.
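
The mechanism behind figure 2.29 is simple enough to sketch in a few lines of code. The snippet below is only an illustrative sketch, not Wolfram’s own implementation: it encodes the eight-entry lookup table of elementary cellular automaton number 30 and reapplies it, row after row, to a line of cells that starts from a single black square (the function names, the row width, and the periodic boundary are my own simplifying assumptions).

```python
# Minimal sketch of an elementary cellular automaton (Wolfram's rule 30).
# One row of cells (each 0 or 1) is rewritten, step after step, by a single
# trivially simple local rule; the complexity comes from repetition alone.

def step(cells, rule=30):
    """Apply an elementary cellular-automaton rule once, with periodic boundaries."""
    n = len(cells)
    new_row = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # a number from 0 to 7
        new_row.append((rule >> neighborhood) & 1)          # read that bit of the rule
    return new_row

def run(steps=25, width=61, rule=30):
    """Start from a single live cell and print every generation as a row of text."""
    cells = [0] * width
    cells[width // 2] = 1
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells, rule)

if __name__ == "__main__":
    run()  # prints, in ASCII, the first rows of the pattern shown in figure 2.29
```

Nothing in the script is harder than an eight-case lookup table; the intricacy of the printed pattern comes entirely from repetition, which is precisely Wolfram’s point.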

For many empiricist, positivist, and utilitarian thinkers, the primary purpose of science always was to predict the future—our understanding of nature was little more than a bonus, in a sense, and an ancillary means to this end. Computers can now predict things that science cannot explain. Yet, if prediction without causation always had some sulphurous zing and magic aura about it, there is not much that is magical about cellular automata. The same cellular automaton script, when rerun, will always generate exactly the same sequence, no matter how long we let it run; if one of these sequences at some point replicates some formerly indeterminable natural process, then the natural phenomena thus described must be equally replicable. Computational simulation and cellular automata do not expand the ambit of scientific indeterminacy, and they certainly do not extol it; on the contrary, they lessen and lower it, because they offer predictions where modern science didn’t and doesn’t. What some still call indeterminism is in fact a new kind of computational hyper-determinism—quite simply, determinism we do not understand. We use it because it works. And, evidently, we can still try to figure out how and why computers can do that.
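
The determinism just described is easy to verify. The following self-contained check, a sketch under my own assumptions (rule 30, a 61-cell row with periodic boundaries, 250 steps, roughly matching figure 2.29), builds the same history twice and confirms that the two runs coincide bit for bit.

```python
# Self-contained check of the claim above: the same cellular automaton,
# rerun from the same initial row, always generates exactly the same sequence.

def rule30_history(width=61, steps=250):
    """Return every generation of rule 30, started from a single live cell."""
    cells = [0] * width
    cells[width // 2] = 1
    rows = [tuple(cells)]
    for _ in range(steps):
        cells = [
            (30 >> ((cells[i - 1] << 2) | (cells[i] << 1) | cells[(i + 1) % width])) & 1
            for i in range(width)
        ]
        rows.append(tuple(cells))
    return rows

# No randomness is involved anywhere, so the two runs are bit-for-bit identical.
assert rule30_history() == rule30_history()
```

Changing the hard-coded rule or the width of the row changes the pattern, but never the outcome of the comparison.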

As I have argued, this posthuman logic is already ubiquitous in our daily lives and embedded in many technologies we use. Nothing represents the spirit and the letter of this new computational environment more than the “search, don’t sort” Google tagline: humans must do a lot of sorting (call it classification, abstraction, formalization, inferential method, inductive and experimental science, causal laws and laws of nature, mathematical formulas, etc.) in order to find and retrieve things—both literally and figuratively. Computers are just the opposite: they can search without sorting. Humans need a lot of sorting because they can manage only a few data at a time; computers need less sorting—or, indeed, no sorting—because they can manage far more data at all times. To sort, humans must have a view of the world—regardless of which one came first, the worldview or the will to sort; computers need neither.
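
The contrast can be made concrete with a toy example in Python (my own illustration, not Google’s actual method; the data and record counts are arbitrary): the “human” strategy pays up front for sorting so that every later lookup is cheap, while the “machine” strategy skips the catalogue and simply scans everything, every time.

```python
# Toy contrast between "sort, then search" and "search without sorting."
import bisect
import random

records = [random.randrange(10**9) for _ in range(1_000_000)]  # unsorted raw data
target = records[123_456]                                      # a value we know is there

# Search without sorting: a brute-force scan of all the data, every time.
found_by_scanning = any(r == target for r in records)

# Sort first, then search: the small-data strategy of catalogues and indexes.
index = sorted(records)                  # costly preparation, done once
pos = bisect.bisect_left(index, target)  # cheap lookup afterward
found_by_index = pos < len(index) and index[pos] == target

assert found_by_scanning and found_by_index  # both strategies find the record
```

Which strategy is preferable depends only on how much scanning one can afford: for a machine that can sweep through millions of records in a fraction of a second, the index becomes optional.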

Architects adopted digital technologies earlier, and more wholeheartedly, than any other trade, profession, or craft. Since the early 1990s the design professions have been at the forefront of digital innovation, and many key principles of the digital revolution—from digital mass customization to distributed 3-D printing—have been interpreted, developed, and popularized, if not outright invented, by architects and designers. Wolfram’s cellular automata are now the hottest topic in schools of architecture around the world, where students use them to do all kinds of things—just as they did with spline modelers twenty years ago. Now, as then, the use and abuse of these tools leave a visible trace—and that shows in the style of the things we make. But digital designers, as trendsetters and early adopters, are also ideally positioned to capture, interpret, and give visible form to the technologies they use. When Frank Gehry’s Guggenheim Bilbao was inaugurated in 1997, it was immediately recognized as the emblem of a new way of building—and of a new technical age. That building—and a few similar ones, but none with the same force of persuasion—proved to the world that, using digital technologies, we could realize objects that, until a few years before, few architects could have conceived, and no engineer could have built. And if new ideas and new technologies can revolutionize building to such an extent, one may be led to think that the same may be true of any other field of art and industry, and of human society in general. Such is the power of persuasion that architectural forms can wield in some fatidic moments of architectural history. The first digital turn was one such moment. For better or worse, the second digital turn now unfolding may be another.

Notes

I have discussed some of the ideas presented in the introduction and in chapter 2 in one or more of the following articles: “Digital Phenomenologies: From Postmodern Indeterminacy to Big Data and Computation,” GAM 10 (2014): 100–112; “Breaking the Curve: Big Data and Digital Design,” Artforum 52, no. 6 (2014): 168–173; “Big Data and the End of History,” Perspecta 48, Amnesia (2015): 46–60; “The Digital is Our Stuff,” in Fluid Totality, ed. Zaha Hadid and Patrik Schumacher (Basel: Birkhäuser, 2015), 20–25; “The New Science of Form-Searching,” AD 237 (2015): 22–27; “Christian Kerez’s Art of the Incidental,” Arch+ 51 (2016): 70–76; “Excessive Resolution,” AD 244 (2016): 80–83; “The New Science of Making,” in LabStudio: Design Research Between Architecture & Biology, ed. Jenny E. Sabin and Peter Lloyd Jones (London and New York: Routledge, 2017).