1  What Is an Algorithm?

If we want to live with the machine, we must understand the machine, we must not worship the machine.

Norbert Wiener1

Rise of the Culture Machines

Sometime in the late 2000s, our relationship with computers changed. We began carrying devices around in our pockets, peering at them at the dinner table, muttering quietly to them in the corner. We stopped thinking about hardware and started thinking about apps and services. We have come not just to use but to trust computational systems that tell us where to go, whom to date, and what to think about (to name just a few examples). With every click, every terms of service agreement, we buy into the idea that big data, ubiquitous sensors, and various forms of machine learning can model and beneficially regulate all kinds of complex systems, from picking songs to predicting crime. Along the way, an old word has become new again: the algorithm. Either overlooked or overhyped, the algorithm is rarely taken seriously as a key term in the cultural work that computers do for us. This book takes that word apart and puts it back together again, showing how algorithms function as culture machines that we need to learn how to read and understand.

Algorithms are everywhere. They already dominate the stock market, compose music, drive cars, write news articles, and author long mathematical proofs—and their powers of creative authorship are just beginning to take shape. Corporations jealously guard the black boxes running these assemblages of data and process. Even the engineers behind some of the most successful and ubiquitous algorithmic systems in the world—executives at Google and Netflix, for example—admit that they understand only some of the behaviors their systems exhibit. But their rhetoric is still transcendent and emancipatory, striking many of the same techno-utopian notes as the mythos of code as magic when they equate computation with transformational justice and freedom. The theology of computation that Ian Bogost identified is a faith militant, bringing the gospel of big data and disruption to huge swaths of society.

This is the context in which we use algorithms today: as pieces of quotidian technical magic that we entrust with booking vacations, suggesting potential mates, evaluating standardized test essays, and performing many other kinds of cultural work. Wall Street traders give their financial “algos” names like Ambush and Raider, yet they often have no idea how their money-making black boxes work.2 As a keyword in the spirit of cultural critic Raymond Williams,3 the word algorithm frequently encompasses a range of computational processes including close surveillance of user behaviors, “big data” aggregation of the resulting information, analytics engines that combine multiple forms of statistical calculation to parse that data, and finally a set of human-facing actions, recommendations, and interfaces that generally reflect only a small part of the cultural processing going on behind the scenes. Computation comes to have a kind of presence in the world, becoming a “thing” that both obscures and highlights particular forms of what Wendy Hui Kyong Chun calls “programmability,” a notion we will return to in the guise of computationalism below.4

It is precisely this protean nature of computation that both troubles and attracts us. At some times computational systems appear to conform to that standard of discrete “thingness,” like the me of Sumerian myth or a shiny application button on a smartphone screen. At other moments they are much harder to distinguish from broader cultural environments: to what extent are spell-check programs changing diction and grammatical choices through their billions of subtle corrections, and how do we disentangle the assemblage of code, dictionaries, and grammars that underlie them? While the cultural effects and affects of computation are complex, these systems function in the world through instruments designed and implemented by human beings. In order to establish a critical frame for reading cultural computation, we have to begin with those instruments, jammed together in the humble vessel of the algorithm.

Our look at Snow Crash revealed the layers of magic, “sourcery,” and structured belief that underpin the facade of the algorithm in culture today. Now we turn to the engineers and computer scientists who implement computational systems. Rooted in computer science, this version of the algorithm relies on the history of mathematics. An algorithm is a recipe, an instruction set, a sequence of tasks to achieve a particular calculation or result, like the steps needed to calculate a square root or tabulate the Fibonacci sequence. The word itself derives from Abū ʿAbdallāh Muḥammad ibn Mūsā al-Khwārizmī, the famed ninth-century CE mathematician (whose treatise on al-jabr also gave us the word algebra). Algorismus was originally the process of calculating with Hindu-Arabic numerals. Via al-Khwārizmī, the algorithm was associated with the revolutionary concepts of positional notation, the decimal point, and zero.

As the word gained currency in the centuries that followed, “algorithm” came to describe any set of mathematical instructions for manipulating data or reasoning through a problem. The Babylonians used some of the first mathematical algorithms to derive square roots and factor numbers.5 Euclid devised an algorithm for taking two numbers and finding the greatest common divisor they share. Throughout this evolution, the algorithm retained an essential feature that will soon become central to the story: it just works. That is to say, an algorithm reliably delivers an expected result within a finite amount of time (except, perhaps, for those edge cases that fascinate mathematicians and annoy engineers).
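
To see this reliability in its barest form, consider Euclid's procedure rendered as a few lines of Python. This is a minimal illustrative sketch of my own, not a historical artifact; the point is that one repeated step delivers the answer in a guaranteed-finite number of moves:

    def gcd(a: int, b: int) -> int:
        """Euclid's algorithm: the greatest common divisor of two numbers."""
        while b != 0:
            a, b = b, a % b  # replace (a, b) with (b, a mod b) until b runs out
        return a

    print(gcd(1071, 462))  # 21, always, and always in finitely many steps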

Historian Nathan Ensmenger recounts how the academic discipline of computer science coalesced only after its advocates embraced the concept of the algorithm, with one of the field’s founders, Donald Knuth, tracing the field’s origins to al-Khwārizmī in his seminal textbook The Art of Computer Programming.6 The algorithm was an ideal object of study, both easily grasped and endlessly puzzling:

By suggesting that the algorithm was as fundamental to the technical activity of computing as Sir Isaac Newton’s laws of motion were to physics, Knuth and his fellow computer scientists could claim full fellowship with the larger community of scientists.7

And yet, as mathematician Yiannis Moschovakis points out, Knuth’s argument about what algorithms actually are is an extremely rare instance where the question is foregrounded.8 For computer scientists the term remains more of an intuitive, unexamined notion than a delineated logical concept grounded in a mathematical theory of computation.

Thanks in large part to Knuth, the algorithm today is a fundamental concept in computer science, an intellectual keystone typically covered in the introductory Algorithms and Data Structures course for undergraduate majors. Algorithms represent repeatable, practical solutions to problems like factoring a number into its prime components or finding the most efficient pathway through a network. The major focus of contemporary algorithmic research is not whether algorithms work but how efficiently they work, and with what tradeoffs in terms of CPU cycles, memory, and accuracy.

We can distill this pragmatic approach to algorithms down to a single PowerPoint slide. Robert Sedgewick, a leading researcher on computational algorithms, also happened to teach the version of Algorithms and Data Structures that I took as an undergraduate; he calls the algorithm a “method for solving a problem” in his widely circulated course materials.9 This is what I term the pragmatist’s definition: an engineer’s notion of algorithms geared toward defining problems and solutions. The pragmatist’s definition grounds its truth claim in utility: algorithms are fit for a purpose, illuminating pathways between problems and solutions. This is the critical frame that dominates the breakout rooms and workstations of engineers at Google, Apple, Amazon, and other industry giants. As Google describes them: “Algorithms are the computer processes and formulas that take your questions and turn them into answers.”10 For many engineers and technologists, algorithms are quite simply the work, the medium of their labor.

The pragmatic definition lays bare the essential politics of the algorithm, its transparent complicity in the ideology of instrumental reason that digital culture scholar David Golumbia calls out in his critique of computation.11 Of course this is what algorithms do: they are methods, inheriting the inductive tradition of the scientific method and engineering from Archimedes to Vannevar Bush. They solve problems that have been identified as such by the engineers and entrepreneurs who develop and optimize the code. But such implementations are never just code: a method for solving a problem inevitably involves all sorts of technical and intellectual inferences, interventions, and filters.

As an example, consider the classic computer science problem of the traveling salesman: how can one calculate an efficient route through a geography of destinations at various distances from one another? The question has many real-world analogs, such as routing UPS drivers, and indeed that company has invested hundreds of millions of dollars in a 1,000-page algorithm called ORION that bases its decisions in part on traveling salesman heuristics.12 And yet the traveling salesman problem imagines each destination as an identical point on a graph, while UPS drop-offs vary greatly in the amount of time they take to complete (hauling a heavy package with a handcart, say, or avoiding the owner’s terrier). ORION’s algorithmic model of the universe must balance between particular computational abstractions (each stop is a featureless, fungible point), the lived experience and feedback of human drivers, and the data the company has gathered about the state of the world’s stop signs, turn lanes, and so on. The computer science question of optimizing paths through a network must share the computational stage with the autonomy of drivers, the imposition of quantified tracking on micro-logistical decisions like whether to make a right or left turn, and the unexpected interventions of other complex human systems, from traffic jams to pets.
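
To make that tension concrete, here is a deliberately naive Python sketch of one classic traveling salesman heuristic, nearest neighbor; the stops and coordinates are invented, and ORION's actual methods are proprietary and vastly more elaborate:

    import math

    # Hypothetical delivery stops as (x, y) points: each one featureless
    # and fungible, exactly the abstraction questioned above.
    stops = {"depot": (0, 0), "A": (2, 3), "B": (5, 1), "C": (1, 6), "D": (4, 4)}

    def nearest_neighbor_route(start="depot"):
        """Greedy heuristic: always drive to the closest unvisited stop.
        Fast and usually serviceable, but with no guarantee of optimality."""
        route, remaining = [start], set(stops) - {start}
        while remaining:
            here = stops[route[-1]]
            nearest = min(remaining, key=lambda s: math.dist(here, stops[s]))
            route.append(nearest)
            remaining.remove(nearest)
        return route

    print(nearest_neighbor_route())  # ['depot', 'A', 'D', 'B', 'C']

Everything the sketch leaves out, from the terrier to the handcart, is the shadow of the abstraction.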

ORION, with its 1,000-page “solution” to this tangled problem, is of course a process or system in continued evolution rather than an elegant equation for the balletic coordination of brown trucks. Its equations and computational models of human behavior are just one example among millions of algorithms attempting to regularize and optimize complex cultural systems. The pragmatist’s definition achieves clarity by constructing an edifice (a cathedral) of tacit knowledge, much of it layered in systems of abstraction like the traveling salesman problem. At a certain level of cultural success, these systems start to create their own realities as well: various players in the system begin to alter their behavior in ways that short-circuit the system’s assumptions. Internet discussion boards catalog complaints about delivery drivers who do not bother to knock and instead leave door tags claiming that the resident was not at home. These shortcuts work precisely because they are invisible to systems like ORION, allowing the driver to save valuable seconds and perhaps catch up on all those other metrics that are being tracked on a hectic day when the schedule starts to slip.

Many of the most powerful corporations in existence today are essentially cultural wrappers for sophisticated algorithms, as we will see in the following chapters. Google exemplifies a company, indeed an entire worldview, built on an algorithm, PageRank. Amazon’s transformational algorithm involved not just computation but logistics, finding ways to outsource, outmaneuver, and outsell traditional booksellers (and later, sellers of almost every kind of consumer product). Facebook developed the world’s most successful social algorithm for putting people in contact with one another. These are just a few examples of powerful, pragmatic, lucrative algorithms that are constantly updated and modified to cope with the messy cultural spaces they attempt to compute.

We live, for the most part, in a world built by algorithmic pragmatists. Indeed, the ambition and scale of corporate operations like Google mean that their definitions of algorithms—what the problems are, and how to solve them—can profoundly change the world. Their variations of pragmatism then inspire elaborate responses and counter-solutions, or what communication researcher Tarleton Gillespie calls the “tacit negotiation” we perform to adapt ourselves to algorithmic systems: we enunciate differently when speaking to machines, use hashtags to make updates more machine-readable, and describe our work in search engine-friendly terms.13

The tacit assumptions lurking beneath the pragmatist’s definition are becoming harder and harder to ignore. The apparent transparency and simplicity of computational systems are leading many to see them as vehicles for unbiased decision-making. Companies like UpStart and ZestFinance view computation as a way to judge financial reliability and make loans to people who fail more traditional algorithmic tests of credit-worthiness, like credit scores.14 These systems essentially deploy algorithms to counter the bias of other algorithms, or more cynically to identify business opportunities missed by others. The companies behind these systems are relatively unusual, however, in acknowledging the ideological framing of their business plans, and explicitly addressing how their systems attempt to judge “character.”

But if these are reflexive counter-algorithms designed to capitalize on systemic inequities, they are responding to broader cultural systems that typically lack such awareness. The computational turn means that many algorithms now reconstruct and efface legal, ethical, and perceived reality according to mathematical rules and implicit assumptions that are shielded from public view. As legal ethicist Frank Pasquale writes about algorithms for evaluating job candidates:

Automated systems claim to rate all individuals the same way, thus averting discrimination. They may ensure some bosses no longer base hiring and firing decisions on hunches, impressions, or prejudices. But software engineers construct the datasets mined by scoring systems; they define the parameters of data-mining analyses; they create the clusters, links, and decision trees applied; they generate the predictive models applied. Human biases and values are embedded into each and every step of development. Computerization may simply drive discrimination upstream.15

As algorithms move deeper into cultural space, the pragmatic definition gets scrutinized more closely according to critical frames that reject the engineering rubric of problem and solution, as Pasquale, Golumbia, and a growing number of algorithmic ethics scholars have argued. The cathedral of abstractions and embedded systems that allow the pragmatic algorithms of the world to flourish can be followed down to its foundations in symbolic logic, computational theory, and cybernetics, where we find a curious thing among that collection of rational ideas: desire.

From Computation to Desire

What are the truth claims underlying the engineer’s problems and solutions, or the philosophy undergirding the technological magic of sourcery? They depend on the protected space of computation, the logical, procedural, immaterial space where memory and process work according to very different rules from material culture. The pragmatist’s approach gestures toward, and often depends on, a deeper philosophical claim about the nature of the universe. We need to understand that claim as the grounding for the notion of “effective computability,” a transformational concept in computer science that fuels algorithmic evangelism today. In her book My Mother Was a Computer, media theorist N. Katherine Hayles labels this philosophical claim the Regime of Computation.16 This is another term for what I sometimes refer to as the age of the algorithm: the era dominated by the figure of the algorithm as an ontological structure for understanding the universe. We can also think of this as the “computationalist definition,” which extends the pragmatist’s notion of the algorithm and informs the core business models of companies like Google and Amazon.

In its softer version, computationalism argues that algorithms have no ontological claim to truly describing the world but are highly effective at solving particular technical problems. The engineers are agnostic about the universe as a system; all they care about is accurately modeling certain parts of it, like the search results that best correspond to certain queries or the books that users in Spokane, Washington, are likely to order today. As Pasquale and a host of other digital culture critics from Jaron Lanier to Evgeny Morozov have argued, even the implicit claims to efficiency and “good-enough” rationalism at the heart of the engineer’s definition of algorithms have a tremendous impact on policy, culture, and the practice of everyday life, because the compromises and analogies of algorithmic approximations tend to efface everything that they do not comprehend.17

The expansion of the rhetoric of computation easily bleeds into what Hayles calls the “hard claim” for computationalism. In this argument algorithms do not merely describe cultural processes with more or less accuracy: those processes are themselves computational machines that can be mathematically duplicated (given enough funding). According to this logic it is merely a matter of time and applied science before computers can simulate election outcomes or the future price of stocks to any desired degree of accuracy. Computer scientist and polymath Stephen Wolfram lays out the argument in his ambitious twenty-year undertaking, A New Kind of Science:

The crucial idea that has allowed me to build a unified framework for the new kind of science that I describe in this book is that just as the rules for any system can be viewed as corresponding to a program, so also its behavior can be viewed as corresponding to a computation.18

Wolfram’s principle of computational equivalence makes the strong claim that all complex systems are fundamentally computational and, as he hints in the connections he draws between his work and established fields like theoretical physics and philosophy, he believes that computationalism offers “a serious possibility that [a fundamental theory for the universe] can actually be found.”19 This notion that the computational metaphor could unlock a new paradigm of scientific inquiry carries with it tremendous implications about the nature of physical systems, social behavior, and consciousness, among other things, and at its most extreme serves as an ideology of transcendence for those who seek to use computational systems to model and understand the universe.

Citing Wolfram, the biophysicist Harold Morowitz, and the computer scientist Edward Fredkin, Hayles traces the emergence of an ideology of universal computation based on the science of complexity: if the universe is a giant computer, it is not only efficient but intellectually necessary to develop computational models for cultural problems like evaluating loan applications or modeling consciousness. The models may not be perfect now but they will improve as we use them, because they employ the same computational building blocks as the system they emulate. On a deeper level, computationalism suggests that our knowledge of computation will answer many fundamental questions: computation becomes a universal solvent for problems in the physical sciences, theoretical mathematics, and culture alike. The quest for knowledge becomes a quest for computation, a hermeneutics of modeling.

But of course models always compress or shorthand reality. If the anchor point for the pragmatist’s definition of the algorithm is its indefinable flexibility based on tacit understanding about what counts as a problem and a solution, the anchor point here is the notion of abstraction. The argument for computationalism begins with the Universal Turing Machine, mathematician Alan Turing’s breathtaking vision of a computer that can complete any finite calculation simply by reading and writing to an infinite tape marked with 1s and 0s, moving the tape forward or backward based on the current state of the machine. Using just this simple mechanism one could emulate any kind of computer, from a scientific calculator finding the area under a curve to a Nintendo moving Mario across a television screen. In other words, this establishes a computational “ceiling” where any Turing computer can emulate any other: the instructions may proceed more slowly or quickly, but are mathematically equivalent.
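
A minimal Python sketch suggests just how little machinery Turing's vision requires. The simulator below is my own illustration, and the "flip" program it runs is a toy example, not anything of Turing's:

    def run_turing_machine(program, tape, state="start", head=0, max_steps=1000):
        """A bare Turing machine: read a symbol, write a symbol, move the
        head one cell, change state, until the machine reaches 'halt'."""
        cells = dict(enumerate(tape))  # a sparse dict stands in for the infinite tape
        for _ in range(max_steps):
            if state == "halt":
                break
            write, move, state = program[(state, cells.get(head, "_"))]
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells))

    # Toy program: invert every bit, left to right, and halt at the first blank.
    flip = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }
    print(run_turing_machine(flip, "1011_"))  # prints 0100_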

The Universal Turing Machine is a thought experiment that determines the bounds of what is computable: Turing and his fellow mathematician Alonzo Church were both struggling with the boundary problems of mathematics. One framing, posed by mathematician David Hilbert and known as the Entscheidungsproblem, asked whether a definite procedure could decide the provability of any mathematical statement; Turing recast it as the question of whether it is possible to predict when, or if, a particular program will halt, ending its calculations with or without an answer. Their responses to Hilbert, now called the Church–Turing thesis, define algorithms for theorists in a way that is widely accepted but ultimately unprovable: a calculation with natural numbers, or what most of us know as whole numbers, is “effectively computable” (that is, given enough time and pencils, a human could do it) only if the Universal Turing Machine can do it. The thesis uses this informal definition to unite three different rigorous mathematical theses about computation (Turing machines, Church’s lambda calculus, and mathematician Kurt Gödel’s concept of recursive functions), translating their specific mathematical claims into a more general boundary statement about the limits of computational abstraction.
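
Turing's negative answer to Hilbert can be gestured at in a few lines: suppose a perfect halting oracle existed, then build a program that contradicts it. The Python below is a paraphrase of that diagonal argument, with the oracle left as an avowedly impossible stub:

    def halts(program, argument):
        """Hypothetical oracle: True iff program(argument) eventually halts.
        Turing proved no such function can exist; this stub marks its place."""
        raise NotImplementedError("undecidable in general")

    def contrarian(program):
        """Do the opposite of whatever the oracle predicts."""
        if halts(program, program):
            while True:  # the oracle said "halts," so loop forever
                pass
        return           # the oracle said "loops forever," so halt at once

    # Asking the oracle about contrarian(contrarian) forces a contradiction:
    # the program halts exactly when the oracle says it never will.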

In another framing, as David Berlinski argues in his mathematical history The Advent of the Algorithm, the computability boundary that Turing, Gödel, and Church were wrestling with was also an investigation into the deep foundations of mathematical logic.20 Gödel proved, to general dismay, that no symbolic logical system powerful enough for arithmetic could prove its own consistency, or prove every true statement it can express, using only statements within the system. The truth claim or validation of such a system would always depend on some external presumption or assertion of logical validity: turtles all the way down. Church grappled with this problem and developed the lambda calculus, a masterful demonstration of abstraction that served as the philosophical foundation for numerous programming languages decades after his work.21 As Berlinski puts it, Turing had “an uncanny and almost unfailing ability to sift through the work of his time and in the sifting discern the outlines of something far simpler than the things that other men saw.”22 In other words, he possessed a genius for abstraction, and his greatest achievement in this regard was the Turing machine.

Turing’s simple imaginary machine is an elegant mathematical proof for universal computation, but it is also an ur-algorithm, an abstraction generator. The mathematical equivalence of Church and Turing’s work quickly suggested that varying proofs of effective computability (there are now over thirty) all gesture toward some fundamental universal truth. But every abstraction has a shadow, a puddled remainder of context and specificity left behind in the act of lifting some idea to a higher plane of thought. The Turing machine leaves open the question of what “effectively computable” might really mean in material reality, where we leave elegance and infinite tapes behind. As it has evolved from a thought experiment to a founding tenet of computationalism (and the blueprint for the computational revolution of the twentieth and twenty-first centuries), the Church–Turing thesis has developed a gravitational pull, a tug many feel to organize the universe according to its logic. The concept of universal computation encodes at its heart an intuitive notion of “effective”: achievable in a finite number of steps, and reaching some kind of desired result. From the beginning, then, algorithms have encoded a particular kind of abstraction, the abstraction of the desire for an answer. The spectacular clarity and rigor of these formative proofs in computation exist in stark contrast to the remarkably ill-defined way that the term algorithm is deployed in the field of computer science and elsewhere.23

This desire encoded in the notion of effectiveness is typically obscured in the regime of computation, but the role of abstraction is celebrated. The Universal Turing Machine provides a conceptual platform for uniting all kinds of computing: algorithms for solving a set of problems in particle physics might suddenly be useful in genetics; network analysis can be deployed to analyze and compare books, business networks, and bus systems. Abstraction itself is one of the most powerful tools the Church–Turing thesis—and computation in general—gives us, enabling platform-agnostic software and the many metaphors and visual abstractions we depend on, like the desktop user interface.

Abstraction is the ladder Wolfram et al. use to climb from particular computational systems to the notion of universal computation. Many complex systems demonstrate computational features or appear to be computable. If complex systems are themselves computational Turing Machines, they are therefore equivalent: weather systems, human cognition, and most provocatively the universe itself.24 The grand problems of the cosmos (the origins thereof, the relationship of time and space) and the less grand problems of culture (box office returns, intelligent web searching, natural language processing) are irreducible but also calculable: they are not complicated problems with simple answers but rather simple problems (or rule-sets) that generate complicated answers. These assumptions open the door to a mathesis universalis, a language of science that the philosophers Gottfried Wilhelm Leibniz, René Descartes, and others presaged as a way to achieve perfect understanding of the natural world.25 This perfect language would exactly describe the universe through its grammar and vocabulary, becoming a new kind of rational magic for scientists that would effectively describe and be the world.

Effective computability continues to be an alluring, ambiguous term today, a fault line between the pragmatist and computationalist definition of algorithms. I think of this as computation’s first seduction, rooted at the heart of the Church–Turing thesis. It has expanded its sway with the growth of computing power, linking back to the tap root of rationalism, gradually becoming a deeper, more romantic mythos of a computational ontology for the universe. The desire to make the world effectively calculable drives many of the seminal moments of computer history, from the first ballistics computers replacing humans in mid-century missile defense to Siri and the Google search bar.26 It is the ideology that underwrites the age of the algorithm, and its seductive claims about the status of human knowledge and complex systems in general form the central tension in the relationship between culture and culture machines.

To understand the consequences of effective computability, we need to follow three interwoven threads as the implications of this idea work themselves out across disciplines and cultural fields: cybernetics, symbolic language, and technical cognition.

Thread 1: Embodying the Machine

“Effective computability” is an idea with consequences not just for our conception of humanity’s place in the universe but also for how we understand biological, cultural, and social systems. Leibniz’s vision of a mathesis universalis is seductive because it promises that a single set of intellectual tools can make all mysteries accessible, from quantum mechanics to the circuits inside the human brain. After World War II, a new field emerged to pursue that promise, struggling to align mathematics and materiality, seeking to map out direct correlations between computation and the physical and social sciences. In its heyday cybernetics, as the field was known, was a sustained intellectual argument about the place of algorithms in material culture—a debate about the politics of implementing mathematical ideas, or claiming to find them embodied, in physical and biological systems.

The polymathic mathematician Norbert Wiener published the founding text of this new discipline in 1948, calling it Cybernetics; or Control and Communication in the Animal and the Machine. Wiener names Leibniz the patron saint of cybernetics: “The philosophy of Leibniz centers about two closely related concepts—that of a universal symbolism and that of a calculus of reasoning.”27 As the book’s title suggests, the aim of cybernetics in the 1940s and 1950s was to define and implement those two ideas: an intellectual system that could encompass all scientific fields, and a means of quantifying change within that system. Using them, the early cyberneticians sought to forge a synthesis between the nascent fields of computer science, information theory, physics, and many others (indeed, Wiener nominated his patron saint in part as the last man to have “full command of all the intellectual activity of his day”).28 The vehicle for this synthesis was, intellectually, the field of information theory and the ordering features of communication between different individual and collective entities, and pragmatically, the growing power of mechanical and computational systems to measure, modulate, and direct such communications.

On a philosophical level, Wiener’s vision of cybernetics depended on the transition from certainty to probability in the twentieth century.29 The advances of Einsteinian relativity and quantum mechanics suggested that uncertainty, or indeterminacy, was fundamental to the cosmos and that observation always affected the system being observed. This marked the displacement of a particular rationalist ideal of the Enlightenment, the notion that the universe operated by simple, all-powerful laws that could be discovered and mastered. Instead, as the growing complexity of mathematical physics in the twentieth and twenty-first centuries has revealed, the closer we look at a physical system, the more important probability becomes. It is unsettling to abandon the comfortable solidity of a table, that ancient prop for philosophers of materialism, and replace it with a probabilistic cloud of atoms. And yet only with probability—more important, a language of probability—can we begin to describe our relativistic universe.

But far more unsettling, and the central thesis of the closely allied field of information theory, is the notion that probability applies to information as much as to material reality. By framing information as uncertainty, as surprise, as unpredicted new data, mathematician Claude Shannon created a quantifiable measurement of communication.30 Shannon’s framework has informed decades of work in signal processing, cryptography, and several other fields, but its starkly limited view of what counts as information has become a major influence in contemporary understandings of computational knowledge. This measurement of information is quite different from the common cultural understanding of knowledge, though it found popular expression in cybernetics, particularly in Wiener’s general audience book The Human Use of Human Beings. This is where Wiener lays one of the cornerstones for the cathedral of computation: “To live effectively is to live with adequate information. Thus, communication and control belong to the essence of man’s inner life, even as they belong to his life in society.”31 In its limited theoretical sense, information provided a common yardstick for understanding any kind of organized system; in its broader public sense, it became the leading edge of computationalism, a method for quantifying patterns and therefore uniting biophysical and mathematical forms of complexity.
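
Shannon's measure fits in a few lines of Python. In this sketch (the sample strings are mine), a message's information is computed entirely from the probabilities of its symbols, with no reference to what, if anything, they mean:

    import math
    from collections import Counter

    def bits_per_symbol(message: str) -> float:
        """Shannon entropy, H = sum of p * log2(1/p) over observed symbol
        frequencies: the average surprise per symbol, measured in bits."""
        total = len(message)
        return sum((n / total) * math.log2(total / n)
                   for n in Counter(message).values())

    print(bits_per_symbol("aaaaaaaa"))  # 0.0 bits: perfectly predictable, no news
    print(bits_per_symbol("abcdefgh"))  # 3.0 bits: every symbol a fresh surprise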

As Wiener’s quote suggests, the crucial value of information for cybernetics was in making decisions.32 Communication and control became the computational language through which biological systems, social structures, and physics could be united. As Hayles argues in How We Became Posthuman, theoretical models of biophysical reality like the early McCulloch–Pitts Neuron (which the logician Walter Pitts proved to be computationally equivalent to a Turing machine) allowed cybernetics to establish correlations between computational and biological processes at paradigmatic and operational levels and lay claim to being what informatics scholar Geoffrey Bowker calls a “universal discipline.”33 Via cybernetics, information was the banner under which “effective computability” expanded to vast new territories, first presenting the tantalizing prospect that Wolfram and others would later reach for as universal computation.34 As early as The Human Use of Human Beings, Wiener popularized these links between the Turing machine, neural networks, and learning in biological organisms, work that is now coming to startling life in the stream of machine learning breakthroughs announced by the Google subsidiary DeepMind over the past few years.
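
The McCulloch–Pitts unit itself is austere: it fires when the weighted sum of its binary inputs crosses a threshold, and nothing more. A minimal sketch, with weights and thresholds chosen purely for illustration:

    def mp_neuron(inputs, weights, threshold):
        """McCulloch-Pitts neuron: output 1 if the weighted sum of binary
        inputs meets the threshold, otherwise 0."""
        return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

    # Single units already compute logic gates, the observation that let
    # cybernetics treat networks of neurons as computing machines.
    AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
    OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)
    NOT = lambda a:    mp_neuron([a],    [-1],   threshold=0)

    print(AND(1, 1), OR(0, 1), NOT(1))  # 1 1 0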

This is Wiener ascending the ladder of abstraction, positioning cybernetics as a new Leibnizian mathesis universalis capable of uniting a variety of fields. Central to this ascent is the notion of homeostasis, or the way that a system responds to feedback to preserve its core patterns and identity. A bird maintaining altitude in changing winds, a thermostat controlling temperature in a room, and the repetition of ancient myths through the generations are all examples of homeostasis at work. More provocatively, Wiener suggests that homeostasis might be the same thing as identity or life itself, if “the organism is seen as message. Organism is opposed to chaos, to disintegration, to death, as message is to noise.”35 This line of argument evolved into the theory of autopoiesis proposed by philosophers Humberto Maturana and Francisco Varela in the 1970s, part of the second wave of cybernetics, which adapted the pattern-preservation of homeostasis more fully into the context of biological systems. Describing organisms as information also suggests the opposite, that information has a will to survive, that as Stewart Brand famously put it, “information wants to be free.”36

Like Neal Stephenson’s programmable minds, like the artificial intelligence researchers who seek to model the human brain, this notion of the organism as message reframes biology (and the human) to exist at least aspirationally within the boundary of effective computability. Cybernetics and autopoiesis lead to complexity science and efforts to model these processes in simulation. Mathematician John Conway’s game of life, for example, seeks to model precisely this kind of spontaneous generation of information, or seemingly living or self-perpetuating patterns, from simple rule-sets. It, too, has been shown to be mathematically equivalent to a Turing machine: mathematician Paul Rendell designed a configuration of the game that implements a working Turing machine (figure 1.1).37
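
Conway's rules fit in a handful of lines, which is precisely the point. Here is a minimal Python sketch of one generation of the game, seeded with the famous "glider," a self-perpetuating pattern that crawls diagonally across the grid:

    from collections import Counter

    def step(live_cells):
        """One generation of Conway's game of life: a live cell survives with
        two or three live neighbors; a dead cell with exactly three is born."""
        neighbor_counts = Counter(
            (x + dx, y + dy)
            for x, y in live_cells
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        return {cell for cell, n in neighbor_counts.items()
                if n == 3 or (n == 2 and cell in live_cells)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = step(glider)
    print(glider)  # the same five-cell shape, shifted one cell diagonally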


Figure 1.1 “This is a Turing Machine implemented in Conway’s Game of Life.” Designed by Paul Rendell.

In fact, if we accept the premise of organism as message, of informational patterns as a central organizing logic for biological life, we inevitably come to depend on computation as a frame for exploring that premise. Wiener’s opening gambit of the turn from certainty to probability displaced but did not eliminate the old Enlightenment goals of universal, consilient knowledge. That ambition has now turned to building the best model, the finest simulation of reality’s complex probabilistic processes. Berlinski observed the same trend in the distinction between analytic and computational calculus, noting how the discrete modeling of intractable differential equations allows us to better understand how complex systems operate, but always at the cost of accepting a temporally and numerically discrete, approximated view of things.38 The embrace of cybernetic theory has increasingly meant an embrace of computational simulations of social, biological, and physical systems as central objects of study.

Hayles traces this plumb line in cybernetics closely in How We Became Posthuman, arguing that the Macy Conferences, where Wiener and his collaborators hammered out the vision for a cybernetic theory, also marked a concerted effort to erase the embodied nature of information through abstraction. In the transcripts, letters, and other archival materials stemming from these early conversations, she argues that the synthesizing ambitions of cybernetics led participants to shy away from considerations of reflexivity and the complications of embodiment, especially human embodiment, as they advanced their theory. But, as Hayles puts it, “In the face of such a powerful dream, it can be a shock to remember that for information to exist, it must always be instantiated in a medium.”39

While Hayles’s reading of cybernetics pursues the field’s rhetorical ascent of the ladder of abstraction as she frames the story of “how information lost its body,” there is a second side to the cybernetic moment in the 1940s and 1950s, one that fed directly into the emergence of Silicon Valley and the popular understanding of computational systems as material artifacts. We can follow Wiener back down the ladder of abstraction, too, through a second crucial cybernetic term, the notion of “feedback.” The feedback loop, as Hayles notes, is of interest to Wiener primarily as a universal intellectual model for understanding how communication and control can be generalized across different systems.40 But the feedback loop was also a crucial moment of implementation for cybernetics, where the theoretical model was tested through empirical experiments and, perhaps more important, demonstrations.

Consider Wiener’s “moth” or “bedbug,” a single machine designed to demonstrate a feedback loop related to seeking or avoiding light. Wiener worked with electrical engineer Jerry Wiesner to create the machine, a simple mechanical apparatus with one photocell facing to the right and another to the left, with their inputs directing a “tiller” mechanism that would aim the cart’s wheels as they moved. The demonstration achieved its intended purpose of showing lifelike behavior from a simple feedback mechanism, creating a seeming existence proof both of the similarity of mechanical and biophysical control mechanisms and of the efficacy of cybernetics as the model for explaining them. In fact, as historian Ronald Kline describes, the entire enterprise was a public relations stunt, the construction of the robot financed by Life magazine, which planned to run an article on cybernetics.41 Wiener’s demonstration machine presaged future spectacles of human–machine interaction like early Silicon Valley icon Douglas Engelbart’s “mother of all demos,” which first showcased several aspects of a functional personal computer experience in 1968.
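
The loop itself can be caricatured in a few lines of Python; every number here is invented, and the sketch stands in for a rolling mechanical apparatus, not Wiener's actual circuitry:

    def steer(left_photocell, right_photocell, gain=0.5):
        """One step of negative feedback: the difference between the two
        light sensors becomes the turn applied to the tiller."""
        return gain * (right_photocell - left_photocell)  # positive = turn right

    # A toy run: the lamp sits off to the right, so each turn toward it
    # shrinks the error the photocells report on the next pass.
    left, right = 0.2, 0.8
    for tick in range(4):
        turn = steer(left, right)
        print(f"tick {tick}: turn {turn:+.2f}")
        left, right = left + turn / 2, right - turn / 2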


Figure 1.2 Norbert Wiener and his “moth” circa 1950. Alfred Eisenstaedt / The LIFE Picture Collection / Getty Images.

The theoretical aspirations of cybernetics were always dependent on material implementation, a fact that has challenged generations of artificial intelligence researchers pursuing the platonic ideal of neural networks that effectively model the human mind.42 Kline reports that Life never ran photos of Wiener’s moth because an editor felt the machine “illustrated the analogy between humans and machines by modeling the nervous system, rather than showing the human characteristics of computers, which was Life’s objective.”43 In the end, Wiener had built a bug. The material context of the moth included not just a functioning feedback mechanism on wheels but the cultural aperture through which that construct would be viewed. In implementation, the mechanical feedback loop was overshadowed by an intellectual one, the relationship between a public scientist and his editors at Life. As it turned out, they were less interested in Wiener’s argument, that feedback mechanisms could be computationally and mechanically modeled, than they were in searching out the human in the machine.

Thread 2: Metaphors for Magic

More than anything else, cybernetics was an attempt to create a new controlling metaphor for communication, one that integrated technological, biological, and social forms of knowledge. The story of Wiener’s moth illustrates the hazards of this approach: defining a controlling metaphor for communication, and by extension for knowledge, requires a deep examination of how language itself can shape both ideas and reality. The cybernetic vision of a unified biological and computational understanding of the world has never left us, continuing to reappear in the technical and critical metaphors we use to manipulate and understand computational systems. Chun explores the deeper implications of this persistent interlacing of computational and biological metaphors for code in Programmed Visions, demonstrating the interconnections of research into DNA and computer programming, and how those metaphors open up the interpretive problem of computation. For Chun the key term is “software,” a word she uses to encompass many of the same concerns I explore here in the context of the algorithm.

Programmed Visions draws a direct link between the notion of fungible computability reified by the Turing machine and the kinds of linguistic magic that have come to define so many of our computational experiences:

Software is unique in its status as metaphor for metaphor itself. As a universal imitator/machine, it encapsulates a logic of general substitutability; a logic of ordering and creative, animating disordering. Joseph Weizenbaum has argued that computers have become metaphors for “effective procedures,” that is, for anything that can be solved in a prescribed number of steps, such as gene expression and clerical work.44

With the “logic of general substitutability,” software has become a thing, Chun argues, embodying the central function of magic—the manipulation of symbols in ways that impact the world. This fundamental alchemy, the mysterious fungibility of sourcery, reinforces a reading of the Turing machine as an ur-algorithm that has been churning out effective computability abstractions in the minds of its “users” for eighty years. The “thing” that software has become is the cultural figure of the algorithm: instantiated metaphors for effective procedures. Software is like Bogost’s cathedral of computation, Chun argues, “a powerful metaphor for everything we believe is invisible yet generates visible effects, from genetics to the invisible hand of the market, from ideology to culture.”45 Like the crucifix or a bell-tower signaling Sunday mass, software is ubiquitous and mysterious even when it is obvious, manifesting in familiar forms that are only symbolic representations of the real work it does behind the scenes.

The elegant formulation of software as a metaphor for metaphor, paired with Chun’s quotation of Weizenbaum—the MIT computer scientist who created an alarmingly successful algorithmic psychotherapist called ELIZA in the 1960s—draws together cybernetics and magic through the notion that computers themselves have become metaphors for the space of effective computability. The algorithm is not a space where the material and symbolic orders are contested, but rather a magical or alchemical realm where they operate in productive indeterminacy. Algorithms span the gap between code and implementation, between software and experience.

In this light, computation is a universal solvent precisely because it is both metaphor and machine. Like Wiener’s robotic moth, the implemented algorithm is on the one hand an intellectual gesture (“Hello, world!”), a publicity stunt, and on the other a functioning system that embeds material assumptions about perception, decision-making, and communication in its construction. For example, think of the humble progress bar. When a new piece of software presents an indicator allegedly graphing the pace of installation, that code might well be a bit of magic (the status of the bar holding little relation to the actual work going on behind the scenes). But that familiar inching bar is also a functional reality for the user because no matter how fictitious the “progress” being mapped, nothing else is going to happen until the bar hits 100 percent—the illusion dictates reality. The algorithm of the progress bar depends not only on the code generating it but the cultural calculus of waiting itself, on a user seeking feedback from the system, and on the opportunity—increasingly capitalized on—to show that user other messages, entertainments, or advertising during the waiting phase.
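
A deliberately cynical sketch makes the point. In the Python below, the on-screen "progress" advances on a fixed timer with no connection to any real work; the pacing and names are invented:

    import time

    def fake_progress_bar(estimated_seconds=3, width=30):
        """Advance the bar on a schedule rather than on work actually done:
        the inching bar is a story told to the waiting user."""
        start = time.monotonic()
        while (elapsed := time.monotonic() - start) < estimated_seconds:
            fraction = min(elapsed / estimated_seconds, 1.0)
            filled = int(fraction * width)
            print(f"\r[{'#' * filled}{'.' * (width - filled)}] {fraction:4.0%}",
                  end="", flush=True)
            time.sleep(0.1)
        print(f"\r[{'#' * width}] 100%")  # only now does anything else happen

    fake_progress_bar()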

As our generally unthinking acceptance of the progress bar demonstrates, we are primed to accept these magical calculations on multiple levels. We believe in the power of code as a set of magical symbols linking the invisible and visible, echoing our long cultural tradition of logos, or language as an underlying system of order and reason, and its power as a kind of sourcery. We believe in the elegant abstractions of cybernetics and, ultimately, the computational universe—that algorithms embody and reproduce the mathematical substrate of reality in culturally readable ways. This is what it means to say that an algorithm is a culture machine: it operates both within and beyond the reflexive barrier of effective computability, producing culture at a macro-social level at the same time as it produces cultural objects, processes, and experiences.

Just because we are growing more familiar, more intimately entangled with these culture machines, however, does not mean we understand the nature of their magic any more deeply. Weizenbaum argues that even for those closest to the mysteries, the programmers and developers who directly implement algorithms in code,

instrumental reason has made out of words a fetish surrounded by black magic. And only the magicians have the rights of the initiated. Only they can say what words mean. And they play with words and they deceive us.46

Chun extends Weizenbaum’s reference to fetish, arguing that the fetish of source code has only increased the perceived occult power of computation and the programmers who wield it. But, as she points out soon afterward, there is a channel through which we attempt to reconstruct the invisible working of the mechanism.

The fact that there is an algorithm, a meaning intended by code (and thus in some way knowable), sometimes structures our experience with programs. When we play a game, we arguably try to reverse engineer its algorithm or at the very least link its actions to its programming, which is why all design books warn against coincidence or random mapping, since it can induce paranoia in its users. That is, because an interface is programmed, most users treat coincidence as meaningful. To the user, as with the paranoid schizophrenic, there is always meaning: whether or not the user knows the meaning, s/he knows that it regards him or her.47

This is a quest for knowledge where the game substitutes for the world. Chun reveals another form of magic—an interpolation of purpose or meaning behind the informational experiences of computation. When we attempt to penetrate the mysteries of the interface, we are usually far less interested in how the experience works (where the bits are stored, how the pixels are rendered) than we are in why. If software is a metaphor for metaphors, the algorithm becomes the mechanism of translation: the prism or instrument by which the eternally fungible space of effective computability is focalized and instantiated in a particular program, interface, or user experience. With our apophenia, our desperate hunt for meaningful patterns, we try to peer through the algorithm to catch a glimpse of the computation behind it.

This is the purest expression of the fundamental magic of language itself as a system for effectively transmitting meaning and experience across the gulf between minds. As the instrument for spanning computational and cultural spaces, the algorithm also serves as a bridge that can, at times, allow traffic in both directions: meaning, or at least the promise of meaning, and an avenue for interpretation. The search for that meaning behind the screen is what Berlinski calls the search for “intelligence on alien shores,” an idea he pursues to the unexpected conclusion that algorithmic intelligence might support an argument for the universe’s intelligent design.48 This is a bridge too far, but the argument underscores the deep links between the cathedral and computation. Abstraction from the patterns, games, and politics of code to the discovery of meanings behind them is the essential move of algorithmic reading, which we return to below. Chun’s fetishism and Berlinski’s intelligent design have something in common, then: an argument that the central magic of computation depends on a quest for meaning, for understanding, that depends on particular forms of human cognition and recognition.

Thread 3: How to Think about Algorithms

Following the threads of embodied computation and the power of language to their conclusions brings us to the most challenging, and perhaps most interesting, question of all: how the mind itself grapples with effective computability. Our fascination with complexity and the search for meaning in patterns underlies some of our most entrenched myths about how we relate to our machines and tools on a cognitive level. This is what makes stories like Snow Crash so compelling: they address a deep-seated desire to fuse internal and external structures of order and meaning. A universal operating system for the human mind carries with it all sorts of implications for how information can be transmuted into knowledge both within and beyond our embodied physical selves—knowledge that, as in cybernetics, translates precisely between computational and biological information systems. Variations on this idealized link between computation and cognition run in both directions, from instantly downloading the knowledge of kung fu in The Matrix to the vision advanced by advocates of the singularity for uploading consciousness onto a digital computer. Those extremes are so fantastic, however, that they sometimes obscure the many ways that algorithms are already changing how we think.

Technological interfaces with consciousness are not only the stuff of science fiction. We use all sorts of tools to enhance or modify our mental experience of the world. These can be algorithmic systems, like those sophisticated notifications and nudges smartphones can deploy to encourage their owners to exercise more or depart on time for an upcoming meeting. Or they can be very simple, like a blind man using a cane to navigate. Indeed, as Plato famously argued in the Phaedrus, writing itself is such a technology, one he feared would diminish the mental faculties of those who depended on it instead of their own powers of memory and understanding.49 Thus begins the long history of humanity outsourcing our minds to machines: entrusting our thoughts and memories to stone, papyrus, lithograph, wax platter, photographic negative, hard drive, and networked storage service. The extension of human memory in technological dimensions allows us to “remember” far more today than we ever did, even as it alters our capacity to perform certain kinds of biological recollection by encouraging us to focus on how we might access information rather than the information itself.50

The philosopher of cognition Andy Clark has called this the “extended mind,” a framing of cognition that accommodates the many ways in which it spills out of the conscious brain into the body and our surrounding social and technical environments.51 As we grow more engaged and connected to algorithmic systems, the “coupling” or dependency on external resources for memory, decision-making, and even desire can become stronger.52 As Clark writes in Natural-Born Cyborgs,

Human thought and reason is born out of looping interactions between material brains, material bodies, and complex cultural and technological environments. We create these supportive environments, but they create us too.53

It is in large part because this kind of technological extension is already so deeply embedded in human culture that the possible integration of human and computer has been so fraught. The contemporary version of Plato’s concern springs from the same source as Stephenson’s nam-shub—the idea that human thought and computational processes can be shown to be equivalent. The question, as Weizenbaum puts it, of “whether or not human thought is entirely computable” has led, in the guise of scientism and the blind worship of modern technical systems, to the “spiritual cosmologies engendered by modern science [becoming] infected with the germ of logical necessity.”54 Echoing Wiener’s move from certainty to probability, “they convert truth to provability.”55 Weizenbaum suggests that our obsession with mastering everything that falls within the boundaries of effective computability has not only blinded us to what lies beyond that frontier, but also weakened our capacity to understand or debate the failings of computation within its domain. “Belief in the rationality-logicality equation has corroded the prophetic power of language itself. We can count, but we are rapidly forgetting how to say what is worth counting and why.”56 We confuse knowledge and meaning, process and purpose, by substituting the teleology of the original Enlightenment quest for knowledge with that secondary substitute, the quest for quantification and calculability.

Extending this line of thinking, what troubles Weizenbaum the most is not the vision of computers directly emulating human thought, or minds modeled in silicon, but rather the corrosive impact of computational thinking on the human self. “Now that language has become merely another tool, all concepts, ideas, images that artists and writers cannot paraphrase into computer-comprehensible language have lost their function and their potency.”57 Golumbia takes up this polemical argument against computationalism as a politically dangerous ideology (to which we return below), but Weizenbaum’s fundamental concern here is about linguistic imagination and the “prophetic power” of words. The gravity, the pull of computation, encourages us to discipline and order our language, to establish persistent and isomorphic relationships between ideas, to tailor our thinking for representation in relational databases. In Snow Crash the hackers were most susceptible to the nam-shub because computational thinking had already reordered their minds: “Your nerves grow new connections as you use them—the axons split and push their way between the dividing glial cells—your bioware self-modifies—the software becomes part of the hardware.”58 The process Stephenson describes here is automatization, the realignment of mental faculties to internalize and homeostatically manage a complex task like driving a car. And, as media journalist Nicholas Carr points out in The Glass Cage, we all experienced automatization first-hand when we learned to read, gradually internalizing the rules of grammar and spelling until the interpretation of written symbols was largely effortless.59 Just as Plato feared, our interaction with the technology of the written word not only changed the medium of thought, extending it to external papers, scrolls and other material stuff, but it also changed the mode of thought.

How we think about algorithms is a question that links symbolic language, computability, and brain plasticity. We might argue that the internalization of literacy is a kind of reprogramming, a nam-shub that we laboriously inculcate in children with every passing generation.60 We might press the modern instantiation of the McCulloch–Pitts neuron into service, claiming with Wiener’s cybernetics that the reconfiguring of neural networks in the brain can be computationally described. These would be pathways to arguing that reading is a mental algorithm encoded in literate minds. I am less interested in pursuing this notion than I am in the philosophical underpinnings that bring us here: the way that language itself, particularly written language, serves as the original “outside” to human thought, the first machine for processing culture across time and space.

The question of language’s role as a technology of cognition is a deep one, linking the Church–Turing thesis’s foundations in symbolic logic to the question of magic and culturally constructed reality. Indeed, we perceive language as a special case of the relationship between humanity and technology precisely because it plays an ontological role in constructing the world as we perceive it. Andy Clark offers several compelling pieces of empirical evidence for this thesis, arguing that language is “the kind of seed-technology that helped the whole process of designer-environment creation get off the ground.”61 The technological function of language as a system of me underpins not just communication but “precise numerical reasoning,” which brain scans have revealed depends on language centers.62

Weizenbaum argues the same point in the context of imagination:

It is within the intellectual and social world he himself creates that the individual prehearses and rehearses countless dramatic enactments of how the world might have been and what it might become. That world is the repository of his subjectivity. … Man can create little without first imagining that he can create it.63

Tools and processes are the me that embody these enactments, from the first prehistoric stone ax to the model of the informational universe that our search engines present us today. Or as Hayles puts it, cognition

reaches out into the techno-environment, dissolving the boundary between inside and outside into fluid assemblages that incorporate technical artifacts into the human cognitive system, not just as metaphors but as working parts of everyday thoughts and actions.64

And the first tool, the ur-process, is the intersubjective culture machine of language.

To think of language as a tool also allows us to begin seeing our other tools as linguistic statements too, as nam-shubs that contain concepts, grammars, verbs. They are, as Weizenbaum eloquently puts it, “pregnant symbols in themselves”: “a tool is also a model for its own reproduction and a script for the reenactment of the skill it symbolizes.”65 This line of thinking closely echoes the philosopher of technology Gilbert Simondon in his thinking about technics, or the ways in which technical objects can establish their own identities and constitute ensembles that reflect the tensions between multiple competing sociotechnical forces. Hayles weaves Simondon’s notion of ensembles together with Nigel Thrift’s argument that we are increasingly automatizing technological systems, ceasing to perceive them as forces that shape our world and simply accepting their functionality and design imperatives on a subconscious level.66 This is precisely the outcome that Weizenbaum cautioned against twenty years avant la lettre: the easy slide from rationality to a dependency on logicality and, increasingly, computational approximations of reality.

At this stage the specific relevance of this line of philosophical thinking about the nature and consequences of technics swings into view for our discussion of algorithms. The debate carried on from Plato to Simondon regarding our intellectual dependencies on external technical assemblages is paradigmatically similar to the debate over mathematical computability and logical consistency that raged in the early twentieth century, ultimately producing the Church–Turing thesis. Let me explain: mathematicians launched on the pathway to effective computability by first asking what the limits of symbolic languages were. This was an investigation of the foundations as well as the boundaries of mathematical thought based on the recognition that the languages of mathematics were themselves an essential part of the machinery of that thought. This was a limited instance of the broader cognition and mind debates Clark’s “extended mind” hypothesis sparked decades later—an examination of the relationship between cognition and the tools of cognition, grounded here in terms of mathematical truth and provability.

It was a debate about the nature of our dependence on mathematical language and the ways that choices of language, the affordances of different symbolic systems, could foreclose access to other means of understanding. Gödel’s incompleteness theorem definitively answered a fundamental, existential question (is there a logical and complete mathematical language?) with a firm negative. He demonstrated that no mathematical language whose proofs can be effectively checked can both prove all true statements about natural numbers and remain logically consistent. Perhaps more damning, no such system can demonstrate its own consistency—one must always reach outside the boundaries of such a language in order to prove it. This theorem was a startling result, addressing generations of debate over the fundamental truth of arithmetic and the foundations of mathematics, an effort to find a bedrock of philosophical authority to support the rapidly expanding structures of mathematics.
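Stated schematically (a compressed modern paraphrase, not Gödel’s own notation), the two results run as follows, where T is any consistent formal theory with effectively checkable proofs that is strong enough to encode basic arithmetic:

    \begin{align*}
    \textbf{(I)}\quad  & \exists\, G_T :\; T \nvdash G_T \ \text{ and }\ T \nvdash \lnot G_T
      && \text{(some sentence is neither provable nor refutable in } T\text{)} \\
    \textbf{(II)}\quad & T \nvdash \mathrm{Con}(T)
      && \text{(} T \text{ cannot prove its own consistency)}
    \end{align*}

The second theorem is the formal expression of that “reaching outside”: consistency can only be certified from a stronger vantage point, which then faces the same limit in turn.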

The development of the Church–Turing thesis and the foundations of effective computability created a new linguistic machinery by means of the Turing machine. This conceptual object, this abstraction engine, opened up a space of operations for computation and served the essential function of clearly articulating the nature of our dependence on limited symbolic systems. Thanks to these proofs, we learned where the boundaries are. The mathematical proofs of effective computability offered by Church, Turing, and others created a new kind of certainty, and a new metaphor for thinking with—the universal mathematical ceiling of computability, and the Turing machine, respectively. But they also encoded a new form of ambiguity, or desire, in the boundary region of effective computability as implemented processes. All of this—the representational power and logical consistency of symbolic language; the construction of technical ensembles that coevolve with human cognition; the role of language as a bridge between human and computational structures of knowledge—all of it gets swept under the rug of the algorithm, the largely unexamined construct we use to instantiate ideas as processes at the intersection of computation and culture.
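To see how spare that abstraction engine really is, a Turing machine can be sketched in a dozen lines: a tape, a read/write head, and a finite table of transition rules. The toy machine below (a bit-flipper of my own invention, not one of Turing’s examples) is a minimal sketch in Python:

    # A minimal Turing machine: a sparse tape, a head position, and a
    # finite rule table mapping (state, symbol) -> (state, write, move).
    def run_turing_machine(rules, input_tape, state="start", max_steps=1000):
        tape = dict(enumerate(input_tape))  # unwritten cells read as blank "_"
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                return "".join(tape.get(i, "_") for i in range(min(tape), max(tape) + 1))
            state, write, move = rules[(state, tape.get(head, "_"))]
            tape[head] = write
            head += 1 if move == "R" else -1
        raise RuntimeError("no halting state reached")  # undecidability in miniature

    # Toy rule table: flip every bit, then halt at the first blank cell.
    rules = {
        ("start", "0"): ("start", "1", "R"),
        ("start", "1"): ("start", "0", "R"),
        ("start", "_"): ("halt", "_", "R"),
    }
    print(run_turing_machine(rules, "0110"))  # prints "1001_"

Everything consequential about the machine lives in the rule table; the harness around it never changes. That is the sense in which the Turing machine is a machine for describing machines.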

Process and Halting States

Now that we have taken the algorithm apart we can reassemble it in a new context. Drawing these threads together, we can see the multivalent and growing significance of the algorithm, that seemingly shallow and shopworn cultural figure at the heart of this book. The algorithm is an idea that puts structures of symbolic logic into motion. It is rooted in computer science but it serves as a prism for a much broader array of cultural, philosophical, mathematical, and imaginative grammars.67 The most radical of these tensions exists between the algorithm’s role as an effective procedure, a set of steps designed to produce an answer in a predictable length of time, on the one hand, and its function as a perpetual computational process, on the other. There is a crucial supposition embedded in both the pragmatist’s definition of the algorithm and the ideology of computationalism. In each case, the logical abstraction of mathematical solutions is yoked to a carefully considered definition of process as time-limited. For engineers it’s a method to solve a problem. For Church, Turing, and the computationalists, the notion of effectively computable problems and the Turing machine itself both depend on processing, on carrying out instructions in finite time. At one level this might seem facile or tautological: the definition of a method must rely on some notion of method. But, like the Turing machine, or Chun’s definition of software, the method has become its own metaphor, more or less visible, for how the algorithm really works as a process that runs forever, persistently modeling reality.

Google search is one such algorithm that embeds these tensions into its technical and cultural architecture. This system delivers relevant results based on a wide range of factors, completing its execution in hundredths of a second—an effective procedure that proudly announces the rapidity of its completion with every query. And yet, as a process, search operates perpetually, extending its reach and influence over the Internet as Google aggregates new sources of information into its systems. As we will see in the next chapter, that influence extends into the future as well, as the company focuses on anticipating our future needs in addition to answering our present questions. For the company, the space of computable questions is continually expanding in multiple dimensions. Engineers at Google, Apple, Amazon, and many other entities are working ceaselessly to push the envelope of effective computability in order to make their products better and to create new culture machines: a limitless frontier for limitless processing.
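The tension can be caricatured in a few lines of code: the query handler is an effective procedure that halts, while the indexing loop that feeds it has no halting state by design. A toy sketch with invented names, in no way Google’s architecture:

    index = {}  # inverted index: term -> set of document ids

    def answer_query(term):
        # Effective procedure: bounded work, returns an answer, halts.
        return sorted(index.get(term, set()))

    def crawl_forever(feed):
        # Perpetual process: aggregates new information with no halting state.
        for doc_id, terms in enumerate(feed):
            for term in terms:
                index.setdefault(term, set()).add(doc_id)

    # Fed a finite stand-in here only so the demonstration itself can halt.
    crawl_forever([["rain", "weather"], ["sun", "weather"]])
    print(answer_query("weather"))  # prints [0, 1], in a fraction of a second

Every call to answer_query completes and announces its completion; crawl_forever, like search as a cultural process, is written never to return.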

In this way the process of the algorithm transcends the logic of the effective procedure to become a steady-state technical being, as Simondon would have it. Search is not just a system that leaps into action for a fraction of a second here or there; it is a persistent, highly complex organism that simultaneously influences the shape of the Internet, drives new innovations in machine learning, distributed computing, and various other fields, and modifies our own cognitive practices.

It should be clear to anyone who has participated in digital culture for the past decade that this phenomenon is not homeostatic, but rather is moving irreversibly toward particular goals. And, as believers in the singularity like to point out, it is accelerating as it goes. From Moore’s Law (predicting that the number of transistors packed onto a computer chip would double every two years) to the explosive growth in global data production, it’s obvious that “ubiquitous computing” will continue to create a thickening layer of sensors, data, and algorithms over physical and cultural space. From television shows to finance, we are claiming new spaces for computation in a period of expansion fueled by the tension between time-limited (effective) procedures and perennial processes.
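In compact form (a standard paraphrase of the two-year doubling, not Moore’s own formulation): if a chip holds N₀ transistors today, then after t years it holds

    \[
    N(t) = N_0 \cdot 2^{\,t/2}
    \]

so a single decade multiplies transistor density by 2^5 = 32. It is that exponent, compounding quietly in the background, that gives the “thickening layer” its speed.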

The answer we have come up with is to continually expand the problem space while still offering finite computational solutions. Algorithmic thinking encodes the computationalist vision, the maximalist idea that all complex systems will eventually be made equivalent through computational representation. This is the desire for effective computability writ large, and it has existential consequences for humanity. As our extended mind continues to elaborate new systems, functions, applications, and zones of operation, the question of what it means to be human grows increasingly abstract, ever more imbricated in the metaphors and assumptions of code. Discussing Simondon’s vision of technics as interpreted by fellow philosopher Bernard Stiegler, media scholars Andrés Vaccari and Belinda Barnet argue that

both philosophers put the idea of a pure human memory (and consequently a pure thought) into crisis, and open a possibility which will tickle the interest of future robot historians: the possibility that human memory is a stage in the history of a vast machinic becoming. In other words, these future machines will approach human memory (and by extension culture) as a supplement to technical beings.68

Our existential anxiety about being replaced by our thinking machines underlies every thread of algorithmic thinking, from the shibboleth of the Turing test and Wiener’s argument for the “human use of human beings” to the gradual encroachment of digital computation on many human occupations, beginning with that of being a “computer.” Nowhere is the prospect more unsettling than in the context of extended cognition, however. As we outsource more of our minds to algorithmic systems, we too will need to confront the consequences of dependence on processes beyond our control. There is some compelling evidence to suggest that the externalization of human memory and experience makes certain technological advances “inevitable,” according to sociologists William F. Ogburn and Dorothy Thomas.69 The universal machine of culture itself might prime new intellectual discoveries, making certain inventions not just possible but inescapable at certain historical junctures. Calculus, natural selection, the telegraph: all were “discovered” or “invented” multiple times, in various guises, as words, ideas, and methods circulated through the right scientific circles. As Vaccari and Barnet’s playful notion of future robot historians suggests, it’s easy to read these events as moments in a long arc of progress that might not include humanity at its end.

As Stiegler has argued in partial reply to Simondon, the balance of agency may already lie with technical systems:

Today, machines are the tool bearers, and the human is no longer a technical individual; the human becomes either the machine’s servant or its assembler [assembliste]: the human’s relation to the technical object proves to have profoundly changed.70

This is a phase shift in process. As a vessel for putting symbolic logic into motion, the algorithm has increasingly come to manage not just memories but decisions. The growing complexity of many human fields, particularly in technical research, has deepened our dependence on computational systems and in many instances made scientific experimentation itself a domain for effective computability. Algorithmic approaches to research have already prompted some investigators to argue that “automated science” will revolutionize technical progress, perhaps even making the generation of hypotheses obsolete as algorithms continuously interact with huge volumes of data.71 Algorithms have generated mathematical proofs and even new explanatory equations that defy human comprehension, making them “true” but not “understandable,” a situation that mathematician Steven Strogatz has termed the “end of insight.”72

For Stiegler this is a nightmare; for others it presages the computational rapture, the event horizon of the singularity, when algorithmic intelligence transcends humanity (with infamously unpredictable results for our species). If the origin story of code begins with language, logos, and the manipulation of symbols to generate meaning, this is its mythical finale, the triumph of sign over signification. We know it as the apotheosis of the algorithm, when technological change will accelerate to such speed that human intelligence may simply be eclipsed. In this scenario we no longer manipulate the symbols and we can no longer construe their meaning. It is the endgame of computationalism as considered by philosopher Nick Bostrom, computer scientist Vernor Vinge, and others, an existential referendum on the relationship between humanity and technics.73 If we follow the asymptote of the effective procedure far enough, the space of computation advances with not just a vanguard but a rearguard, and humanity might simply be left behind—no longer effective or efficient enough to merit emulation or attention.

Ironically this possible end-state—the end of insight—is a rationalist romance, drawing its lineage straight to the deeply humanistic spirit of inquiry at the heart of the Enlightenment. It extends the vision of Denis Diderot, one of the cocreators of the world-changing Encyclopédie, as we’ll see in chapter 2: persistently applying the system or procedure of the Enlightenment would eventually lead to a state of transcendent knowledge, leaving open the question of whether robot encyclopedists can experience transcendence. Isaac Asimov took that vision still further, calling it “psychohistory” in his Foundation stories. With enough cleverness and data, he imagined, we can predict the course of human events because culture is algorithmic, because individuals and circumstances can be abstracted away according to dependable rules. If the singularity provides one way to interpret the endgame of computationalism, this is the second: the triumph of instrumental reason effected by machines we can no longer understand.

Our technical systems have specifically political implications, articulating certain forms of power that often contradict the emancipatory rhetoric of computation. David Golumbia indexes this political calculus in The Cultural Logic of Computation, noting how

computerization tends to be aligned with relatively authority-seeking, hierarchical, and often politically conservative forces—the forces that justify existing forms of power [in a project that] meshes all too easily with the project of instrumental reason.74

The “psychohistory” that Asimov imagined as a potentially emancipatory technical discovery is, for Golumbia, exemplary of a passive acceptance of the political statement that “a great deal, perhaps all, of human and social experience can be explained via computational processes.”75 At its heart, this is about the politics of abstraction, which Golumbia ties to the instrumental reason of the Enlightenment. It is the same anxiety that communications scholar Fred Turner traced in 1960s student protestors who repurposed computer punch cards to battle the administrative machine: “I am a UC student. Please do not fold, bend, spindle or mutilate me.”76 This second telos also ends with the triumph of the machine, but what Golumbia imagines is a different sort of engine: the kinds of state power and bureaucracy that computational management and quantification enable. The place of the human is ambiguous at best in both the singularity universe and in Golumbia’s reading of computational ideology.

Golumbia stands in here for a range of critics who argue that the de facto result of computational culture, at least if we do not intervene, is to reinforce state power. In later chapters we will examine more closely the “false personalization” that Tarleton Gillespie cautions against, extending Internet activist and author Eli Pariser’s argument in The Filter Bubble, as well as media theorist Alexander Galloway’s elegant framing of the political consequences of protocol.77 But Turner’s From Counterculture to Cyberculture offers a compelling view of how countercultural impulses were woven into the fabric of computational culture from the beginning. The figure of the hacker draws its lineage in part from the freewheeling discourse of the industrial research labs of the 1940s, the facilities that first created the opportunities for young people not only to work but to play with computers.78

But, as Galloway has argued, the cybernetic paradigm has recast the playful magic of computation in a new light.

With the growing significance of immaterial labor, and the concomitant increase in cultivation and exploitation of play—creativity, innovation, the new, the singular, flexibility, the supplement—as a productive force, play will become more and more linked to broad social structures of control. Today we are no doubt witnessing the end of play as politically progressive, or even politically neutral.79

This shift, the computation of play, signals a fundamental sea change in values that we will address in more detail in chapters 4 and 5: the substitution of process itself as a value for the human experiences of play and joy. The political critique of computationalism reaches its pinnacle here, in the argument that our most central human experiences, the unconstrained play of imagination and creativity, are increasingly falling within the boundaries of effective computability and the regime of computation.

Implementation

The mythos of computation reaches its limits where it begins to interact with material reality. Like UPS’s ORION, computation in real-world environments is messy and contingent, requiring constant modification and supervision. I call this problem “implementation”: the ways in which the desire for effective computability gets translated into working systems of actual computers, humans, and social structures. Learning what goes on inside the black box of the algorithm does not change the fact that the action is specifically contained by its implementation: the box itself is just as important. By learning to interpret the container, the inputs and outputs, the seams of implementation, we can begin to develop a way of reading algorithms as culture machines that operate in the gap between code and culture.

Negotiating that gap is precisely what algorithms do: operating at the intersection of computational and cultural space, they must compromise or adjudicate between mathematical and pragmatic models of reason. The inescapability of that work, the fact that algorithms must always be implemented to be used, is actually their most significant feature. By occupying and defining that awkward middle ground, algorithms and their human collaborators enact new roles as culture machines that unite ideology and practice, pure mathematics and impure humanity, logic and desire. To discuss implementation is thus to join a conversation about materiality and the embodied subjects that enact, transmit, and receive information.80 Simply asking where an algorithm like ORION exists might lead to very complicated answers involving a distributed network of sensors, servers, employees, code, and so on. To grapple with that question we need to turn to platform studies and media archeology, where we can consider implementation as a form of materiality grounded in the hardware and software that make up the “foundation of computational expression.”81

In Mechanisms, media scholar Matthew Kirschenbaum deploys two forms of materiality for reading digital objects. “Forensic materiality” is the physical and material situation of a particular digital object: the particular hard drive where a database is stored, with its own electromechanical systems and physical characteristics.82 In the context of algorithms the logistical details of where and how data is stored are vital aspects of implementation, like the vast amount of energy expended to keep major data centers in operation. “Formal materiality” is the intellectual shadow cast by these physical manifestations of computation. The term refers to the “imposition of multiple relational computational states on a data set or digital object,” or the “procedural friction or perceived difference—the torque—as a user shifts from one set of software logics to another.”83 Kirschenbaum eloquently describes the essential role of the observer or forensic investigator in effectively reading digital objects, and formal materiality illustrates how much translation or transposition is involved in effectively manipulating them.

An algorithmic system must be implemented in a forensically material way by having its code and data stored on some physical hard drive, running on some processor. But these physical instantiations involve a dizzying flurry of formal materiality moves before they can become broadly accessible: the server running this algorithm might really be a virtual conglomerate of hundreds of machines organized by a distributed computing platform like Hadoop; the various instances of the algorithm might exist in software containers managed by another formal material layer, something like the Docker platform; the public interfaces for the algorithm might vary their appearance and behavior based on user customization, like Google’s tailored search results. Of course, the list can go on: the boundaries of implementation seem endless because they are the boundaries of the material universe.
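The stacking can be suggested schematically. Each wrapper below is an invented stand-in (not the Hadoop or Docker APIs) for one layer of formal indirection between the forensic machine and the user:

    def core_algorithm(query):
        # The computation itself, running somewhere on physical hardware.
        return "results for " + repr(query)

    def distributed_layer(fn):
        # Stand-in for a cluster scheduler fanning work across many machines.
        return lambda query: fn(query)

    def container_layer(fn):
        # Stand-in for an isolated, portable runtime wrapped around the process.
        return lambda query: fn(query)

    def interface_layer(fn, user):
        # Stand-in for per-user customization of inputs and outputs.
        return lambda query: "[" + user + "] " + fn(query)

    served = interface_layer(container_layer(distributed_layer(core_algorithm)), "some_user")
    print(served("weather"))  # the same algorithm, reached only through its layers

Each layer here is deliberately trivial; in a production system each would be a sociotechnical institution in its own right, with its own energy bills, failure modes, and maintainers.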

And yet the algorithm is always bounded in implementation because the principle of effective computability is central to its formal identity. This is why I choose to use the word “implementation” rather than rely on the concept of platform studies or materiality per se. As a tool or effective procedure, the algorithm is an implement that is coded into existence through a web of forensic and formal analogies, assumptions, and declarative frameworks. The ideal implementation must encode or embed abstracted versions of those externalities, or deal with them as structured inputs, if it is to operate successfully and “solve the problem” at hand. But of course the reality of implementation always incurs new contingencies, dependencies, and complexity, muddying the ground between the forensic and formal status of the algorithm as an implemented system. Unlike the proverbial black box, the culture machine is actually porous, ingesting and extruding cultural and computational structures at every connection point with other sociotechnical systems.

Ian Bogost, from “The Cathedral of Computation”:

Once you start looking at them closely, every algorithm betrays the myth of unitary simplicity and computational purity. … Once you adopt skepticism toward the algorithmic- and the data-divine, you can no longer construe any computational system as merely algorithmic. Think about Google Maps, for example. It’s not just mapping software running via computer—it also involves geographical information systems, geolocation satellites and transponders, human-driven automobiles, roof-mounted panoramic optical recording systems, international recording and privacy law, physical- and data-network routing systems, and web/mobile presentational apparatuses. That’s not algorithmic culture—it’s just, well, culture.84

Piercing the illusion of computation as an exceptional religious experience, however, leaves us with a new problem. It may be “just, well, culture,” but it is a culture increasingly transformed by these platforms. Giving up the magic of code does not change the pervasive effects of algorithmic implementations on cultural systems. Instead, it complicates the picture, erasing the false simplicity and idealism of Silicon Valley–style computationalist evangelism. What we are left with underneath that facade of computational perfection is exactly the mess of interconnected systems, policy frameworks, people, assumptions, infrastructures, and interfaces that Bogost describes above.

In other words, implementation runs both ways—every culture machine we build to interface with the embodied world of human materiality also reconfigures that embodied space, altering cognitive and cultural practices. More important, this happens because implementation encodes a particular formulation of the desire for effective computability, a desire that we reciprocate when we engage with that system. The algorithmic quest for universal knowledge mirrors and feeds our own eternal hunger for self-knowledge and collective awareness. The effectiveness of systems that model, predict, and recommend things to us can feed the religious experience Bogost cautions against, and we willingly accept their abstractions in order to feel the magic of computation. There is a seductive quality to algorithmic models of digital culture even when they are only partially successful because they order the known universe. You listen to a streaming music station that almost gets it right, telling yourself that these songs, not quite the right ones, are perfect for this moment because a magical algorithm selected them.

Computational algorithms may be presented as merely mathematical, but they are operating as culture machines that dramatically revise the geography of human reflexivity, as we will see in the algorithmic readings that follow this chapter. They reshape the spaces within which we see ourselves. Our literal and metaphorical footprints through real and virtual systems of information and exchange are used to shape the horizon ahead through tailored search results, recommendations, and other adaptive systems, or what Pariser calls the “filter bubble.”85

But when algorithms cross the threshold from prediction to determination, from modeling to building cultural structures, we find ourselves revising reality to accommodate their discrepancies. In any system dependent on abstraction there is a remainder, a set of discarded information—the différance, or the crucial distinction and deferral of meaning that goes on between the map and the territory. This gap emerges in implementation, when the collisions between computational and cultural understandings of algorithms must be resolved. In many ways the gap creates the cultural space for the figure of the algorithm, providing the glitches, inexplicable results, and strange serendipity we imagine as the magic of code. In Snow Crash, the problem of implementation underwrites several crucial plot twists, but one of the most memorable is the plotting of three-dimensional space in the Metaverse, the virtual reality where Hiro Protagonist, fellow hackers, and other Technorati congregate. Hiro can move through walls by sticking his katana through them and following his sword, exploiting a

loophole that he found years ago when he was trying to graft the sword-fighting rules onto the existing Metaverse software. … But like anything else in the Metaverse, [the rule governing how walls function] is nothing but a protocol, a convention that different computers agree to follow. In theory, it cannot be ignored. But in practice, it depends on the ability of different computers to swap information very precisely, at high speed, and at just the right times.86

For the novel, this is a convenient trick, like many of the things hackers exploit or create: a mechanism for sidestepping standard structures of control reminiscent of Galloway’s call to arms in Protocol. But for our purposes, it also illustrates the essential features of the gap between computational and cultural metaphors, between abstraction and implementation.

Hiro’s katana thrust works in part because it exploits the gulf between different logical regimes of abstraction—the algorithmic rules governing swords in the Metaverse and the set of similar rules governing avatars and structures. It also depends on the abstracted construction of temporality in computational systems—as Stephenson points out, the gap is temporal as much as it is spatial, depending on the lag between Hiro’s satellite connection and the servers handling his session in the Metaverse. Hiro engages in a kind of arbitrage when he exploits the lag between two algorithmic systems to literally hack his way into a black box. And, finally, the gap is cultural: Hiro asks an impossible question when he pokes his sword into the wall, and he receives just the impossible answer he was hoping for—shazam, hacker magic is performed.

It is important to realize, however, that the gap is not the same as the glitch, the crash, or other signs of malfunctioning computational systems. These moments when the facade of computational omniscience falls are very helpful in seeing the gap, and they have given rise to fascinating genres of computer art and performance, but they are only windows into the broader opening between computation and reality. We construct the gap, or create space for it, on both sides. Algorithmic systems and computational models elide crucial aspects of complex systems with various abstracting gestures, and the things they leave behind reside uneasily in limbo, known and unknown, understood and forgotten at the same time. But the human participants, users, and architects of these systems play an equally important role in constructing the gap when we organize new cognitive patterns around computational systems and choose to forget or abandon forms of knowledge we once possessed. Every moment of dependence, like a forgotten phone number or spelling that we now depend on an algorithmic system to supply, and especially every rejected opportunity for direct, unmediated human contact, adds a little to the space between computation and human experience.

Algorithms work as complex aggregates of abstraction, incantation, mathematics, and technical memory. They are material implementations of the cathedral of computation.87 When we interact with them, we are speaking to oracles, gods, and minor demons, hashing out a pidgin or trade language filled with command words, Boolean conjunctions, and quite often, deeply personal information. We are constantly reworking the myth of the algorithm through these interactions, reaffirming it through our recitals of the familiar invocations (muttering “OK Google Now” or tapping out a familiar URL) and extending its reach as we develop more sophisticated relationships with computational culture machines. Those relationships depend on multiple forms of literacy—we are all reading algorithmic systems now, more or less badly, depending on our awareness and attention to the context of implementation.

Algorithmic Reading

To effectively read the strange figure of the algorithm, that deliberately featureless, chameleon-like passthrough between computational and cultural logics, we need to take an algorithmic approach ourselves. The reading of complex computational cultural objects requires its own effective procedure, one that operates in the space of implementation between critical theory, computational logic, and cultural understanding. Just as computational algorithms embed a desire to make all things effectively computable, we should recognize the agenda that algorithmic reading brings with it: a desire to make all facets of computation legible to human beings. As literary critic Stephen Ramsay argues in the conclusion to Reading Machines, the “new kinds of critical acts” he terms algorithmic criticism may be not only possible but necessary, “implicit in the many interfaces that seek only to facilitate thought, self expression [sic] and community.”88 Algorithmic reading, as I define it below, is a mode of thought, or a tool for thinking, that anyone can use to interpret cultural artifacts.

In this light algorithmic reading triangulates between competing desires: the computationalist quest to continually expand the boundary of the effective procedure, on the one hand, and the human desire for universal knowledge, on the other. Between them, something new that we are only now beginning to recognize: the mutually constitutive desire to create and manipulate the gap, to have a kind of magic emerge from the complex interactions of abstraction and implementation like flocks of birds from a computational game of life. That différance provides the energy for our evolving love affair with computation, and it is the resource we tap into when we perform algorithmic readings.

This is why reading the gap and reconstructing the computational and social forces that make up the walls and linkages of each culture machine, each porous computational box, is itself a “method for solving a problem.” Like the algorithm itself, algorithmic reading is a complex conceptual structure containing layers of processes, abstractions, and interfaces with reality.

The algorithmic object of study extends far beyond the surface manifestation of a particular fragment of text or multimedia. A reading of a particular post on Facebook, or even, say, Note Book, a collection of literary scholar Jeff Nunokawa’s essayistic Facebook posts, would capture only the human side of the collaboration unless it engaged directly with the apparatus of Facebook itself. In this way algorithmic reading draws from the multiple critical forerunners we have already considered here—cybernetics, cultural studies, platform and software studies, media theory, and digital materiality. We are just beginning to work out how to pull these different perspectives together to ask questions about the ethics of algorithms, the legibility of software, and the politics of computation. Algorithmic platforms now shape effectively all cultural production, from authors engaging in obligatory Twitter badinage to promote their new books to the sophisticated systems recommending new products to us. A central tenet of algorithmic reading, what distinguishes the method, is that we must take the culture machine itself as the object of study, rather than just its cultural outputs. To do that effectively, I’d like to offer a set of key terms or transformative concepts that serve as the central functions of any culture machine.

The first methodological tool we need is a grounded critical understanding of process. Algorithms of all kinds advance a version of the effective computability argument, encoding explicit or implicit arguments that the problem—whether it is agriculture or square root extraction—can be solved by following the steps of the method. In this way process itself is an ordering logic for critical understanding, leaning on notions of “process philosophy” espoused by the philosophers Martin Heidegger, Alfred North Whitehead, Simondon, and Stiegler, among others. The algorithmic object of study is a system in motion, a sequence of iterations that comes into being as it moves through time. The most important aspect of an algorithmic system is not the surface material it presents to the world at any particular moment (e.g., the items appearing at the top of one’s Facebook feed) but rather the system of rules and agents that constantly generate and manipulate that surface material (e.g., the algorithms filtering and promoting particular nuggets of content). That process embeds, as we explored above, the tension between self-perpetuation and completion, between an effective procedure that ends gracefully and a spirit of universal computation that fills the universe.
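As a schematic of that distinction, consider a toy feed. The interesting object is not any single ranked list but the rule system that regenerates the list on every pass; the scoring rule below is invented for illustration and is no one’s production algorithm:

    def rank_feed(items, now):
        # One iteration of the process: score the pool, order the surface.
        def score(item):
            recency = 1.0 / (1.0 + (now - item["posted_at"]))  # hypothetical decay
            return item["affinity"] * recency                  # hypothetical weighting
        return sorted(items, key=score, reverse=True)

    items = [
        {"id": "a", "affinity": 0.9, "posted_at": 90.0},
        {"id": "b", "affinity": 0.4, "posted_at": 99.0},
    ]
    print([item["id"] for item in rank_feed(items, now=100.0)])  # prints ['b', 'a']

The surface changes with every tick of the clock and every new item; the critical object of study is rank_feed and the signals it consumes, iterated without end.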

This notion of process depends intimately on our second methodological keyword, abstraction. Algorithmic systems are objects of study not only for what they include, but for what is elided. The systems of abstraction that translate electrical signals to assembly language to high-level code to a graphical user interface to a system of icons and cultural metaphors (with many other layers in between) create ideological frames and arguments about reality. The work of media scholars like Hayles, Galloway, and McKenzie Wark serves to illuminate how these abstractions work in the world. If algorithms are culture machines, abstractions are one of their primary outputs. As an example, in chapter 4 I discuss Uber’s application interface, with a cartoonish map showing cars roaming the urban grid. Uber depends on abstracting away the complexities, regulations, and established conventions of hailing a cab, turning the hired car experience into a kind of videogame. That mode of abstraction has been so successful that an entire genre of Silicon Valley startups can now be categorized as “Uber for X,” where that X is actually a double abstraction. First, we adapt Uber’s simplifying, free-agent “sharing economy” business model to another economic arena. Then we make all such arenas fungible, a variable X that can stand for any corner of the marketplace where ubiquitous computing and algorithmic services have yet to disrupt the status quo. Like Turing’s original abstraction machine, these systems extend a symbolic logic into the cultural universe that reorders minds and meanings that come into contact with them.

The medium for these interactions is our third keyword, the state of implementation. The processes that culture machines run and the abstractions they produce can only exist in the space of implementation. That space is a gap between computational and cultural constructions of reality, one that culture machines both generate and manipulate in order to achieve their procedural objectives and the broader expansion of effective computability. Netflix’s decision to use a group of human “taggers” to evaluate its streaming video catalog according to a range of qualitative and quantitative metrics represented a profound shift in implementation, as we’ll see in chapter 3. They left behind the purely statistical approach of their first recommendation algorithm in favor of a messier, more culturally entangled process, a transformation that now informs the even more complicated business of creating original content like House of Cards based on human and algorithmic inputs.

The growing interdependence of humans and algorithms in creating culturally complex, aesthetically evaluated culture machines and creative works leads us to the most challenging aspect of algorithmic reading: imagination. The gap between computation and culture is not just a gulf between different systems of symbolic logic, of representation and meaning: it is also a gap between different modes of imagination. All symbolic systems, all languages, contain a particular logic of possibility, a horizon of imagination that depends on the nature of representation and semantic relationships. Mathematicians can precisely describe highly abstract relationships that are almost impossible to define in more familiar human language. Computational systems are developing new capacities for imaginative thinking that may be fundamentally alien to human cognition, including the creation of inferences from millions of statistical variables and the manipulation of systems in stochastic, rapidly changing circumstances that unfold faster than we can effectively comprehend. We see computational imagination throughout the readings that follow, from the “ghost in the machine” that a Netflix VP described in his own system’s results to the kinds of strange serendipity and beautiful glitches we have all glimpsed at the edges of computation’s facade of perfect functionality and predictability.89

Taken together, these components of algorithmic reading provide the ingredients for a new recipe, an algorithmic approach to the cultural understanding of algorithms. It is a means of reading by the lights and shadows of machines: the brilliant illumination of computationally enhanced cognition and the obfuscations of black boxes. As our keywords suggest, algorithmic reading is a critical frame for interpreting objects that are also interpreting you: computational systems that adapt to your behavior in a mutual hermeneutic process.

After all, we are already communing with algorithms—sharing, trusting, and deputizing them to think and act on our behalf. For every nefarious black box and oppressive platform I unearth in this dig, there are bright spots: instances of astounding creativity and insight that would never have been possible without the collaboration of human and machine. Like all of our other myths, the culture machine has been us all along. We build these tools, we imbue them with power and history, because we seek to secure some part of ourselves outside the fragile vessel of the human form. We build cathedrals, rituals, and collective stories to cast a spell on ourselves, a nam-shub of eternal memory to keep our brightest moments alive. Understanding the figure of the algorithm is the first step to becoming true collaborators—and not just with machines, but with one another through the vast collectives that algorithmic systems make possible. Underneath all these layers of silted symbol, code, and logic, we find that the figure of the algorithm is not fixed but in motion, and that algorithmic reading requires working in a charged sphere of action between computation and culture. This is the playground of algorithmic imagination, the zone where human and computational assemblages can do extraordinary, beautiful things.

Notes