3 Material Infrastructures of Writing and Programming

Historical studies of other technologies are important not so that historical analogies can be made, but because without such historical analyses, we cannot truly understand the nature and shape of current technologies.

—Christina Haas1

During the eleventh through thirteenth centuries in England, writing evolved from an occasional tool into a highly useful and infrastructural practice for the communication and recording of information. For thousands of years prior, writing had helped to record commerce and maintain redistributive economies, though it had a niche status and its importance fluctuated with vicissitudes in governance and trade.2 But after the thirteenth century, writing in England never again waned in its central role in communication and recording of information—the power of writing “stuck.”3 Centralized government initiated this transition, but once writing began building bureaucracy, it also rippled out to restructure commerce and then individual and family life. In this process, fundamental concepts of memory, identity, and information shifted to accommodate the fact that knowledge could exist externally to individuals, travel without human proxies, and be preserved for posterity.4 As the technology of writing transitioned from rare to common, scribes acquired a special status apart from other craftspeople. Whether or not people could read or write—and most couldn’t for at least another 500 years—they saw writing infiltrate many of their everyday transactions and activities. As literacy historian Brian Stock argues, people “began to live texts.”5

In a similar way, computer programming appears to have “stuck” in twentieth-century American society. Although the roots of computational devices extend back further, once code-controlled digital computers were widely adopted as information-processing tools by government offices and large corporations in the 1950s, code and the computational devices used to process it became increasingly infrastructural to business, bureaucratic transactions, and social practices of life in the West. Writing remade institutions and individual lives as it became infrastructural to medieval government, commerce, and social relations, and computer programming is restructuring our lives now. We e-mail each other and pay our bills online; our health, employment, marriage, credit, and tax records are recorded in computerized databases; we rely on computational algorithms to filter our news and purchases; and our free time and relationships are shaped by software such as Facebook, Twitter, Match.com, Yelp, and TripAdvisor. In developed nations such as the United States, code increasingly supports our information and communication infrastructure. As computational devices become more portable and more deeply embedded in our physical surroundings and as more spheres are subjected to computation—consumer buying habits, facial recognition, employee performance evaluations, and national security infrastructure and surveillance—we are increasingly controlled and defined by the computation enacted through computer programming. Historian Michael Mahoney writes, “From the early 1950s down to the present, various communities of computing have translated large portions of our world—our experience of it and our interaction with it—into computational models to be enacted on computers, not only the computers that we encounter directly but also the computers that we have embedded in the objects around us to make them ‘intelligent.’”6 Via ubiquitous and sometimes omnipotent computational devices and processes, computer programming has joined writing to become infrastructural to our lives as citizens, employees, and social beings.

With computer code, we have not had the luxury of time to adjust to a new material infrastructure, as was enjoyed by residents of medieval England. Stock notes that the changes he describes over the eleventh and twelfth centuries may not have been apparent within a single lifetime.7 Across the eleventh through thirteenth centuries, historian Michael Clanchy finds “no evidence of a crisis suddenly demanding numerous literates. Because the pre-literate emphasis on the spoken word persisted, the change from oral to literate modes could occur slowly and almost imperceptibly over many generations.”8 In contrast to the centuries-long span when people gradually became accustomed to how writing could structure their lives, our embrace of computer code has been much quicker and is often painfully perceptible. As the first two chapters have shown, twenty-first-century code exhibits some of the symbolic, social, and operational features we associate with writing. Because code has been able to build on the extensive communication infrastructure already established through writing, it has itself become infrastructural much more quickly. Its role in our infrastructure renders it an important symbolic system to understand and communicate in. How can we use the history of writing to understand our society’s dependence on computer code and the programming that constructs it? How might our new hybrid infrastructure of writing and code shape the institutions and lives that we build on it, as well as our composing practices?

To explore these questions, this chapter examines the period when text became central to societal infrastructure and then uses that history to understand the patterns through which we have now embraced code. This chapter works in conjunction with the next, which focuses on a second key era in the history of writing: the birth of mass literacy. The first era, described in detail below, is marked by the adoption of the material technologies of writing as central means of organizing society. In England, historians mark this transition as occurring in the eleventh through thirteenth centuries. I argue that a similar transition occurred for the technology of programming during the 1950s and 1960s in the United States. While in the first era (covered in this chapter) inscription technologies are adopted as material infrastructures, in the second era (chapter 4) these inscription technologies begin affecting quotidian activities of everyday citizens: literacy is adopted as infrastructure.

Put another way, chapters 3 and 4 use the history of writing and literacy to explore how and why writing and code have worked their way into our everyday lives. For both writing and code, the evolution from material infrastructure to literacy infrastructure follows a similar pattern. Writing and code are first adopted by centralized institutions for their communicative and information-processing and storing potentials, described in the previous chapter. Initial material costs for writing and programming are high, and centralized institutions seem better poised to absorb them. Initiatives and innovations by these institutions then push the technology out to commercial and bureaucratic spaces. Next, the technology enters homes and individual lives; it becomes domesticated. This final stage paves the way for literacy, as the technology becomes so personalized and enmeshed in people’s everyday lives that to not know how to communicate in it becomes a disadvantage, and the naming of illiteracy as a concept points to an emerging mass literacy. For writing, a critical mass of people became fluent with the technology, which meant text was no longer a domain reserved just for specialists. Society could then build on the assumption that most people could read and write. For writing in Western society, this second transition happened in the long nineteenth century—the era of mass literacy campaigns by church and state. For programming, this transition to mass literacy is yet to occur. A central claim of this book is that programming is following a similar trajectory to writing, and as it does so, it changes what literacy is. This chapter tells the story of how the material infrastructure is laid for both writing and programming, ahead of the (potential) transition to mass computational literacy.

This history of text and code trickling down from centralized government to commerce to individual lives that I tell here is, like any history, both oversimplified and motivated. The adoption of writing and code as material infrastructure and the birth of literacy occurred at different times in different places, affected some demographic groups before others, and were not linear or smooth in any time or place. Their trajectories bear similarities, but their material, social, and historical conditions are different. Yet we can gain insight from history. Historians of literacy have broadly marked the two shifts I highlight in this chapter and the next.9 Not only are the locations and moments I have chosen richly documented, but they also appear to serve as vanguards for other locations and periods in the adoption of writing and code.10 Other historical comparisons may prompt other observations, but the particular comparisons I make in these chapters illuminate our contemporary transition, when code has become infrastructural and programming is becoming a powerful and generalizable—though not yet generalized—ability. Examining the key transition for the technologies of text and code can help us understand not only what it means to “live code”; it can also suggest something more generally about the wide-scale adoption of new communication infrastructures—who initiates an adoption, where it moves next, what kinds of pressures precede the adoption, and what kinds of structural shifts it engenders.

To begin, we travel back in time to eleventh- through thirteenth-century England, when writing “stuck.” Texts from that period indicate an increased reliance on the technology of writing in both church and state governance. The census, contract and common law, and the use of textual artifacts in everyday human interactions all point to writing infiltrating the everyday lives of citizens. After looking at the early infrastructures of writing established during this period, we jump forward to the 1950s, when programming “stuck” in American society. In the postwar era, rapid advances in computers and the code used to control them brought these material technologies into national defense and commercial infrastructures. These advances prefigured a later transition to programming as a domestic activity and the beginnings of a new popular mentality influenced by computation—where chapter 4 picks up the thread.

A few words about key concepts I draw on here. The concept of infrastructure, which I borrow in part from Susan Leigh Star, is central to understanding the historical parallels drawn in this chapter and the next. Star names several important aspects of infrastructure on the basis of her ethnographic work on people’s interactions with communication technologies.11 Infrastructures are embedded, transparent until they break down, have broad reach, are shaped by standards, and are difficult to change.12 As Star notes, “We see and name things differently under different infrastructural regimes.”13 In other words, infrastructures fundamentally shape the way a society operates.14 While she and her collaborators focus on infrastructures such as medical classification systems, I use these characteristics of infrastructure to understand the central roles that writing and programming play in contemporary society on a larger scale.

The words writing and programming can each conveniently refer both to a material artifact and to the act of creating it; I use them in this chapter to do that double duty. I often use the word code when I refer solely to the material artifact of programming and text for the artifact of writing.

Another key concept for this chapter is that of “material intelligence,” which I introduced in the previous chapter. Andrea diSessa argues that material technologies allow us to store some of our thinking processes in material forms, which then become integral to our ability to think and communicate. Because they fuse our thoughts with their materiality, these technologies become material intelligences. Writing is one of them, and programming is another, diSessa argues. A literacy, according to diSessa, is a more broadly distributed material intelligence: “the convergence of a large number of genres and social niches on a common, underlying representative form.”15 My historical argument syncs up with diSessa’s conceptual one: chapter 2 described programming as a material intelligence, but because the ability to program is not yet generalized or universal, programming is only on the cusp of literacy.

I also draw on theories of centralized, bureaucratic governance to tell the parallel historical narratives in this chapter and the next. Effective governance of large areas relies on efficient communication, as Harold Innis writes in his influential Empire and Communications.16 As the geographic area and human population of empires increase, the demand for efficient information management increases as well. Innis’s work has been subject to critique for technological determinism and its simplification of the conflicting forces that apply to the centralization and distribution of government. However, his framework provides a useful way to think about communication and governance: bureaucracy relies on a degree of standardization in communication, especially as the area governed expands and the amount of information about subjects increases and diversifies. Since Innis, scholars such as Jack Goody, Walter Ong, James Beniger, and Ben Kafka have argued that modern bureaucratic governments build on sophisticated technologies of communication, specifically writing. Drawing on Max Weber, Beniger describes the system of bureaucratic control as rationalization—the limiting of information to be processed by a central government. Rationalization becomes necessary in eras when governments are faced with what he calls “crises of control,” when “the social processing of material flows threaten[s] to exceed in both volume and speed the system’s capacity to contain them.”17 Crises of control prompt shifts in the ways that information is managed and distributed. The advantages of greater standardization, translation across media,18 and “intelligibility at a distance”19 are some reasons centralized bureaucracies tasked with coordinating communication at a distance implemented writing and code to support their information-processing requirements.

Beniger focuses on the “control revolution” of the nineteenth century, which brought us train schedules and time zones as information technologies to manage the “control crisis” of that time. But we can trace his idea of a control revolution further back to the use of writing in the eleventh through thirteenth centuries to control the population of England.20 When the Normans took over England, they were outsiders, and traditional methods of social- and family-based law enforcement did not serve them well. Facing a control crisis, the Normans responded by depersonalizing and consolidating government through the technology of writing. They first undertook an ambitious census to catalog the people and land. Tax records, wills, and codified laws followed, and thereby brought writing into the lives of everyday citizens around the thirteenth century. The Norman control crisis was not so much a response to information demands from outside as an attempt to use an inscription technology to help control a population. The documentary innovations spurred by the Norman Conquest of 1066 not only created a lasting central government, but also familiarized people with the ways texts could record actions, make promises, and define their place in society. As medieval historians Clanchy and Stock both argue, this ubiquity of writing contributed to a “literate mentality” among late medieval English people.

Like the Norman invasion, World War II was a control crisis, one that prompted America, England, and Germany to explore computation as a new information technology. The complex battlefields created a profound need for information management, and both the Axis and Allies vigorously accelerated the construction of automatic tabulation machines. Firing tables, cryptography and code-breaking, international communication, and advanced precision weaponry all pushed centralized governments, especially in the United States, Germany, and England, toward code that might help machines more efficiently process this information. Developments in digital computers and programming in the 1940s were prefigured by earlier developments in analog computation that responded to information demands from industrialization and a growing and diversifying population in the United States. The alliances between the U.S. military and universities, which began with research during the war effort and were reinvigorated after the launch of Sputnik, accelerated innovation in computer technology and communication. The expense and high requirements for expertise kept computers out of the hands of most individuals until the 1980s, but by then code was already an infrastructural technology for government and business. Although they were first developed as a response to the control crisis of modern warfare, population growth, and greater information complexity, computers and programming became commonplace technologies for commercial and personal transactions.

During what we might think of as control crises, then, centralized governments adopted writing and programming because of their ability to organize, process, and record information. In the eras highlighted here, writing and programming began as management techniques responding to a flood of information and became infrastructural technologies. We can see this system of written documentation scaling into infrastructure in other eras. In Revolutionary France and in the nineteenth-century British Empire, paperwork was a technology to automate bureaucracy and government. Writing made the processing of individual claims to the state less about personal relations—picture a king hearing and adjudicating cases—and more about rules and proper process. As infrastructure, writing could then be used for individual communication. According to Clanchy, “lay literacy grew out of bureaucracy.”21 Several centuries later this lay literacy helped to create the demand for print.22

The following section traces these historical developments in writing. In the second half of the chapter, this history of writing helps us to speculate on what the burgeoning bureaucracy of code could mean for lay programming literacy and what demands it might prompt.

The Adoption of Writing in Medieval England

The general trajectory of the inscription technologies of writing and programming began with their adoption by large-scale bureaucratic institutions; the technologies then moved to commercial entities and finally to domestic spaces. The church and the state in medieval England were the first to take advantage of the affordances of writing on a large scale, using it to keep track of subjects and their activities. Infrastructural changes in information processing and communication followed these innovations. Following this trajectory, below I begin with the church and state and then trace the movement of writing through commercial and domestic applications.

Writing in Church and State

In the eleventh century, the triumphant Norman invaders of England struggled to control a vast and strange land. The previous means of decentralized governance through social mechanisms did not favor them as outsiders. To assert the authority of the central government, they looked to writing, beginning with an ambitious census of their new territory. Throughout the eleventh through thirteenth centuries, kings and their governments instituted reforms such as contracts and laws, which were codified with written texts. In this way, the Normans were able to create a bureaucracy based on writing, propelling England “from a sophisticated form of non-modern state, managed through social mechanisms, to a crude form of modern state, organized through administrative institutions.”23 As shires and towns were caught up in these bureaucratic texts, individuals began to feel the need to understand writing.24 By the end of the thirteenth century, English people were familiar with the functions and power of texts even though few could read or write.

The “social mechanisms” of government that greeted the Norman conquerors were based on a medieval concept of embodied memory. Established in the Greek and Roman rhetorical tradition (e.g., the Rhetorica ad Herennium, Cicero’s De Oratore, Aristotle) and revived in the late medieval period,25 embodied memory was tied to physical actions and visual aids. As historians Mary Carruthers and Jan Ziolkowski argue, memory was spatial rather than temporal, as we now think of it.26 The medieval tactic of the memory palace, borrowed from classical tradition, best exemplifies this spatial approach to memory. By employing the spatial organization of a palace, one could memorize lengthy texts or concepts, attaching passages or ideas to rooms and imagining walking through them. For example, influential twelfth-century theologian and philosopher Hugh of St. Victor claimed that concepts should be attached to facial expressions, gestures, and physical locations in order to be memorized.27 Memories could be codified in pictures as well—preferably an image of no more than one page so that it could be envisioned as a whole.28 Memories could also be attached to physical performance. The physical ritual of “beating the bounds,” practiced in Anglo-Saxon times, was performed every few years to remind parishioners of property boundaries. In a time before maps were common, or at least before most people could read maps, groups of people would perambulate property lines; at key markers of boundaries, they would beat trees or rocks or even young boys among the party. By literally impressing these physical bodies, they impressed the location of boundaries in people’s memories. Embodied traditions of memory were also demonstrated in the York Cycle and other medieval plays, according to performance historian Jill Stevenson. These plays were performed in specific locations in order to enrich the associations of these places with religious themes and to help townspeople remember the didactic themes of the play.29

This physical, embodied approach may have been effective for remembering, but to the Normans it had two distinct disadvantages. First, it could not scale up to the level that a powerful centralized government required: regardless of how large his mental palace was, a king could not remember all of his subjects. Second, it was difficult for outsiders to infiltrate. When the government forced the codification of land exchanges in writing in the eleventh century, the Norman outsiders were able to assert control and redraw property boundaries more easily with this writing than if they had waited for embodied memories to fade. Rather than once again pulling out their swords, they could massage documents to reflect new distributions of property. The Norman government’s increased use of records and texts eroded the embodied approach to memory, and in this way a “crude form of modern state” was born—bureaucracy rather than direct rule. Writing began to supplant certain functions of human memory in cataloging and ruling people. For this reason, Clanchy calls this the shift “from memory to written record.”

Many activities once impressed in human memory and social relations became codified in writing. Government documents proliferated in the twelfth century, or so the increased use of sealing wax suggests. The government was responding to a growing population, but was also relying more on writing to exercise its authority.30 In the mid-twelfth century, for example, King Henry II displaced the Anglo-Saxon ruling style of personally settling disputes with “a system of standardized writs to automate and depersonalize the legal process.”31 Stock observes that during this time, “a complex set of human relations was eventually reduced to a body of normative legislation.”32 Feudal relationships established personally between lord and vassal were increasingly formalized in the eleventh through thirteenth centuries.33 English common law became codified during this period as well, and in the process it homogenized and calcified certain social customs in writing.34 Written wills were first documented in London in 1258, and only a generation later the oral witnesses to wills were no longer regularly recorded, indicating a complete transition from human to written testimony in only a few decades.35 Edward I’s quo warranto proceedings in the late thirteenth century forced landowners to prove “by what [written] warrant” they held their land, effectively displacing previous memory-based systems of establishing property ownership.36

Reflecting this shift from embodied to textual memory, the term deed began in the thirteenth century to refer not just to an action (e.g., a deed well done) but also to a written legal contract (e.g., a deed to a house)—suggesting that people understood actions could be carried out in text.37 A profound example of this shift from memory to written record is the change in how the legal system coped with historical information. Customarily, the court would limit the events that could be proved in litigation to what had happened since the most recent coronation, as an acknowledgment that memory before that date was unreliable. However, Richard I’s coronation date, September 3, 1189, served this referential role for the whole of the Middle Ages, indicating that documentation had surpassed human memory as the measure for legal proof. Thus, Clanchy argues that this date “marked the formal beginning of the era of official memory” and the end of government reliance on mortal memory.38 In these ways, writing depersonalized and shored up the authority of the state.

The Catholic Church preceded the crown in many documentary innovations. Just as the crown established certain security measures for documents, the papacy began to recognize the power of scribes and to require notaries and witnesses to documents.39 In the eleventh century, the church implemented several major changes that all served to standardize documents and make them more accessible.40 The first complete papal register that survives was completed in the eleventh century by Pope Gregory VII, who had a profound influence on canon law and believed that written law should constitute the basis of ecclesiastical administration.41 In the late twelfth century, the papacy adopted a more consistent archival policy for Vatican documents.42 Extensive church records were established by the Archbishop of Canterbury, Hubert Walter, who later went on to perform the same recordkeeping for the crown.43 Dominican friars had written a concordance of the Bible by 1239.44 The church had spiritual as well as bureaucratic uses for documents: it communicated—and ex-communicated—via written papal bulls.45

Perhaps the most symbolic and well-known example of this shift to written record in England at this time was the Domesday Book, a written census commissioned by William the Conqueror in 1085. Anthropologist Jack Goody notes that, historically, written censuses have been critical for redistributive economies to keep track of people so they could pay taxes to a central authority, as in the temple and state in ancient Mesopotamia and Egypt.46 In these times and places, as well as the one I focus on here, centralized control developed alongside writing, indicating a complex feedback loop between the two.47 William’s census attempted to codify property ownership in eleventh-century England by collecting testimonies from more than 60,000 people across the countryside. These oral testimonies were collected in vernacular languages from juries and then translated into Latin by scribes and recorded in the Domesday Book. Clanchy writes, “The jurors’ verdicts, which had been oral and ephemeral in the vernacular, were converted through the skill of royal scribes into a Latin text that was durable and searchable.” The reckoning of lands and assets that the Norman census undertook was called the “Domesday Book,” echoing the Christian “Doomsday” because “it seemed comparable in its terrifying strictness with the Last Judgement at the end of time.”48 While laypeople considered it a powerful form of judgment, for the crown it was a dramatic way to get people under the control of written law. Success using written charters to break the cycles of land inheritance and exchange may have led the Normans to think that this comprehensive written census would establish their governance more firmly.49 In the royal treasury, where the book physically resided, it was known as the liber judiciarius, or “the judicial book.” Its Latin title invoked Roman and papal law as well as the authority of those two powerful traditions.50 And in this “dual process of vernacular inquiry and Latin record-making,” the Domesday Book symbolized this movement from memory to written record in post-conquest England.51

The Domesday Book, which translated vernacular testimony into Latin record, was a largely symbolic document for the Norman government. William’s ambition was frustrated by the practical difficulties of recording so many minute details. Hence, the Domesday Book is incomplete and very inconsistent across regions.52 It was not a practical document at the time of its creation, but, along with its surveying process, it was a way to associate writing with a form of royal power.53 Even more than its practical application for taxation, Goody claims that a census can “represent the penetration of the state into the domestic life of its subjects.”54 In chapter 4, we will see this symbolic power of the Domesday Book echoed in the symbolic power of mainframe computers in the 1950s: the huge devices had few applications but nevertheless affected public perception of computing.

Writing, from Symbolic to Practical

The symbolic power of writing in the Domesday Book translated to practical purposes as the government learned to make, store, and retrieve records more effectively.

In eleventh-century England, government documents had been essentially treated as “special objects treasured in shrines,” rather than as records.55 But by the end of the twelfth century, the Exchequer had organized these documents by creating centralized treasury archives.56 Documents produced and held by the Exchequer then became critical for the collection and redistribution of taxes.57 Although the archives may have been at first unusable for retrieval, they indicated the increasing value placed on written records.58 New security measures for these documents underscore this importance: copies in triplicate, and locks and keys held by multiple people.59 By 1300, Edward I wanted documents to be retrievable for his review at any point, which meant that they needed to be indexed and organized effectively.60 This move suggested a more practical approach to documents—writing for government use, rather than for symbolic power. Along with their increased quantity, the shift from symbolic to practical treatment of the Exchequer’s and other royal documents across the eleventh through thirteenth centuries in England suggests an increasingly important role for them in governance.

In order for writing to transition from a specialized skill to a literacy, it needed to spread beyond a specialized class of citizens—beyond just clergy or gentlemen, for instance. As Clanchy argues, before literacy could spread beyond these specialized classes, “literate habits and assumptions, comprising a literate mentality, had to take root in diverse social groups and areas of activity.”61 When writing is controlled by a small class of people in this way, Goody calls the uses of literacy in that society “restricted.”62 That is, writing is specialized rather than generalized—and in the terms of this book, it is still a material intelligence, not yet part of literacy.63 But traces of the beginning of writing literacy can be found in the latter part of the period we survey here. The move from the Domesday Book in the eleventh century to thirteenth-century deeds and other functional documents in England, together with the transition from writing as symbolic to writing as practical, points to a developing “literate mentality” and precedes the general spread of literacy across different classes of people. As people began to treat texts as practical rather than sacred, and as they brought these texts into domestic spaces, we can see people begin to inhabit the “literate habits and assumptions” that Clanchy describes.

The circulation of documents in everyday citizens’ lives was key to this transition. The government was the primary producer, user, and keeper of documents until at least the thirteenth century.64 But by the thirteenth century, the crown required written deeds to prove the legitimacy of land transactions, written responses to censuses, and written evidence and arguments in court. Individuals and localities often had the responsibility of issuing these documents, as well as certifying, storing, and keeping track of them.65 The proliferation of documents and bureaucracy meant that the government became more dependent on literates to carry out its functions.66 Partly due to the state’s attempts to codify financial and land transactions, rural estate managers felt economic pressures to keep records for themselves as well.67 For instance, thirteenth-century documents exist for such mundane transactions as the purchase of livestock.68 Stock notes that the areas of human life subject to documents at this time were limited, but significant: “birth and death, baptism and marriage, initiation, terms of service, transfers of property, and a small number of issues in public and private law.”69 Writing at this time had become useful not only for keeping track of government finances and taxes but also for small-scale accounting and personal records.

Medieval library policies are another reflection of this shift from texts as specialized or sacred to practical. Eleventh-century librarians supervised the borrowing of books once a year; each monk had one book to read for the whole year and exchanged it on a particular date under the librarian’s supervision. The Dominican approach in the thirteenth century reflects a much more modern concept of libraries: books should be ready to hand and multiple.70 The innovation of portable books, used especially by peregrinating Dominican friars, also suggests a more practical attitude toward texts.71 This difference is critical: where monks once ruminated over one cluster of ideas, they could then peruse many portable books at once. The former scenario encourages deep reading and reverence for a particular text, whereas the latter allows for greater scrutiny of texts, as readers can bring together more wide-ranging arguments.72 For Clanchy, “The difference in approach towards writing [in these two modes] … is so fundamental that to use the same term ‘literate’ to describe them both is misleading.”73 In later eras, historians have cited this attitudinal shift resulting from the availability of multiple, juxtaposable texts as a cause for the Enlightenment in Europe74 as well as the dawn of mass literacy in the northeastern United States.75 While these claims have been accused of overstatement,76 at the very least the treatment of texts as reference material rather than as sacred seems to have helped create a more practical use for them.

Concordant with their new patterns of circulation, texts could also be found in new places by the thirteenth century. In particular, religious texts moved from monasteries into homes; the domestication of the word coincided with the domestication of the Word. This move was important in at least two ways: first, it made texts more accessible to laywomen,77 and second, it began to integrate texts into the everyday lives of people. Women were less likely than men to have encountered texts in government and commercial transactions, and this move to domestic spaces made texts available to women, especially those of higher classes who could afford them. We know little about literacy rates among women in this period, although there are some references to nuns as literate.78 We do know that women were more often readers than writers; for spiritual enlightenment, both women and men in higher classes were expected to read in Latin, French, and English, as well as to be able to interpret religious images.79

Regardless of the degree to which they could read them, however, elite women in the thirteenth century commissioned Books of Hours, often with illustrations and elaborate, jewel-encrusted covers as indicators of wealth.80 These Books of Hours not only made books more accessible to women but also brought a culture of reading into the home. This culture of reading was then passed on to children in the household, paving the way for more extensive cultures of literacy in subsequent generations. Literacy was often learned in the home, from mothers,81 a pattern repeated elsewhere in history.82 Clanchy goes so far as to argue that “the ‘domestication’ of ecclesiastical books by great ladies, together with the ambitions of mothers of all social classes for their children, were the foundations on which the growth of literacy in fourteenth- and fifteenth-century Europe were constructed.”83 As we will see in the second half of this chapter, the domestication of computers also set the stage for the spread of programming. When books and computers became available for home use, people could interact with them and find ways to fit them into their lives. They became personalized.

Books of Hours were both objects and texts. They contained writing as well as images and memory maps to help individuals read and retain their religious import. They were constructed of animal skin, containing annotations and images sewn in by previous owners. All texts are material of course, but the physical traces of previous owners and personal modifications heighten the palpability of Books of Hours, according to Jill Stevenson. In a particular text, the Pavement Hours, she points out sewn-in insertions of images of a female saint reading a book and of Saint Christopher (the protector of travelers and also associated with merchants), which both mirror the user/reader of the book and “literally thread the book’s owner into each prayer’s use.”84 As objects, Books of Hours could be physically present during worship, functioning much like souvenirs to help people remember key events.85 Their material qualities and the spaces they inhabited meant that Books of Hours helped to bridge the gaps between memory and writing and between sacred and practical literacy. In the Middle Ages, traditions of reading aloud and illuminated texts served as bridges between the oral and written, aiding those who depended on writing but were unable to read or write themselves. In this way, writing subtly wove itself into existing patterns of orality and images.86

Other Janus-faced artifacts witnessed this transition from a memory-based to a document-based society. For example, knives with inscriptions that date from the twelfth century connected material memory to new documentary methods of recording land exchanges. Ironically, the Normans appear to have imported this tactic of material exchange to signify land transactions, alongside the more established documentary practices they brought with them. A memorable (though possibly apocryphal) story of William the Conqueror from 1069 has him dramatically brandishing a knife during a land exchange and saying, “That’s the way land ought to be given,” alluding to the way that he acquired English land by force several years earlier. The document that records this transaction says, “‘By this evident sign, this gift is made by the testimony of many nobles standing at the king’s side.’” Pointing to the material object and performance as evidence, witnesses could attest from memory that the transaction had occurred.87

The signatory seal was another material symbol of this shift from memory to written record. To participate in a new world of written contracts and deeds, individuals needed to learn to sign their names to indicate their acquiescence to the contracts. Many people learned to read during this period, although those who could read could not necessarily write because the medieval technology of writing—the paper, ink, writing instruments, and scripts—was difficult to master.88 Those who could not sign their names could use seals. Once possessed only by kings and nobles, the seal became a commonplace possession even of serfs, who were required by statute to own one by the end of the thirteenth century.89 For this reason, Clanchy calls the signatory seal “the harbinger of literacy, as it was the device which brought literate modes even into remote villages.”90

Religious representations of reading and writing also imply the growing power of text at this time. Prior to the thirteenth century, the Virgin Mary is generally shown spinning at the Annunciation. Afterward, she is often shown reading piously when the angels interrupt her. This representation served as a model for contemporary women who owned the Books of Hours in which she was depicted.91 As in other eras, the virtue of reading contrasts with the dangers of writing. Clanchy notes from religious depictions that “the devil … became literate in the thirteenth century; he also established a hellish bureaucracy to match that of the king or pope.” In light of the thirteenth-century Inquisition, which depended on written depositions, this depiction was particularly sinister.92 Other images show devils recording mispronounced prayers in church and using those deformed words for ill.93 As this portrayal of the devil suggests, writing’s relationship to truth caused some anxiety among medieval people, a theme we pick up on in the next chapter.

Images combined with text, the material and textual Books of Hours, inscribed knives, and signatory seals served as “boundary objects” between a society organized by memory and one organized by documents. According to Geoffrey Bowker and Susan Leigh Star, “boundary objects” are those that make infrastructures legible to each other, especially during times of transition. They serve as translators across contexts.94 Immaterial practices such as reading aloud also bridged gaps between oral and literate culture.95 Through these boundary objects and other means, literates and nonliterates alike could participate in the burgeoning written culture. But by the end of the thirteenth century, this was no longer an option: bureaucratic initiatives by the state and church had driven written documents into the very life cycle of English people—their birth, marriage, and death records as well as their religious and domestic spaces.

Prior to the eleventh century, when writing was only occasional and not central to business or legal transactions, the ability to read and write was a craft not so different from the ability to carve wood or make pottery. Scribes or clerks could be employed when necessary, but business and governance were generally conducted through personal contact. The concept of literacy did not exist because reading and writing were highly specialized skills. But as writing became infrastructural in the early fourteenth century—that is, when writing became so important that institutions such as government and commerce began to depend on it—texts were no longer set apart as special, and literacy was no longer a specialized skill. As texts became more embedded in the general activities of everyday life, they prefigured another transition, which we will explore in more detail in the next chapter: the transition toward a literate mentality. This literate mentality signals not widespread literacy, but a mindset about the world that is shaped by the ubiquity of texts. The pervasive technology of writing affected methods of understanding the world, ways of presenting the self, and understandings of the relationship between humans and nature—similar to the influence of the technology of computation once it became more widespread.

The Adoption of Computation in Twentieth-Century America

In medieval England, the “twin bureaucracies” of church and state mobilized over several centuries to develop sophisticated documentation systems. In twentieth-century America, what we might think of as the “triplet bureaucracies” of government, industry, and university mobilized to further their computational information-management systems. We moved from governments of writing to governments of writing and computation. Whereas religious and governmental bureaucracy drove the spread of writing in medieval England, it was primarily national defense that encouraged computation to spread in the United States. During World War II, the U.S. government experienced a Beniger-style “control crisis”96; accurate weapons tables, effective espionage, and compressed time frames pushed human and analog calculators to their limits. Wartime budgets were the primary funders of research in the early years of computers, and the American government, in particular the military, was the greatest user of computers during the 1940s and 1950s.97 After computers became indispensable to governance and defense in the 1950s, corporations such as American Airlines discovered they could get a competitive edge using computation. Computers were miniaturized and personalized beginning in the 1970s. From the 1980s on, they began to work their way into everyday life in America: households played computer games, kept financial records in spreadsheets, and used modems and other hardware to connect to others through networks. We might think of this moment as parallel to what Clanchy called the shift “from memory to written record”—we have experienced a shift from written to computational record. But just as texts never completely displaced orality in everyday life, computation has not superseded writing across the board. Instead, we live in a world where written language and code interact in complicated ways.

The second half of this chapter focuses on computation and follows a trajectory similar to the one we just traced with writing: from large-scale, centralized uses to small-scale, domestic uses. As with writing, the big, initial investments in computer technology were made by centralized governments; afterward, computers were taken up by businesses and universities, and finally, people invited them into their homes and daily lives. Massive, expensive, batch-processed machines are necessarily tools for dedicated specialists, as John Kemeny and Thomas Kurtz observed when they set out to make programming accessible to Dartmouth undergraduates.98 When machines are inexpensive and portable, and when one doesn’t have to be an engineer to use them, the kinds of people who can learn to program change. Each generation of computer called for a different kind of programming; by following the hardware as it became more accessible and practical, we can see a development parallel to the history of programming languages we encountered in chapter 2. Only when computation could happen on cheap, small, and personal devices did a concept of computational or coding literacy become possible, so this half of the chapter necessarily pairs the institutional forces supporting the material intelligence of programming with the computational technologies they developed to do so. It therefore focuses on a history of hardware and infrastructure development as a necessary precursor to computational literacy. While the quest for a perfect language for novices has been ongoing since BASIC, this movement of computational devices into homes and our everyday lives removed many of the material barriers to computational literacy.

Government Spurs Computational Research

Although World War II was the most immediate cause of the development of code and computers in the United States, the information pressures of the nineteenth-century census foreshadowed those of World War II. Just as the Domesday Book attempted to catalog the newly conquered English population, the American census helped to recruit soldiers and to tax citizens of the new United States. The first accounting of the new republic’s population, conducted in 1790, required census-takers to personally visit every household in the new nation, taking note of all residents. But the task of visiting and collecting data from every household was sustainable only for a smaller U.S. population, in the same way that personal relationships could not scale up for a more centralized government in the Middle Ages. As the nation grew commensurately with its ambitions for data, human-implemented writing and mathematics reached their limit. Thus, the census once again became an impetus for a more sophisticated literacy technology. The Domesday Book largely took advantage of a technology already present, but the American census prompted the development of a brand-new technology: automated computation.

As the American census became more ambitious and larger in scale during the nineteenth century, the government looked to more efficient data-processing techniques to collect and make use of the information.99 The 1830 census implemented standardized, printed forms for collecting data to simplify the tabulating process. Streamlined forms made more cataloging possible: “social statistics” (e.g., information about jobs and class) and information from corporations were then collected.100 But the standardization of data collection could only aid the tabulation process so much. Struggling with data analysis, the Census Office implemented a rudimentary tallying machine in 1870. Even with this machine, by 1880 the population had grown so much that the calculations of its collected data took most of the decade.101

Anticipating the onslaught of information that would come from the 1890 census, Herman Hollerith, a mechanical engineer and statistician who worked for the Census Office during the 1880 census, devised an electromechanical tabulating machine. The “Hollerith machine” processed cards with various data points punched out, and when hand-fed by operators it tabulated data much faster than statisticians could. Variations of the Hollerith machine were used from the 1890 through the 1940 census. By 1940, there were many analog calculating machines designed to solve specific classes of mathematical problems such as those the census presented, but there existed no general-purpose computer as either Charles Babbage or Alan Turing had imagined. By 1950, the U.S. Census Office was one of the first customers of a commercially available digital computer—the UNIVAC I.102 As this dramatic change in technology for the census suggests, between 1940 and 1950 research in automated computation was greatly accelerated.
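To make concrete the kind of clerical labor such tabulation automated, consider the following minimal sketch in Python. It is an illustration only, and its fields are invented rather than drawn from Hollerith’s actual card layout: each record stands in for a punched card, and tabulation reduces to counting how many cards share a value in a given field.

from collections import Counter

# Hypothetical punched-card records: each "card" carries a few categorical
# fields, loosely analogous to (but not modeled on) Hollerith's card format.
cards = [
    {"state": "NY", "occupation": "farmer"},
    {"state": "NY", "occupation": "clerk"},
    {"state": "PA", "occupation": "farmer"},
]

def tabulate(cards, field):
    # Count how many cards carry each value in the given field, the kind of
    # tally once produced by feeding cards through the machine one by one.
    return Counter(card[field] for card in cards)

print(tabulate(cards, "state"))       # Counter({'NY': 2, 'PA': 1})
print(tabulate(cards, "occupation"))  # Counter({'farmer': 2, 'clerk': 1})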

World War II was an information war; it pushed much of the Western world into a “control crisis” that necessitated faster and more efficient processing of information. Specifically, the acute need for weapons and strategic data tabulation led to advances in computation on both sides of the conflict. In Germany, isolated from computer developments elsewhere, Konrad Zuse developed a series of computers beginning in the late 1930s. The most important of these was the Z3, which performed sophisticated arithmetic and was in operation from 1941 to 1943, when it was destroyed in an Allied raid on Berlin.103 The British, including Alan Turing, focused on cryptography and used computation to crack the code used by the Germans to transmit critical information during the war.104 In 1944, they completed the code-breaking Colossus and put ten of these computers in operation at Bletchley Park, reducing the time to break codes from weeks to hours. The male cryptographers and Women’s Royal Naval Service operators (Wrens) worked together to decode messages from the German Lorenz machine. In advance of D-Day, they revealed that Hitler was unaware of the Allied plans, a key bit of knowledge that General Eisenhower claimed may have shortened the war significantly.105 The Colossus project was classified until the 1970s, however, and so it and many of its operators did not significantly influence or participate in the later development of computers.106

The Americans, spurred by Vannevar Bush as head of the U.S. Office of Scientific Research and Development during World War II, were interested in increasing the speed and power of tabulating machines, particularly for calculating firing and navigation tables. Firing tables were the limiting factor in advances in artillery technology at the time: each new gun required a firing table for the gunner to account for ammunition trajectories and moving targets, and each firing table took a hundred people a month to calculate.107 Analog computers were occasionally used for this tabulation, although the pressure for more rapid results drove research on electronic, digital computation. The Mark I (1944), designed at Harvard and built by IBM to perform calculations for the Navy, was an automatic and electromechanical computer that was driven by a 50-foot rotating shaft.108 John von Neumann used it for atomic bomb calculations during the war. The ENIAC (1946) was the first successful general-purpose electronic computer. Underwritten by the U.S. government during World War II, the ENIAC was developed at the University of Pennsylvania Moore School, exemplifying the collaboration between government and universities in this phase of computational research. Although the computer was finished a few months too late to help the war effort, its underlying research paved the way for subsequent developments in computers.109

One problem with the ENIAC was the time it took to reprogram it: it needed to be physically reconfigured for every new problem it solved. Subsequently, John von Neumann and the Moore School team worked on the concept of a “stored program computer”—a computer that would store its programs in the same way that it stored its data. As discussed in chapter 2, this design allowed computers to be general purpose machines because they could be reprogrammed without being rewired. Crucially, historians Martin Campbell-Kelly and William Aspray mark the completion in 1949 of the EDSAC, the first successful stored program computer, as “the dawn of the computer age.”110 Beyond stored programs, countless other developments occurred in the wake of World War II: magnetic storage, transistors, direct keyboard input, compilers, break points for debugging, and programming languages. Like innovations in late medieval England that allowed the government to store and access documents more easily, these material improvements made computers more useful and easier for people to work with.
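The conceptual leap can be illustrated with a deliberately toy sketch in Python. The two-instruction machine below is invented for illustration and corresponds to no historical design; the point is simply that because the program sits in the same memory as the data, changing the computation means changing memory contents rather than rewiring hardware.

def run(memory):
    # Execute a toy program stored in the same list that holds its data.
    pc = 0  # program counter: the address of the next instruction
    while True:
        op, a, b, dest = memory[pc]  # fetch an instruction from memory
        if op == "add":
            memory[dest] = memory[a] + memory[b]  # operate on data in memory
            pc += 1
        elif op == "halt":
            return memory

# Cells 0-1 hold instructions; cells 2-4 hold data.
memory = [
    ("add", 2, 3, 4),   # cell 0: add the data in cells 2 and 3, store in cell 4
    ("halt", 0, 0, 0),  # cell 1: stop
    20, 22, None,       # cells 2-4: data
]
print(run(memory)[4])   # prints 42; a new program is just new memory contents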

The Cold War of the 1950s led to further development of computer technology within government, industry, and universities, embedding the new technology more deeply within the bureaucratic systems of each of these institutions. The SAGE (Semi-Automatic Ground Environment) air defense system, begun in the 1950s, illustrates the ways that military forces encouraged the spread of code in the infrastructure of American government after the war. The SAGE defense project, an ambitious, multisite, integrated computational system designed to defend the United States against a potential Soviet air attack, was the most extensive software project of the 1950s and, like the research on computation during World War II, it relied on the merged efforts of industry, government, and universities in its development. Postwar computers were batch-controlled, a method that was useless for feedback during real-time flight and combat situations. MIT and IBM had worked on a computer (Project Whirlwind) that would help give real-time feedback to Navy bombers, and this real-time technology was integrated into the SAGE project.111 SAGE was significant for its level of complexity; it combined communications, computation, and weaponry to detect, evaluate, and intercept a potential airborne attack (figures 3.1 and 3.2).


Figure 3.1 The SAGE system linked radar towers and fed the information back to a SAGE Direction Center, where it would be processed by an AN/FSQ-7 computer. SAGE simulation by Chester Beals, 2009. Reprinted with permission of MIT Lincoln Laboratory, Lexington, Massachusetts.


Figure 3.2 If an enemy aircraft was detected, the SAGE Direction Center would designate air bases to launch counterattacks and monitor the aircraft’s position. If the aircraft got through the initial counterattacks, the Direction Center would trigger Nike surface to air missiles to launch. SAGE simulation by Chester Beals, 2009. Reprinted with permission of MIT Lincoln Laboratory, Lexington, Massachusetts.

Although developments in missile technology made it an imperfect command-and-control defense system,112 SAGE’s influence on computing technology was tremendous, particularly in the way that it served as a training ground for programmers.113 In 1955, fewer than 200 people in the country could do the kind of programming necessary to build a large-scale system like SAGE. By 1960, the System Development Corporation (SDC), the RAND spinoff that did software development for SAGE, had 3,500 programmers working on SAGE and other Department of Defense projects, and 4,000 had already left to join private industry.114 The nonprofit status of SDC allowed it to function as a kind of “university for programmers”: as a rule, SDC didn’t oppose the recruitment of its personnel, which allowed programmers to diffuse throughout commercial industry in the 1950s and 1960s.115 The massive scope and manpower of SAGE—a total of 1,800 programmer-years were spent on the project—meant that programmers who trained on SAGE were present on every other large software project in the 1960s and 1970s.116 Along with personnel, technological and organizational innovations from SAGE were also diffused throughout the early computing industry.117 Beyond its influence on other computational projects and personnel in the 1960s and 1970s, SAGE also marked a significant step in the government’s infrastructural reliance on computer programming. The many programmers of the SAGE defense system wrote code that quite literally protected the United States and Canada. The SAGE system was operational at multiple sites in the United States and Canada from 1963 until the early 1980s, when it was finally dismantled.118

While Soviet bombers and missiles posed a threat to American physical safety and led to the SAGE defense system, the 1957 launch of Sputnik symbolized the Soviet threat to American scientific and technological prowess. To combat this intellectual threat, the Advanced Research Projects Agency (ARPA) was established in 1958 within the U.S. Department of Defense and allocated a budget from defense funding lines in order to promote scientific and technological “blue-sky” research.119 One of the new agency’s projects was to facilitate communication and information exchange and to share computational resources across research and government institutions. Computers at the time ran unique, specialized operating systems and had widely varying interfaces, so connecting them across a standardized network was a challenge.120 J. C. R. Licklider, who directed the ARPA division working on networking, imagined an “intergalactic network” that would bring humans and computers together in symbiosis121 by relying on standardized “message processors.”122 Building on his vision, the networking efforts of ARPA led eventually to the development of the ARPANET, first implemented in 1969. At the time, ARPANET, which relied on protocols and standardized code to exchange packets of information between nodes across a network, had no commercial application. But it was the precursor to the Internet, and its basic packet-switching approach remains key to the network architecture that underlies our current World Wide Web.
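
The core idea of packet switching can be suggested with a minimal sketch, written here in Python. It is offered as an analogy only, not as ARPANET's actual protocol, which involved dedicated message processors and far more elaborate conventions; the message text and packet size below are invented. A message is cut into numbered packets that can travel independently and arrive out of order, and the receiving node reassembles them by sequence number.

# Packet switching in miniature (illustrative only, not the ARPANET protocol).
import random

def to_packets(message, size=8):
    # Cut the message into (sequence number, chunk) pairs.
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    # Sort by sequence number and stitch the chunks back together.
    return "".join(chunk for _, chunk in sorted(packets))

message = "LOGIN REQUEST FROM NODE A TO NODE B"
packets = to_packets(message)
random.shuffle(packets)                 # stand-in for packets taking different routes
print(reassemble(packets) == message)   # True: the message survives the trip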

Directly or indirectly, much of the current technology and the human expertise for programming can be traced back to midcentury, large-scale American government projects such as ARPANET, SAGE, and ENIAC. Several generations of programmers learned their trade on major government-funded software projects in the 1950s and 1960s, such as SAGE and ARPANET, and then circulated out into large commercial projects with IBM, Remington Rand, or smaller companies, disseminating their knowledge of code writing further. As historian Kenneth Flamm points out, government funding tends to support basic research and infrastructure such as ARPANET, which then has long-term benefits for industry.123 Post-Sputnik government funding allowed IBM to set up an official research arm in 1961.124 In 1965, government funding supplied roughly half of all computer research and development budgets in the United States.125 Government funding for computer research continued to be significant in the 1960s and 1970s but surged again in the 1980s as Cold War tensions intensified and the United States maneuvered against the Soviets on the battlegrounds of space exploration, technological research, education, and defense.126

More than just funding for large projects, the organizational structures of government helped to support computational research. For example, ARPANET’s protocol to connect disparate computers across networks may have been possible only with the muscle of centralized institutions such as the U.S. government to enforce it. ARPA’s famously short chain of command under J. C. R. Licklider in the early 1960s allowed it to pursue radical projects such as ARPANET.127 This organizational structure was then emulated by Licklider’s successor, Robert Taylor, when he went to Xerox PARC in the 1970s to influence another round of computational research. Centralized organization helped to facilitate the big projects of midcentury computation, but computation also emulated this structure. The traditional bureaucratic structure of centralized government was an analog precursor to the computer, argues Jon Agar. For Agar, the British Empire’s embrace of the “symbolic abstraction of writing and the surveillance of facts” led to the concept of the computer as a means to systematize government bureaucracy.128 Government funding, organization, and enforcement helped boost the technologies of code and computation, just as they had boosted writing. In turn, adoption of these technologies allowed governments to scale up and systematize information processing through written or computational bureaucracy. This systematization was incomplete and problematic in some ways, as I describe in chapter 4, but it did lead to more pervasive and embedded uses of writing or computation in the everyday lives of citizens.

Commerce Embraces Computers and Code

In the latter half of the twentieth century, industry followed the U.S. government in adopting computation to handle information-processing tasks that it had been (barely) managing through other means. Several computational projects from the 1960s through the 1980s serve as illustrations: the influential SABRE airline reservation system, IBM’s problematic and innovative OS/360 operating system, and the commercially popular VisiCalc spreadsheet program for the microcomputer. SABRE, one of the first major nongovernmental software projects and built by some of the programmers of SAGE, showed how computation could manage the growing information problem of airline reservations. The OS/360 was IBM’s attempt to make an operating system that would run the same software across an entire line of computers. The project represented the burgeoning market for computers in commercial spaces as well as a response to increased demand for software functionality and reliability. Finally, VisiCalc was part of a broader set of developments that made computation necessary and useful for smaller-scale businesses. The calculating spreadsheet program was the first “killer app” for home computers; it also illustrated an expansion in the personal uses for commercial software. These software projects provide a snapshot of how computation filtered out from large and central information-management projects to small businesses and individuals.

Like air defense, the airline reservation process had grown in complexity in the postwar era. Technological advancements and increasing wealth led more people to choose travel by air; more travelers flew on more planes. However, American Airlines’ system to keep track of reservations had evolved very little from what had been first implemented in the 1920s, when plane travel was a luxury reserved for the few. Human agents kept track of flights and passengers on paper cards. Once airline travel became more popular and these agents could no longer share one file and one table, a giant board in a room full of agents displayed the available flights and seats. At this point, American Airlines’ manual reservation system hit a wall: one in every 12 reservations had errors.129 In the midst of this massive information-management problem in 1953, the president of American Airlines found himself on a flight sitting next to a high-ranking salesman from IBM who had been working on the SAGE defense project. When the two struck up a conversation, they realized the potential for collaboration on the airline reservation problem. The reservation problem, after all, was a command-and-control problem: a network of distributed agents needed to send and receive data from a central reservation system. By 1960, the first experimental airline reservation system, built by IBM and named Semi-Automated Business Research Environment (SABRE, inspired by SAGE), was operational. By 1964, all of American Airlines’ reservations were handled by the SABRE system. To keep up with the competition, Delta and Pan Am both contracted with IBM to implement their own reservation systems.130 Drawing on resources and knowledge generated by government defense funding, the SABRE project was critical to American Airlines and air travel more generally, but also to the general infrastructure of business that had begun to rely on air travel to move people from office to office in an increasingly nationalized corporate landscape.

The 1960s saw computers entering many more businesses and corporate applications; major corporations used computers, and software was devised for grocery warehousing, hotel and flight reservations, and other business contexts.131 Periodicals such as Datamation, Data Systems, and Data Processing, launched in the United States and United Kingdom in the late 1950s, aimed to explain computers to top management and help them choose systems to manage their businesses.132 Computing had been essentially a service industry in the 1950s, with the cost of software folded into the cost of hardware. This setup allowed IBM, the most established manufacturer of hardware, to dominate software as well. Because code per se wasn’t yet monetized, IBM’s SHARE program, launched in 1955 to help IBM clients share computer programs, operating systems, and procedures, took advantage of the company’s vast user base and connected otherwise competitive businesses.133 IBM’s SHARE program was later emulated by other manufacturers. As software techniques developed rapidly in the 1960s, software became a product separate from hardware. The first software company went public in 1960, and others followed with high capital investments that reflected the public’s faith in the booming industry.134

Despite the debut of lower-cost manufacturers and independent software houses, IBM had a huge first-mover advantage: the most extensive services and a massive, free, shared codebase. But it still faced market pressures.135 IBM sought to keep its edge in the growing market for computers in the 1960s by releasing an integrated hardware and software line. At the time, each computer model had a unique operating system and method of programming it. IBM’s System/360 was an impressive engineering feat: a full line of computers that could be similarly programmed, allowing businesses to upgrade computers without the huge expense of rewriting their software. The operating system that was slated to run the line of computers—the OS/360—pointed the way forward to more standardized computer interfaces. Its innovations, along with the infamous challenges encountered in its development, made the OS/360 the poster child for software development in the 1960s. Enterprise software projects such as the OS/360 were designed to solve one kind of information overload problem but created a new one: How does one manage the communication demands of a massive programming project with millions of lines of code and hundreds of programmers working together?136 As the project’s manager Frederick Brooks famously asserted, “there is no silver bullet” for managing software projects such as the OS/360; even with good management and tools, the information problems that software presents remain difficult.137

Despite the challenges of budgets and bugs, by the 1960s large businesses had begun to rely on software to organize their daily operations. The demand for sophisticated software and programmers to write it increased, but there were not enough programmers to write the software that was increasingly central to business. Large software projects like the OS/360 went over budget and missed deadlines. A “software crisis” emerged—the first of many.138 As a 1995 Scientific American article on the snafus of the baggage-routing software at the Denver International Airport points out, the software industry has been in perennial crisis since the 1960s.139 The fact that the 1960s launched the “software crisis”—that is, a critical shortage of programmers necessary to write the software demanded by industry—indicates the centrality of programming and programmers from that time forward.

Programmers in this era struggled to develop and learn programming languages that would be comprehensible to the computer yet still manageable within human capacities (see chapter 2 for a more detailed account of these trade-offs). New methods of programming were developed to help alleviate the crisis. The Garmisch Conference on Software Engineering in 1968 sought to reform software practices into predictable engineering protocols. “Structured design methodology,” the most widely adopted of the software engineering practices at the time, helped the programmer manage complexity by limiting his view of the code and gradually directing attention down to lower and lower levels of detail.140 Programs such as Autoflow helped to automate the software planning process. Compilers and the “automatic programming” approach, developed by Grace Hopper in the 1950s, contributed to this effort to make programming easier. This premium on programmer time echoes an earlier moment when scribe time was highly valuable, when pressure on writing led to time-saving innovations such as cursive.
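
A short sketch can suggest the flavor of that top-down discipline. The Python example below is mine, standing in for no particular 1960s project; the payroll task and the function names are invented for illustration. The top-level routine reads like a plan, and each step is pushed down into a subordinate routine that can be written and checked on its own, which is roughly the limited, level-by-level view that structured design promised.

# A sketch of top-down, structured decomposition (invented example).
def run_payroll(employees):
    # The top level states the plan; details live in the routines below.
    for employee in employees:
        gross = compute_gross_pay(employee)
        net = apply_deductions(gross)
        issue_payment(employee, net)

def compute_gross_pay(employee):
    return employee["hours"] * employee["rate"]

def apply_deductions(gross, tax_rate=0.2):
    return gross * (1 - tax_rate)

def issue_payment(employee, net):
    print(f"Pay {employee['name']}: ${net:.2f}")

run_payroll([{"name": "A. Jones", "hours": 40, "rate": 25.0}])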

Advertisements for computer programmers in the 1960s reflect this labor crisis; they called not for experienced programmers, but instead for people with particular skills and personality traits willing to take tests to indicate whether they would make good programmers.141 In 1966, 68% of employers used aptitude tests, and a cottage industry of training for these tests was born.142 These ads also sometimes appealed directly to women—another possible sign of a labor shortage. One 1963 ad encouraged women to become programmers, saying that they could “pep up” the office as well as be logical. Janet Abbate notes that this rhetoric suggests a low-skilled female programmer wouldn’t disrupt the regular hierarchy of a company.143 The software crisis threatened to give programmers leverage in business, but extensive recruiting tactics and the development of easier programming languages mitigated that power.

As programmers working in universities and industry developed languages to represent problems and solutions more effectively, and as technology dropped in price, more businesses could afford to integrate computers into their workflow. For smaller businesses, however, the benefits of computers were not necessarily worth the cost until the late 1970s. Cash registers had handled monetary transactions, calculators had dropped in price and increased in capacity enough to accommodate most mathematical needs, and typewriters were still the best available technology for word processing. But in 1979, the VisiCalc spreadsheet program changed the equation.144 Businesses that wrote invoices, calculated payroll, or had incoming and outgoing flows of resources—in other words, most of them—could then see the value of computers, and some began to integrate them into their workflow. They finally had a compelling reason to adopt computers and commercial software for the management of preexisting information problems.145 Consequently, computers began showing up in smaller businesses in the late 1970s, and companies that made lower-cost commercial machines competed intensely for business.146 Word processing programs were also in development at this time, although computer-based word processing did not become integral to businesses until the combination of cheaper printers and user-friendly programs like WordStar emerged in the early 1980s. These affordable computers and useful software packages brought the computer into the home as well.
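
Part of the spreadsheet's appeal was the instant "what-if" recalculation it offered. The short Python sketch below is a loose analogy rather than VisiCalc's actual design, and the cell names and figures are invented: some cells hold numbers, others hold formulas, and changing one input changes every dependent figure the next time the sheet is evaluated.

# A spreadsheet in miniature (illustrative analogy, not VisiCalc's design).
def value(cells, name):
    # A cell is either a number or a formula that computes from other cells.
    cell = cells[name]
    return cell(cells) if callable(cell) else cell

cells = {
    "B1": 1200.00,                                    # monthly sales
    "B2": 850.00,                                     # monthly costs
    "B3": lambda c: value(c, "B1") - value(c, "B2"),  # monthly profit
    "B4": lambda c: value(c, "B3") * 12,              # annualized profit
}

print(value(cells, "B4"))   # 4200.0
cells["B2"] = 900.00        # ask "what if costs rise?"
print(value(cells, "B4"))   # 3600.0: every dependent cell reflects the change

This kind of immediate answer to a "what if" question, rather than an afternoon of manual recalculation, was the value that small businesses could now see in the machine.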

Another commercial development in the 1970s brought computers to smaller businesses with significant consequences for commercial infrastructure: the Universal Product Code (UPC). The UPC symbol, a collectively designed and code-based label for retail products, streamlined sales and stock management but also forced retailers to invest in computers to scan and manage their goods. Larger retailers were more prepared to absorb these costs. This transition remade the industry by consolidating retailers and pushing small grocery stores out of business, but it also indicated something more profound about the computerization of everyday life, claim Campbell-Kelly et al.:

Alongside the physical movement of goods there is now a corresponding information flow that enables computers to track the movement of this merchandise. Thus the whole manufacturing-distribution complex has become increasingly integrated. … In a sense, there are two systems coexisting, one physical and one virtual. The virtual information system in the computers is a representation of the status of every object in the physical manufacturing and distribution environment—right down to an individual can of peas. This would have been possible without computers, but it would not have been economically feasible because the cost of collecting, managing, and accessing so much information would have been overwhelming.147

The UPC system remade sales through the way it symbolically merged physical and informational flows in commercial contexts, but it also made this virtual-material hybrid part of everyday life. Even more than with their airline reservations, people interacted with computers through their groceries.

Commerce helped to promote the development and adoption of writing in medieval England, from the centralized taxation system to rural estate managers, and in similar ways American businesses promoted the spread of computation. From the management of complex travel and global business to standardized operating systems to personal accounting, commercial programming was a fact of life by the 1980s, which further embedded code into the infrastructure of U.S. society. The trickle-down from government to industry to small businesses led next to the domestication of computers, which we investigate in the next section.

Computers Get Personal

Writing in the twelfth and thirteenth centuries moved from the church and government and began to touch individuals practically and personally through contracts, charters, writs, and other forms of documentation; similarly, computers emerged from universities and government and entered elementary schools and homes in the early 1980s. This move brought computation home, practically and personally, to many Americans. Prior to that point, computers were critical for certain sectors of the U.S. government and business, but their utility for individuals was not apparent. Computers were relegated to information-management situations such as censuses, wars, and large-scale corporate databases but were not a daily reality for most people. Had that continued to be the case, programming could have remained simply a specialized skill, operating tacitly—although still powerfully—in the background. But, instead, software has made its way into most Americans’ lives—our workplaces, communication methods, and social lives are saturated with the work of programmers. For writing, this domestication presaged the need to communicate with the technology—the need to be literate. What does it presage for computation? This is not yet known, but in this section we will look at the domestication of computational technologies with an eye to this parallel.

The seeds for personal interactions with computers were sown in the 1960s. At universities, students were sometimes exposed to computers and occasionally wrote their own punch-card routines. At Dartmouth, math professors John Kemeny and Thomas Kurtz developed BASIC, an accessible programming language, as well as the Dartmouth Time-Sharing System (DTSS), which made expensive computer time available to undergraduates. In the 1960s, many students at Dartmouth and at several other institutions, such as New York University’s School of Business, were taught BASIC and used the innovative DTSS.148 Both of these innovations proved highly influential for individual access to computation in later decades. While BASIC has been derided by computer scientists such as Edsger Dijkstra,149 it created an entirely new group of computer users “who could develop their own programs and for whom the computer was a personal-information tool.”150 As chapter 1 described, BASIC profoundly influenced the popular programming movement.

In terms of hardware, spaces such as Dymax, led by People’s Computer Company founder Bob Albrecht, made machines available to the public in a revolutionary spirit.151 Given the cost of computer time during the early 1970s, this was impressive and not particularly common. Until the mid-1970s, the expense and expertise required mostly relegated computers to government, university, and business applications. The development of microcomputers in this decade signaled a new era, bringing the computer not only to small businesses but also to hobbyists and other individuals.

We might mark the beginning of the minicomputer’s appeal to home users with the Altair 8800, famously announced in a 1975 Popular Electronics issue: “THE HOME COMPUTER IS HERE!” The editor’s introduction notes, “For many years, we’ve been reading about how computers will one day be a household item.” Finally, the Altair is here, a “within-pocketbook-reach sophisticated minicomputer system.” He compares it to another home machine people were familiar with: “Unlike a calculator—and we’re presenting an under-$90 scientific calculator in this issue, too—computers can make logical decisions.” Uses listed for the “home computer” (note that it is not yet called the personal computer) were as nebulous as they were various: it could be a home alarm, robot brain, autopilot, or recipe database. Advertisements invited owners to invent their own uses as well.152 The Altair required technical skills to assemble and program, and it did not come equipped with software, so it appealed to hobbyists interested in tinkering with the machine and eager to own a computer like those they had read about or experienced at work or university. It was wildly successful for hobbyists, but its steep technical learning curve meant the Altair could not capture a larger market; it sold only a few thousand units in 1975.153 More important than its sales figures, however, the Altair augured explosive growth in the home computer market. It also inspired a young Bill Gates and Paul Allen to write a version of the Dartmouth BASIC language for the computer, making its programming much easier—and, in the process, launching the software behemoth Microsoft.154

Also setting the scene for the early 1980s computer boom was research on user interaction. Douglas Engelbart of the Stanford Research Institute had wowed the audience at the 1968 Fall Joint Computer Conference when he presented his On-Line System, including the use of a mouse, networking, and a windowed screen interface. Some of this research was embraced by Xerox PARC,155 a venture made possible by the Xerox Corporation’s domination of the photocopy market. Although PARC was not able to commercialize much of its technology, it incubated the development of object-oriented programming languages, the desktop metaphor, and the laptop. Steve Jobs and Steve Wozniak were able to poach and re-create some of the technology developed at PARC when they developed their Lisa and Macintosh computers in the early 1980s.156 Much of PARC’s research was driven by the visionary Alan Kay, who developed the influential object-oriented language Smalltalk and pioneered thinking about personal computers. Kay’s so-called KiddiComp, or Dynabook, a personal computer easy enough for kids to use, was considered a far-out idea even at the innovative PARC in the early 1970s.157 In 1972, Kay introduced an internal memo on “A Personal Computer for Children of All Ages” explaining that “it should be read as science fiction.”158 That Kay’s ideas about computing being personal were “science fiction” in the early part of the decade is one indication of how radical the paradigm shift of the 1970s microcomputer revolution would be.159

In 1977, the Apple II was released, and for the first time a ready-made computer was affordable for middle-class families in the United States. Its success was due, in part, to its form and packaging; unlike the Altair, its components (keyboard, CPU, and CRT screen) were preassembled.160 A 1977 Scientific American ad announces that the Apple II is “The home computer that’s ready to work, play, and grow with you,” and its accompanying image suggests that the best place for the Apple II to be used is on the kitchen table—at the center of American family life.161 Later the same year, other affordable personal computers such as the Commodore PET and TRS-80 were released, each targeting a slightly different market.162 Popular magazines such as Compute! focused on these consumer computer models and helped people choose and write software for them.163 By 1980, there were more than 100 different (and incompatible) computer platforms.164 Computers were becoming more popular, although a 1980 interview with Atari’s marketing vice president noted that “Atari’s competitors in the personal computer market chuckle at what they see is the company’s attempt to develop the ‘home’ computer market, in the face of extensive market research that says the home market won’t ‘happen’ for another 4–5 years.”165

The home computer became more appealing with applications like VisiCalc in 1979, which made the computer useful to businesses, to families keeping track of personal budgets, and to people working from home. Because it was at first available only on the Apple II, it helped to sell those computers. VisiCalc was part of a juggernaut of software and applications made available to microcomputer owners in the early 1980s. IBM’s PC, released in 1981 with MS-DOS and joined by Lotus 1-2-3 in 1983, made the microcomputer a serious system for productivity.166 But the burgeoning game industry also made the machine fun.167 Arcade games such as Frogger, Space Invaders, and Pac-Man could be played at home. In 1982, the summer blockbuster Tron, one of the first movies to make extensive use of computer animation for its special effects, portrayed a scrappy hacker fighting with a computer in a stylish and iconic video-game scenario. Time magazine declared the computer “Machine of the Year” for 1982, and sales for home computers skyrocketed. The August 1982 issue of Compute! had a feature article on “The New Wave of Home Computers,” noting that IBM and Apple had adjacent booths at a recent electronics show, where “it was impossible to tell which company was the establishment giant and which was the cocky upstart. The home/personal computer firms … finally have achieved their place in the sun.”168 Speaking to the rapid developments in the PC market, the article observed that “no fiction writer would come up with the developments we have seen in the personal computer industry in the past few months.”169

The Commodore 64 (C64), released in 1982, dominated the market because of its affordability and accessibility. Television and print advertising for the C64 suggested that the computer would benefit families with spreadsheets and word processing, that kids could get an edge in school, and that the whole family would enjoy the games they could play on it (figure 3.3). (This is also the year that a used C64 made its way into my house, along with dozens of floppy disks of pirated games.)


Figure 3.3 This 1985 advertisement portrays the Commodore 64 as a family computer with multiple uses. Advertisement from Archive.org. Reprinted courtesy of Cloanto Corporation, rights holder for Commodore Computer.

As Campbell-Kelly and Aspray explain, in the early 1980s the tipping point with computers was reached “so that people of ordinary skill would be able to use them and want to use them. The graphical user interface made computers much easier to use, while software and services made them worth owning.”170

The educational applications of computers grew dramatically in the early 1980s, as marketing campaigns and educational games made claims about their benefits, and as parents saw their kids use computers in schools. In 1980, computers were in 15% of elementary schools and 50% of secondary schools. By 1985, the numbers were up to 82% and 93%, and there were national, state, and locally mandated computer literacy courses.171 Computers entered schools in the 1980s in part because they were getting relatively inexpensive but also because the escalation of the Cold War and the economic threat from Japan increased funding to prepare a future workforce to work with computers.172 The most popular model in schools was the Apple II because Apple donated many to schools in a brilliant move to corner the educational computer market. Educational typing, math, and spelling games were popular, as was the nominally historical game Oregon Trail. But programming was also par for the course at the time. Inspired in part by Seymour Papert’s educational claims for programming in the popular book Mindstorms (1980), the Logo language was taught in schools (such as mine) in the early 1980s. Logo for the Apple II was focused on graphics and allowed students to program a small triangle called a “turtle” to make multicolored patterns on the screen (see chapter 1).
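
Python's standard turtle module is itself a descendant of Logo's turtle graphics, and a rough modern analogue can give a sense of what those classroom exercises looked like. The sketch below is not the Apple II Logo that students actually typed; the colors, angles, and repetition counts are arbitrary choices for illustration.

# A rough modern analogue of a classroom Logo exercise, using Python's
# standard turtle module (a descendant of Logo's turtle graphics).
import turtle

colors = ["red", "blue", "green", "purple"]
pen = turtle.Turtle()
pen.speed(0)                         # draw as fast as possible

for i in range(36):                  # spin a square into a multicolored pattern
    pen.pencolor(colors[i % len(colors)])
    for _ in range(4):               # draw one square
        pen.forward(100)
        pen.right(90)
    pen.right(10)                    # rotate slightly before the next square

turtle.done()                        # keep the drawing window open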

As the Logo initiatives indicate, when computers moved into homes and schools, so did computation and programming. Computer magazines from the time reflect the home computer’s status as a programmable object as well as a platform for applications. Ads for programming tools filled magazines such as Compute! For instance, a 1982 ad by educational publisher John Wiley & Sons declared, “Because you didn’t buy your Apple to practice your typing,” and offered “Practical manuals that show you how to program your Apple for business, learning and pleasure.”173 In 1980, an Atari spokesman emphasized that Atari computers were meant both for people interested in programming and for those who simply wanted applications: “More and more of the younger generation are learning to program and work with more sophisticated applications.”174

BASIC was often the language the younger generation learned on because it was included with many of the computers that made their way into homes in the early 1980s, including the C64 and PET, the Apple II series, and the Atari 400 and 800.175 Interaction with these machines often required typing in BASIC commands, and therefore users had to have at least a rudimentary knowledge of how code controlled these machines. For those who wanted to move beyond these rudiments, books on BASIC were available beginning in the mid-1970s.176 A culture of printing BASIC code in magazines, plus sharing it at computer fairs and among friends, contributed to the circulation of knowledge about programming home computers. Consequently, BASIC was the first language of many casual programmers at the time, and it became “the lingua franca of home computing” in the 1980s.177 So while software applications such as VisiCalc made computers practical for families and small businesses, BASIC introduced many of them to the language of programming.

After computers and computation entered many people’s everyday lives in the 1980s, computers began to shape who we were. The widespread applicability of programming and the experience of computation as central to everyday life have changed the ways that we perceive ourselves and our world. We can call this emerging shift in perception a computational mentality, after the literate mentality that people experienced in thirteenth-century Britain. What this mentality means for programming and literacy, and the affective implications of the domestication of computers and texts, are explored in the next chapter.

Conclusion: From Symbolic to Practical, Centralized to Distributed

Keeping track of populations, collecting money from them, and defending them all put pressure on the information-management systems of government. In response to these information challenges, we see a deliberate movement toward more systematic bureaucratic organization in eleventh- through thirteenth-century England and in nineteenth- and twentieth-century America. This organizational shift involved the adoption of an inscription system that could support more systematic communication and information management. The government’s embrace of this inscription system—writing or code—helped popularize it with commerce and individuals.

Writing as a way of keeping track—bureaucracy—may have initially taken hold in centralized religion and government because they were faced with the most complex and copious information. In Beniger’s terms, churches and governments hit the “control crisis” first. But centralized institutions are also perhaps the only forces large and powerful enough to command the resources to set this infrastructural transition in motion. This becomes clear in the more recent history of code. As suggested by the American and British governments’ investment in “blue-sky” research on computation, the high-risk, high-reward nature of the venture could perhaps only be absorbed by very powerful, rich, and stable institutions with already established organizational infrastructure, such as government and (often government-sponsored) university and industry research centers. While writing may have had lower (but still significant) material costs than computing, centralized institutions still supplied the organizational structure to support the transition.

We can see that after some groundwork had been laid and the government began to rely on the technology, it forced commerce to adopt either writing or code through changes in its bureaucratic requirements. The uses of writing or code technology in governance influenced citizens’ adoption of the technology to organize their own affairs and began to establish the technology’s associated literacy as a desirable skill. As Furet and Ozouf write of the spread of French literacy in the seventeenth through nineteenth centuries, “The spread of literacy was born of the market economy.” This “market economy, backed by and relying upon the machinery of the centralized state, expanded the role of writing as a necessary condition of modernization.”178 In other words, these bureaucratic requirements set the stage for the inscription technology to become infrastructural to people’s everyday lives.

At first, these uses of writing were often symbolic rather than functional, as in the case of the Domesday Book. However, the increasingly central role of writing in people’s interactions with the state and commerce underscored the importance of writing—and following that, the importance of literacy, or understanding and responding to such uses of writing. Likewise, computers and programming began as practical solutions to complex governmental information problems, and they remained distant from the general American public through the 1950s. For both writing and programming, then, what began as a technology for information management by centralized institutions such as governments came to structure individual lives and to promise more diverse uses for individuals. At this point, the ability to write in that inscription technology is, in Andrea diSessa’s terms, a “material intelligence.” In the next chapter, we explore the point when a writing technology becomes so ubiquitous that it leads to a “mentality” among those who can use it as well as those who cannot. For writing, a widespread literate mentality prefigured literacy as an essential life skill. What will a computational mentality lead to?

The story I have told here and continue in the next chapter may strike some as too teleological or too adherent to a progress narrative in which bureaucracy employs technology for the benefit of citizens. The information-processing strategies of governments as they intersect with technology have, of course, been extensively critiqued. Because my objective is to trace threads across history to help us “understand the nature and shape of current technologies” as Christina Haas recommends, rather than rehash these critiques I will name a few salient problems with governmental uses of technology to “rationalize” citizens.

In his critique of “computationalism” as a form of Foucauldian “governmentality”—a neoliberal method of decentralizing government control and insidiously embedding it in individuals—David Golumbia offers one caution about our story of government and technology. For Golumbia, “computationalism” is a set of beliefs that the world is fully subject to computation and that this is a good thing; for example, Thomas Friedman’s much-maligned characterization of the world as “flat.”179 Computation is often presented as a way to liberate individuals from heavy bureaucracy (e.g., the “freedom” rhetoric surrounding talk about the Internet180). But Golumbia notes that computation has long been advantageous to the state, especially for war, and therefore it effectively consolidates rather than distributes power.181 Wendy Hui Kyong Chun also fears the coupling of the illusion of “freedom” and the Internet; for her, this computational network may promise freedom but has overwhelming capabilities of surveillance and control.182 Although I note that standardization allows information collection, storage, and transfer to scale up in the way that governments required during the periods I examine, it can also mean a loss of humanity, a mechanization of personal qualities and processes. To return to Beniger’s account of “rationalization” as a response to control crises, the reduction and standardization of information and peoples can delete complex social contexts and implement tacit power structures.

Indeed, for those subject to them, transitions from personal to bureaucratic systems of organization have inevitably caused anxiety about the relationship between humans and technology. Our focal period of medieval England was perhaps the first of these moments when government processes previously performed by people were automated in writing, but there were subsequent ones. In post-Revolutionary France, according to Ben Kafka, the government instantiated a complex bureaucracy to depersonalize and make transparent its processes and thereby achieve greater liberté, égalité, fraternité. But citizens could become so tangled up in this process that Kafka argues any sane person would be aggrieved were they able to see the system as a whole.183 Jon Agar argues that as nineteenth-century British government moved from personal dealings and positions based on social status to protocols and merit-based appointments, “changes were all marked by moves from the personal to the impersonal, from practices contingent on the individual to the systemic. Trust in the gentleman was being transferred, partially, to trust in the system,” with corresponding unsettling feelings of mechanization.184 Agar helps us make the connection between written bureaucracy and computation. The “government machine” of bureaucracy reached its “apotheosis” in the twentieth century with the computer, he argues, as computers mechanized government functions that human civil servants had been expected to perform like machines but never quite could.185 Twentieth- and twenty-first-century science fiction provides a window on anxieties about depersonalization through computation: out-of-control mainframes (2001: A Space Odyssey; Colossus); digitization of humans in dystopian networks (Snow Crash; The Matrix Trilogy); and pervasive surveillance (Minority Report; Little Brother; Super Sad True Love Story).

These anxieties are well founded: computation is ascendant. We now find ourselves subject to the “Regime of Computation,” in Katherine Hayles’s language,186 in the ways that our personhood is translated into data to be surveilled, collected, and leveraged to control us or market to us. Computers make decisions that were once the province of humans, sometimes with disastrous consequences—flash crashes of the stock market, accidental bombings by drones, NSA flags on travelers with names similar to those of wanted terrorists, and so forth. But, as I described in chapter 2, the affordances of computation make its dominance in our lives perhaps inevitable. Powerlessness in the face of a new information technology ordering the world is one source of these anxieties. Another source of anxiety is ignorance: many people know that code and the people who write it are powerful, but because they don’t understand the medium, they don’t know what to do about the problem or even recognize that it could be a problem. Together, this ignorance and anxiety reflects the transitional nature of computation in our current moment. My goal is not to critique this ascendancy of computation, as Golumbia does, but instead to name a historical precedent for it, perhaps opening the possibility that we could alter its course and make it more humane. The next chapter’s continued examination of historical precedents in writing for our contemporary moment in programming takes us further in that direction.

Notes