4 Literacy for Everyday Life

Our twentieth century western form of literacy is not an invariable norm; it is as culture-bound and shaped by available technologies (computers, most obviously) as medieval manuscript literacy was. Comparing medieval norms with modern ones puts current questions about literacy in perspective. The best way of understanding the modern western literate mentality is to see where it came from.

—Michael Clanchy, From Memory to Written Record1

Our contemporary society can be characterized as a software society and our culture can be justifiably called a software culture—because today software plays a central role in shaping both the material elements and many of the immaterial structures which together make up “culture.”

—Lev Manovich, Software Takes Command2

By the end of the thirteenth century, everyday English citizens were familiar with the ways texts could shape the world: their birth, marriage, and death records as well as their religious and domestic spaces were saturated with writing. This bloom of bureaucracy meant that “literacy fingered its way into every corner of domestic life, providing a formidable instrument of control over family affairs,” according to Jack Goody.3 Documentation in everyday life externalized and formalized personal relationships that were previously dominated by interpersonal interactions. Nonliterates drew on the resources of literates to gain control of their affairs. People began to see their place in the world a bit differently: the world could be a kind of text, subject to interpretation. Regardless of their literacy skills, people recognized that they could be represented in writing.

This was the beginning of a “literate mentality”: a widespread worldview reflecting the power of writing among literates and nonliterates alike. As it became apparent that literacy was advantageous and as education became more readily available, people sought out literacy for themselves and their children. Governments and churches encouraged literacy in their flocks to strengthen collective spiritual and economic health. Literacy rates climbed. More institutions could build on the assumption that people were literate. Then, as institutions such as schools, the church, and the law began to assume literacy for participation, nonliterates became illiterates—defined by lack rather than difference. Illiteracy thus became a marked condition as early as the thirteenth century, although it was not until the nineteenth century that it could be assumed that most people in the West were literate. During the nineteenth century, literacy became part of the infrastructure of writing in government, education, and commerce. William Gilmore calls this the point where “literacy becomes a necessity of life.”4

The history of textual literacy—beginning with the domestication of writing, proceeding through its power to influence worldviews, and then becoming a material basis for literacy—provides a foil for our contemporary relationship with programming and computation, as it did for an earlier shift in chapter 3. That chapter followed the trajectory of computation and writing as they made their way into domestic spaces and lives. This chapter turns from the technology itself to the people who use it. When these technologies enter people’s everyday lives, how do ways of thinking and living in the world change? Medieval historians such as Brian Stock claim that a literate mentality accompanied the everyday use of texts, as communities formed around textual interpretations. Literacy scholar David Olson argues that ways of reading texts in several different eras led to revolutions in thinking, including the Reformation and early modern science, as well as the postmodern turn.5 Elizabeth Eisenstein has made a similar claim for the era of print.6 Deborah Brandt argues that in our current moment, as “writing is eclipsing reading as the literacy of consequence,” people are thinking differently about their reading and writing practices.7 Daniel Punday traces the intertwined metaphors for computing and writing that have shaped literary works since the twentieth century.8 That technologies other than writing shape our thought has also long been recognized; for example, Lewis Mumford’s idea that the clock produced a new concept of time that facilitated the Industrial Revolution or Martin Heidegger’s claim that technology “enframes” our world as resources to be processed.9 Significant public commentary on technology is devoted to the ways that human relationships and thinking have shifted with modern technology; for instance, the work of Nicholas Carr and Sherry Turkle.10 My claim that human society is undergoing a new “mentality” because of its extensive engagement with the technology of computation is preceded by many others.

First, a word about what it might mean to describe a “mentality.” The concept of “mentalities” comes from midcentury French thinkers such as Robert Mandrou, Georges Duby, Lucien Febvre, Marc Bloch, and, later, Jacques Le Goff, who together extended Durkheim’s “modes of thought” to describe a widely held collection of beliefs characteristic of an era.11 A history of mentalities gives flesh to bare economic and rational histories of peoples by describing the habits and patterns of thought that cut across individuals. As a method, the tracing of mentalities is imprecise; it deals with the “je ne sais quoi of history,” as Le Goff puts it.12 Although the mentalities method has been critiqued for offering a homogeneous style of thinking for a given era, Roger Chartier points out that historians tracing mentalities never claimed that thinking was the same across classes or that mentalities offered a perfect window into the thought patterns of a civilization.13 Mentalities are shared across a population, yet they interplay with the social realities of class and culture, Le Goff insists, and so we must understand them as reflecting social realities imperfectly.14

Because imprecision could make this method an “alibi for epistemological laziness,”15 historians must pay attention to the material sources from which they construct their history of mentality, Le Goff says.16 But historians must also attend to the social structures that support mentalities: “the history of cultural systems, systems of beliefs, values and intellectual equipment within which they were given shape, lived and evolved.”17 For Lucien Febvre, there were three critical assumptions for a history of mentalities: “that methods of reasoning and forms of thinking are not universal; that they depend above all on the material instruments and conceptual categories associated with a particular era; and that the development of mental tools is not marked by any continuous or necessary progress.”18 In the work of these historians, we can see an emphasis on the materials from which mentalities were constructed.

Here, as I frame my previous chapters’ technological histories, linguistic parallels, and social contexts with this concept of mentalities, I am mindful of the overbroad claims possible with this frame. Indeed, the broad consequences of literacy for individual and societal mentalities claimed by writing scholars such as Walter Ong, Eric Havelock, Jack Goody, and Ian Watt have been heavily critiqued, and justifiably so.19 No empirical evidence has been found to corroborate the huge claims of those earlier accounts proposing a “great divide” between oral and literate societies. It is difficult to separate schooling from literacy, and literacy by itself seems to have only isolated cognitive effects.20 Writing may facilitate a kind of objectivity or a certain kind of thinking, but it does not cause or guarantee it. What I describe as the “literate mentality” (and below, the “computational mentality”) is not an empirical phenomenon; it is, following Febvre and Le Goff, located broadly in culture rather than in individual literates.

As Chartier suggests, historians of mentalities often saw language and technology as platforms upon which beliefs were mounted.21 Andrea diSessa’s concept of “material intelligences,” which we explored in chapter 2, helps to describe the thinking that writing as a technology facilitates. David Olson elaborates:

If literacy is thought of as simply the basic skill of recognizing emblems or decoding letters to sound or words to meanings, the implications of literacy, while important, are bound to be limited. But if we regard literacy in the classical sense, as the ability to understand and use the intellectual resources provided by some three thousand years of diverse literate traditions, the implications of learning to exploit these resources may be enormous.22

In extending the literate mentality to the computational one in this chapter, I take cues from Olson’s later work, as well as from Michael Clanchy, who deliberately avoids the question of whether literacy restructures thought at the level of individual psychology or cognition. Clanchy uses the term “the literate mentality” “simplistically to describe the cluster of attitudes which literates in medieval England shared, and expressed in all sorts of ways in surviving records.”23 More recently, Brandt explicitly draws on Durkheim and the French tradition of mentalities in order to describe how a shifting emphasis in literacy from reading to writing affects the conditions of everyday writers in America.24 Similarly, I see the literate mentality as a kind of zeitgeist, a shared rather than individual pattern of thought.

We see this literate mentality emerge when the technology of writing begins to affect personal lives. Government uses of writing—or later, computation—redefine what it means to be a citizen. People using writing or computation in domestic and personal contexts begin to reflect its influences in theories of the world and self. These resulting theories are cultural phenomena and are shared independently of literacy status. As a mentality begins to emerge, and as texts or code become central to everyday activities, communities form around uses of the technology. Members can draw on what Norma González and Luis Moll have called a community’s “funds of knowledge,” pooling their collective human resources to navigate the spaces governed by the technology.25 But, in the case of writing, community funds no longer suffice when everyday life becomes saturated with the technology. Not only communities but also their members become marked by their uses of the technology: individuals can be labeled “illiterate.” The marking of individual reading and writing skills as present or absent signaled an eventual transition to mass literacy, and a point where reading and writing became a “necessity of life.” Now that some will admit to or are marked as being “computationally illiterate,” might we be headed into an era where computational skills could be a “necessity of life”? A point where computational literacy augments what we now think of as literacy more broadly?

Because a literate mentality or computational mentality is not an all-or-nothing state, the several examples of historical moments that I offer below can illustrate different degrees and kinds of literacy or computation operating in society. We look at the influence of writing on people in thirteenth-century England, eighteenth-century France, and the long nineteenth century, when mass education and religious movements drove mass literacy across the West. I argue that a computational mentality began to emerge in America in the 1950s, at the point when the American public became aware of computers. It grew along with the influence of computers in public and private life. In the 1980s, when computers entered many middle-class American homes, and again in the 1990s, when they allowed people to communicate on the World Wide Web, we see significant jumps in their influence on everyday lives. Now that computers are embedded in handheld devices that travel with us everywhere, computation has quite literally become personal. Since the 1950s, philosophies of mind, social spheres, and fundamental concepts of self appear to increasingly reflect a feeling that we are “computed.” With the increasing ubiquity of computers, we are developing a computational mentality that parallels the literate mentality that emerged with the pervasiveness of texts. These shifts in culture and identity accompanying the penetration of writing and computation in everyday lives coincide with an integrated and personal relationship to the technology.

The impingements of computation and writing on daily lives lay the groundwork for people to understand the power of the technology and for them to desire and seek out literacy—prerequisites to a successful mass literacy movement.26 A literate mentality signaled a society’s dependence on text, and it also presaged a wider role for literacy in everyday life. What does a computational mentality suggest? Although few are now computationally literate—that is, able to read and write code—most people in the West are cognizant of the ways computation shapes our everyday life. Certain populations are already feeling the pressure of computational literacy on their professions and everyday lives; unless they learn to write code, they feel computationally illiterate. In light of a shift to a literate mentality, this chapter highlights the anxieties and ambitions associated with this new computational mentality, which, I argue, signals a move toward computation becoming the material basis for a new literacy. In the final sections of this chapter, I outline how an emerging computational mentality might presage a greater need for individuals to be able to read and write code, the language of computation.

Surrounded by Writing

How did it feel to be a subject of Edward I at the end of the thirteenth century in England? Experiences varied greatly for men, women, Jews, Christians, knights, serfs, craftsmen, and clergy, so it is impossible to describe the feelings of an “average” medieval subject. But chances are good that no matter who you were in late thirteenth century England, you were surrounded by writing. If you had been summoned to court, it would have been in writing. While there, you needed an attorney to prepare your written statement, and a written transcript would have documented the hearing.27 If you had bought or sold property, you would have used a charter to mark the transfer. If you couldn’t sign the document, you would have used a seal instead. If you were a serf and therefore unlikely to buy or sell property, your rights and obligations to your master might have been documented in writing.28 If you were an estate manager, you probably used writing to keep track of your affairs.29 If you were part of the king’s administration, you would have been able to access royal records because they were written and stored, and you might have been aware that their value for posterity and proof depended on their accessibility.30 If you were a monk, you would have had access to more than one book at a time, such that you could juxtapose and compare them in addition to meditating on them.31 If you were a nun, you might have been literate, as you would have lived like a (male) monk. But since women’s lives were circumscribed by the home and poorly documented, we don’t know much about medieval women’s literacy generally.32 However, if you were a woman of a higher class, you would have been immersed in writing, and you might have been able to read in Latin, French, and English. You even may have owned a precious Book of Hours, which was both a text and an aesthetic, devotional object, and demanded an extraverbal, visual and meditative kind of reading. But you were less likely to have been able to read alphabetic text, and much less likely to write, than your brother or husband.33 Regardless of your specific place in society, your experiences of writing probably would have been strange to your grandparents, who would have conducted more of their business face to face, on personal terms. This section explores what it might have meant to be surrounded by writing at this time. In a literate era, written documentation can codify lives, defining relationships, movement, and even the self-presentation of individual bodies.

Participation in Collective Literacy

As described in the previous chapter, writing began to replace human memory and social mechanisms of community and trust in the eleventh through thirteenth centuries. Because bureaucratic and commercial uses of writing circumscribed everyone, everyday English citizens were forced to acknowledge writing in their lives. Medieval historian Brian Stock claims that the twelfth century is the point when writing became so integrated into daily life that people began feeling compelled to engage with the technology.34 Through the proliferation of documents in archives, the spread of literate skills, and the general penetration of writing into ordinary lives, people began to absorb the functions and importance of documents into their modes of thinking and interacting. Clanchy claims that by the end of Edward I’s reign in 1307, “writing [was] familiar throughout the countryside. … This is not to say that everyone could read and write by 1307, but that by that time literate modes were familiar even to serfs, who used charters for conveying property to each other and whose rights and obligations were beginning to be regularly recorded in manorial rolls. Those who used writing participated in literacy, even if they had not mastered the skills of a clerk.”35

It is critical to note that participation in literacy is different from individual literacy or mass literacy. It signals a point when people are surrounded by and affected by writing. Because people knew their lives were shaped by texts, medieval historians generally consider English society to be literate at this time, despite low individual literacy rates. Franz Bäuml writes, “At all levels of society, the majority of the population of Europe between the fourth and fifteenth centuries was, in some sense, illiterate. Yet medieval civilization was a literate civilization; the knowledge indispensable to the functioning of medieval society was transmitted in writing: the Bible and its exegesis, statutory laws, and documents of all kinds.”36 Stock describes “textual communities” that pooled their literacy resources and navigated this literate civilization.37 Nicholas Orme claims that medieval English society was “collectively literate” at the end of the thirteenth century: “Everyone knew someone who could read, and everyone’s life depended to some extent on reading and writing.”38 A new category of person emerged: the “quasi-literate,” a person who couldn’t read or write but who had access to someone who could.39 Because of these circulation patterns of literacy, it is difficult to determine who had ready access to text through their personal connections, that is, who was effectively literate.40 For example, Bäuml reminds us that the signature, often used as a historical measure of literacy, is a socially conditioned artifact. Even literates might have had professional scribes sign documents on their behalf, just as secretaries now sometimes sign documents on behalf of their bosses.41 And because literacy in medieval England was complicated by the circulation of three languages—English, French, and Latin—it would be impossible to assess an individual’s literacy without taking into account complex contexts for its multiple uses and forms. Looking at the same phenomenon of collective literacy hundreds of years later, David Vincent notes that we cannot look at individual literacy in the eighteenth and nineteenth centuries to determine access to text. A whole family or social unit could function in a literate society if just one member could read or write.42

The pooling of resources to navigate literacy challenges is a contemporary phenomenon, too, and well documented in ethnographic research. Norma González and Luis Moll show working-class Mexican communities in Arizona drawing on collective “funds of knowledge” to solve problems and support education of their members.43 Shirley Brice Heath describes residents of one Carolina Piedmont community interpreting important school and health documents together.44 Marcia Farr tracks the sharing of literacy and language skills among an extended family network of Chicanos in Chicago, showing that they can together respond to a number of documentary situations that might have challenged any one member alone.45 Kate Vieira demonstrates a similar collective coping strategy for written documents among Portuguese-speaking immigrants in Massachusetts.46 The fact that communities across time and space can draw on community resources to surmount the obstacles that their documentary societies throw at them means that we must consider literacy a social rather than individual phenomenon. What it means to be literate at any given time will always be shaped by dominant and niche uses of literacy, languages, genres, and community resources. Below, we will see how this collective literacy functions for current computational challenges as well.

Bureaucracy and Anxiety: Substituting Personal Relations with Writing

As described in chapter 3, the uses of writing for governance accelerated greatly at the end of the thirteenth century in England. But we can trace this phenomenon to other eras and places, including eighteenth-century France and nineteenth-century England. In all of these moments, attempts to increase the scope and efficiency of governance led to bureaucratic expansion and greater dependence on writing. For people during these times, however, textual changes were not limited to government. Texts began to substitute for human action, as in the case of written deeds or wills replacing corporal memory tactics. Through this substitution, texts became a kind of truth in themselves, rather than reflecting or recording some truth outside of them. For example, we now perceive written surveys of land as the authoritative description of property; if there are conflicts between neighbors about boundaries, the written legal description prevails. But this was not always the case. Stock explains this transition: “People began to think of facts not as recorded in texts, but as embodied in texts, a change in mentality of major importance in the rise of methods for classifying, encoding and retrieving written information.”47

Similar to the Norman Conquest, the French Revolution was an impetus for a different kind of government. Ben Kafka describes the state’s shift from personal dealings to written ones. Under the ancien régime, civil service positions were obtained through personal connections; you got to where you were because of whom you knew.48 The Revolution’s violent distaste for nepotism abolished that system and looked to writing to achieve greater liberté, égalité, and fraternité. The new system presumed that writing could render the state more efficient as well as more democratic by making its operations more transparent. For example, Kafka cites a 1793 order containing provisions for efficiency, stipulating “All relations between all public functionaries can no longer take place except in writing.” Paperwork could facilitate surveillance. Surveillance of government would abolish corruption, and surveillance of citizens would ensure efficiency and fairness in governance. At least, these were the ostensible objectives of this emphasis on documentation. As Kafka points out, it seems impossible that all relations among public servants could have occurred in writing.49

A similar automation of governance through writing happened at the height of the British Empire. Inspired by the systematic management they saw in the American government, British government “mechanizers” sought to streamline administration and collect more statistical information about British citizens.50 These reformers wanted to divide labor and make the government into an efficient and transparent machine. Charles Babbage successfully used this management philosophy in his 1852 bid for funding his Difference and Analytical Engines: “There is no reason why mental, as well as bodily Labour, should not be economized by the aid of machinery.”51 Although his engines were important precursors to the computer, they did not help systematize governance; instead, what answered the call for systemization was a steady increase in civil service personnel as well as documentation and standardization of their duties. Reformers imposed what Jon Agar calls “a Domesday compilation of landholdings in which land was measured and rights were visibly assigned in written documents.”52 The goal was to make things simpler and systematic.53 These depersonalizing influences were particularly pronounced in British colonial India, where utilitarian philosophers such as Jeremy Bentham and James Mill wanted to replace the local tradition of personal rule with automated administration, “in which what governed was the symbolic abstraction of writing and the surveillance of facts.”54

These transitions from oral to written communication allowed government to scale up and avoid some aspects of nepotism and class preference. Governments could record, process, and store a greater quantity of information about their citizens or enemies. But the ideals of increased documentation failed to match reality. The eleventh-century Domesday Book, nineteenth-century British surveys of landholdings, and Revolutionary France’s attempts at comprehensive and authoritative documentation all had greater aspirations than applications. This was, in part, because the volume of information collected in these efforts was immense and impossible to fully process.55 Looking back at the Revolution, nineteenth-century French encyclopedist Pierre Larousse wrote about the problem of drowning in paperwork: thousands of clerks were generating and signing documents like automatons, without reading them, without their content passing through their minds at all.56 Ben Kafka offers an interesting account of an eighteenth-century civil servant on trial who claimed that he signed but did not read a document because of “the physical impossibility of doing otherwise”: the volume of documents that came across his desk was too great to read.57 The hope that governments had for written documentation, however unachievable, suggests the power of writing and its association with knowledge during these eras.58

The substitution of paper for personal relations may have denigrated aspects of humanity even as it failed to achieve complete transparency or surveillance. The danger bureaucracy poses to humanity is, of course, a common theme taken up by writers from Franz Kafka to Max Weber. In the eras we observe, the idea that writing could stand in for a person was jarring. Writing can be forged, can disrupt established power structures, and can appear to assume a kind of autonomous power. The shifting boundary between the duties of humans and their technologies can even change perceptions of humanity itself. Such ruptures in the social fabric reveal a society growing increasingly dependent on writing.

Forgery can be a major social problem for a society learning to depend on writing. Writing can claim untrue things long after they have been disproved, as Socrates observed in Plato’s Phaedrus. This seems especially true in the twelfth century, when writing was becoming authoritative yet was also subject to manipulation; at that time, writing “implied distrust, if not chicanery, on the part of the writer.”59 For example, as a particularly literate class, twelfth-century monks knew that “a document which stated something untrue or unverifiable would continue to state it—and make it look authentic and proven—as long as that document existed.”60 Monks became notorious forgers; although they often framed these forgeries as translations of God’s will, they established ownership of land and property under dubious terms.61 Contemporary, widely distributed propaganda against the pope’s attempts to assert overlordship over France at the turn of the fourteenth century through a series of papal bulls demonstrates a tension between the fact that writing was thought to be authoritative, and the fact that it could be manipulated: “[The pope] can easily acquire a right for himself over anything whatever: since all he has to do is write, and everything will be his as soon as it is written.” The “knight” in this fictional dialogue claims not to be familiar with letters and big words, but he knows that one cannot issue written decrees where one lacks jurisdiction.62 When truth can be embodied in texts, truth can be massaged through forgery or misrepresentation.63 Forgeries such as those written by monks or property documents required by new Norman laws were also good reasons for medieval people not to trust writing as it came to affect their lives more thoroughly. A literate society such as ours might trust documents because we believe individual human memory to be more fickle than our established social institutions and technologies that verify documents: standardized forms, indelible signatures, laws against forgeries, watermarks, digital passwords, and cryptography. Without these supports, societies shifting to writing wrestled with problems of verity and forgery.

Another anxiety about the shift to writing-centered governance lies in the fact that it can disrupt traditional hierarchies. Writing allows new kinds of people to make decisions. Many medieval nobles were displeased with policy changes regarding written documentation because the new policies subjected them to legalistic seizures that had previously affected only the powerless.64 Ben Kafka notes that clerks in the new French state were able to combine the newly central role of writing with their access to paperwork to achieve “a degree of power out of proportion to their social and political status.”65 Because documentation of qualifications substituted for personal recommendations, this disparity between position and power was heightened. Agar sees a similar anxiety in nineteenth-century Britain. As civil servants were no longer selected because of their breeding and judgment but rather on an ostensibly meritocratic basis, men of lower birth—or even women—could use the power of writing to make decisions. Contemporaries wondered: without proper cultural training, would they make decisions in the “right” way?66

Perhaps most disconcerting about written bureaucracy was the idea that the technology itself might be in charge, fully displacing humans. Ben Kafka traces the origin of the word bureaucracy in eighteenth-century France as a critique of a system using technology to replace human roles in governance: while demo-cracy describes rule by the people, bureau-cracy describes “rule by a piece of office furniture.”67 Projecting forward from nineteenth-century Britain, Agar claims that the computer is “the apotheosis of the civil servant” because it does everything exactly as told, with no room for personal discretion.68 The boundaries between humans and technology are blurred through this substitution, first as writing replaces human memory, relationships, and discretion and again as computers replace humans doing the work of writing.69 Friedrich Kittler identifies mass education as one source of the blurring between the work of humans and texts—education being a budding bureaucracy of its own as well as an important contributor to the steep rise in literacy rates in the nineteenth century. He writes, “Compulsory education engulfed people in paper … Whatever they emitted and received was writing. And because only that exists which can be posted, bodies themselves fell into the regime of the symbolic.”70 As people became more literate and more enmeshed in bureaucracy, texts became more central in their lives as well as to their lives. The interpenetration of the technology of writing and human social structures led people to think of themselves differently.

For good reasons, writing was not immediately accepted as authoritative or as a valid alternative to traditional social interactions, just as the use of computation in place of those interactions is often viewed with skepticism now. There was often a generation gap between when a form of documentation was developed and when it was accepted.71 As David Levy argues, documents and writing could not serve as proxies for human witnesses without complex structures to support them and give them context,72 and those structures took considerable time and effort to build. Support structures must be built not only through material methods such as indexing, verification, and archives, but also in the culture that relies on writing. The mere presence of writing or literacy does not imply an understanding of legal codes or the other potential functions of writing, as Olson observes.73 Only when cultures begin to accept writing as a form of authority and learn to work with it can we think of them as possessing a collective “literate mentality.”

This textual dominance continues, of course. The areas of life that were subjected to written documentation during the Middle Ages are still subject to it now: marriage certificates, birth certificates, citizenship documents, and property deeds. Perhaps these forms of documentation have worn their way into our bureaucratic system so deeply that they are the most resistant to change.74

Shifts in Ways of Reading

A developing literate mentality in the Middle Ages influenced scientific methods and philosophical ideas as well as governance. Text allowed for some distance between speaker and idea; in science, this meant that methods could be scrutinized apart from their actor. Traditional cosmologies became distinct from scientific explanations of the world and nature. In philosophy, the way records could time-shift events highlighted the distinction between what was really happening and what was thought to be happening.75 The organizing structures that arose around texts in the medieval era made possible the “classifying, sifting, and encoding” of reality. Events could be edited and ordered, and differences between codified rules for behavior and actual behavior emerged.76 Stock demonstrates that certain interpretations of medieval heresies arose from these disparities between texts and actuality.77

The acknowledgment that text could embody a truth was a marked change over earlier ways of reading, argues Olson. Before that, when texts were scarcer and literacy rarer, the paradigm of reading was meditation on a text: a person would read one text over and over, allowing multiple meanings to emerge. We might think here of the Books of Hours discussed in chapter 3: many elite women probably could not read but could contemplate the symbolic images and decorative text in their books. The proliferation of meanings that individuals could glean from texts became dangerous to religious authority over scripture, however, and there was a move originating from church factions to pin textual meaning down. Olson points to twelfth-century Hugh of St. Victor as pioneering the concept of literal interpretation, later embodied in Thomas Aquinas’s thirteenth-century Summa Theologica and eventually leading to Martin Luther’s complaint against what “truth” had become for the church.78 Thus, the twelfth century marks a key transition in literacy: the move from writing as reminder of truth to writing as representation of truth. Once the literal truth is thought to be represented in a text, it becomes discoverable with careful reading of that text.79 Olson says this way of “reading the Book of Scripture” for an inherent truth led to “reading the Book of Nature” in early modern science, which sought to peel scientific fact away from subjective interpretation. Robert Boyle’s scientific experiments serve as examples. Just as anyone could, with effort, look through a written text to see its “correct” meaning, Boyle’s witnesses could “read” the events all alike.80

The printing press’s facilitation of standard texts reinforced this notion of literal truth. Because the early print period is well studied, I will provide just a few illustrative and canonical examples here. Clanchy notes that there are many variations in texts of Magna Carta, indicating that exactness was not necessary or even possible in documents at the time. He writes, “Insistence on absolute literal accuracy is a consequence of printing, compounded by photocopying and computing.”81 Jay David Bolter points out that whereas writing practices like calligraphy draw attention to the text itself, print typography can standardize a text and deemphasize its materiality.82 Consequently, the typographic letter appears to lead to a more authoritative text and to an idea of reliable, standardized interactions with texts. Olson notes that the Latinate neologism verbatim was coined in the fifteenth century, coincident with the invention of printing.83 These standardized interactions with printed texts contributed directly to the Enlightenment, according to Eisenstein. The verbatim interpretation of texts is reflected in shifts in contract law as well, as courts moved from enforcing the perceived intent of a contract to its literal text in the nineteenth century.84

In his broad historical survey of shifts in ways of reading, Olson claims that cultural understandings of our world, science, and psychology are all “by-products of our ways of interpreting and creating written texts, of living in a world on paper.”85 Historical details of how different forms of text and textual practices shape different modes of thinking are important to acknowledge. Written texts provide fewer cues than oral delivery about how to “take” them; for example, ironically, sincerely, or jokingly. Readers must interpret texts, supplying the illocutionary force of the text themselves. Ways of reading have changed with the availability and style of texts, Olson argues, leading to broader changes about how we interpret the world. He argues that this perspective is different from the so-called great-divide theories of Levy-Bruhl, who tried to characterize a “primitive mind,” or of McLuhan, who tried to distinguish between “oral man,” “literate man,” and “electronic man.” The mentality that Olson exposes lies not in individual cognition, but in a cultural mindset and shared ways of thinking about the self and the world. Olson explains: humans could always reason, but literacy and its cultural contexts allow them to reason about reason.86 Texts as representations, whether of truth (modernism) or of other representations (postmodernism), allow for abstract reasoning. The result is different “theor[ies] of literacy and mind” corresponding with different ways of reading.87

The Limits of Collective Literacy: Literacy as a “Necessity of Life”

Prodded by government policies and aided by the domestication of books and the technology of the printing press, reading and writing increasingly became generalized skills from the Middle Ages through the Renaissance in the West. But it was not until the eighteenth century that we saw something like mass literacy, in particular among women and people of the lower classes. Beginning around the late eighteenth century, literacy began to be so widely needed in everyday life that people increasingly sought it out, and consequently it began to be widely possessed. As people became literate across regions, sexes, and classes, literacy became even more necessary. As Lawrence Cremin observed of nineteenth- and twentieth-century America, “In an expanding literacy environment, literacy tends to create a demand for more literacy.”88 In other words, literacy amplifies itself. William Gilmore, who explored the transition to mass literacy in eighteenth- and nineteenth-century Vermont, called this the point where reading became a “necessity of life” because reading was embedded in the day-to-day activities of most people.89 Reading’s importance generally preceded that of writing because reading meant one could understand and participate, especially in religion. When the ability to participate in this world of letters—through reading and perhaps even through writing—becomes a “necessity of life,” we can see the beginnings of what we can consider literacy. Reading and, later, writing distinguish themselves from other important skills because of their widespread, powerful, and now infrastructural nature.

As forms of reading and writing become more useful because the scope of their application increases, people begin to perceive them as essential personal skills, not simply something for which one can lean on family or hire out. Here we can see the rhetorical layer of literacy forming, as discussed in chapter 1. Deborah Brandt’s ethnographic studies of literacy as it is lived in individual lives highlight this rhetorical shift. As the social and material systems that give literacy its value change, so does the value placed on various kinds of literacies.90 The skills that people have are no longer thought to be sufficient. Concepts of what literacy is morph in concert with actual and perceived shifts in demands on people’s skills.

When literacy is widely thought to be a necessity of life, it becomes essential for people to know certain forms of reading and writing to carry out their daily lives as citizens and workers or to maintain their social positions. In colonial New England, colonists wishing to maintain a higher social status began to need literacy in the eighteenth century, argues Kenneth Lockridge.91 Information management and commercial transactions increasingly demanded literacy. As in medieval England, business once conducted face to face, such as land purchases, became documented in writing. Few men were literate in New England in the seventeenth century, but there also wasn’t much need for literacy—the gap between literacy demand and literacy supply was narrow. But after the American Revolution, when adult male literacy in New England was almost universal, the gap was widened by voting requirements, commerce, and other demands of the new citizenry.92 As literacy became more common, it became more necessary—and vice versa.

The necessity of literacy for civic participation—one of the main justifications for national literacy campaigns (see chapter 1)—began to manifest itself in voting procedures, legal proceedings, and interpretations of contract law in the nineteenth century. As other voting requirements were gradually stripped away—location of birth, property ownership, race, and later sex—literacy and education began to stand in for those qualities in defining what it meant to be an American citizen.93 Until the Voting Rights Act of 1965, literacy tests were used to disenfranchise many African Americans, especially in the South. These tests were often administered prejudicially and so were not tests of reading per se. However, they reflect the way that the civic value of literacy established in the nineteenth century could provide cover for racial disenfranchisement.

Assuming they made it past any literacy tests in their state, illiterate voters encountered another challenge in the secret ballot. Edward Stevens quotes from a court’s interpretation of a late nineteenth-century Pennsylvania statute that the secret ballot “was well calculated to promote the cause of general education by compelling the masses to learn to read and write as a condition precedent to the exercise of the right to suffrage.” Moreover, this ballot would also “punish the illiterate by compelling them to admit their ignorance in public, by asking aid in preparation of their ballots.”94 A number of late nineteenth-century cases basically said “tough luck” to illiterate voters: because their ballots were the same as everyone else’s, their personal deficits were their own misfortune.95 The secret ballot assumed individual literacy as a default. This literate default was a reflection of higher literacy rates among American voters, but it also marked illiterates and may have driven them to seek literacy.

Illiteracy was a misfortune not only at the ballot box but also in contract law beginning in the nineteenth century. Through the eighteenth century, paternalism in the courts had protected illiterates who had unwittingly agreed to unfair contractual terms. But at the same time that the idea of a verbatim contract became dominant, signing a bad contract became an individual’s responsibility. This shift was connected to rising literacy rates, as courts began to assume that parties could all execute their “freedom of contract,” regardless of their literate status. In the seventeenth century, it was not uncommon for documents to be drawn up with half of the signers using a mark in place of a signature, suggesting that literates and illiterates mixed in business and legal affairs.96 Through cases in the nineteenth century, illiterates were not thought of as infirm or incapable of assenting to contracts; however, their degree of knowledge about the literal terms of a contract was unclear. Courts that once had acted as though illiterates needed to be protected began to assume that it was the illiterate party’s obligation to find someone to read the contract to them. Those who did not were negligent, and negligence is not protected under the law.97 Stevens notes that this made illiteracy an ethical problem for courts, in terms of contract enforcement as well as jury selection. In the later nineteenth century, there were cases that marked illiterate jurors as potentially incompetent.98 Currently in the United States, not being able to fill out the juror form or understand and speak English can disqualify jurors.99

The growing importance of literacy does not mean that people who were illiterate could not survive in a literate society or that literacy was or has ever been fully universal. As Furet and Ozouf lament, “Wherever we look, in every period, social stratification presides over the history of literacy.”100 The demands and supply of literacy varied widely across echelons of wealth and between men and women. Lockridge notes that “literacy was intimately connected with sex, wealth, and occupation, and that to an appreciable degree such forces determined literacy,” at least until the rise of mass schooling.101 Upper-class Bostonian men were all literate by the end of the seventeenth century, whereas women’s literacy rates at that time appear to have plateaued at 45%.102 Not until the mid-eighteenth century did rural men catch up somewhat to urban men.103 Stratifications were much more pronounced in England, Pennsylvania, and Virginia. In France, literacy rates lagged in rural areas, in the southwest, and among women at least until the late nineteenth century.104 In industrial areas, children left school at younger ages—before they were taught to write—and so factories had a depressing effect on literacy.105 Even now, literacy is not evenly distributed in most societies and across the globe, nor is the need for it. Women’s literacy levels lag behind men’s in underdeveloped countries,106 and even in a country with near-universal literacy like the United States, race and class disparities mark literacy skills.107 Even when literacy becomes a “necessity of life,” it is more necessary and more accessible for some than for others.

As this section has outlined, beginning in the late eighteenth century and accelerating in the nineteenth century, literacy—especially reading literacy—grew more common throughout Europe and North America because of mass schooling and mass literacy campaigns, the rhetoric of building the nation, increased economic activity, and personal motivation. Increased literacy levels corresponded to an increased importance of texts in people’s daily lives—newspapers that cataloged both local and global events, almanacs that offered advice to farmers, and accounts that kept track of debts. The pervasiveness of texts created a kind of collective literate mentality among people, regardless of individual literacy levels. As texts became a central conduit for culture and knowledge, reading became not just useful, but necessary. This appears to have become the case, at least to varying degrees, across much of the West in the nineteenth century. In the next section, we look at the potential indications and implications of a transition to mass computational literacy in light of this history of the transition to mass literacy.

Surrounded by Computation

Since at least the 1980s, computers and software have been part of most Americans’ daily lives; together, they shape how we work, play, and maintain relationships. Behind the paper tokens of our identities and relationships such as photo IDs and marriage certificates are massive digital databases held by our governments, social networks, and employers. And many of our other daily transactions are now built on systems running computer code—personnel records, bank accounts, e-mail communication.

At the parallel point in the history of writing, the pervasive presence of texts prodded individuals to become literate. And, indeed, computation has now become so ubiquitous that skills in working with it—most obviously in the physical form of computers but also in responding to its bureaucratic and commercial manifestations—can no longer be safely bracketed into specialized professions. Although these skills are still a material intelligence rather than a full-fledged literacy because they are not yet a “necessity of life,” they are increasingly in demand. To navigate many professions and the demands of life in the twenty-first century, we need to have computational skills, or at least know someone who does.

In chapter 3, our historical survey traced computers’ increasing prominence in American professional and personal lives from the 1940s to the 1980s, when computation was tied to physical computers. Above, we explored the eras when writing was so pervasive that people began to have a “literate mentality,” regardless of whether or not they were literate. In this half of the chapter, I argue that just as people developed a literate mentality in an era saturated with texts, today we can see a “computational mentality” in our society so dominated by computers. This computational mentality manifests itself in dominant theories and discussions of mind and self and society as well as in our language and habits. It also suggests that we are rapidly approaching a limit to our collective computational literacy and a horizon for when this literacy might be needed for everyday life.

Computers Enter Public Consciousness

Government-sponsored research in the heady years after World War II heightened computational capacities for war, defense, and other information-processing pressures. This period also brought computers into public view and inspired both optimism and skepticism about where computation could go. Exemplifying the optimism associated with this period is Vannevar Bush’s canonical “As We May Think,” published in the Atlantic in 1945. Bush generated excitement about computers when he claimed that a future device called a “Memex” would be “an enlarged intimate supplement to … memory” and would help to catalog and retrieve the collected knowledge of civilization.109

The Mark I had caught the imagination of the general population in 1944 and was “an icon for the computer age,” according to Campbell-Kelly and Aspray.110 Depictions of computers in the 1950s echo the assertion by Edmund Berkeley in his popular 1949 book that computers weren’t automatons but “Giant Brains, or Machines that Think.” Building on ideas from scientific management and intelligence testing in the 1920s, as well as his background in Boolean algebra and symbolic logic, Berkeley imagined computers would be able to answer complex social questions in systematic and rational ways.111 Then came the UNIVAC I, a commercially available computer designed by the team that developed the ENIAC at the Moore School. The UNIVAC was used by the U.S. Census Bureau in 1950112 and was famously showcased on CBS as a television publicity stunt to predict election results in 1952, which it did—more accurately than the pollsters. Campbell-Kelly et al. note that “the appearance of the UNIVAC on election night was a pivotal moment in computer history.”113 Although people may have heard about computers before, they were unlikely to have seen one before the broadcast.114 New Yorker cartoon editor Robert Mankoff writes, “When Univac correctly predicted the results of the 1952 Presidential election, the public became aware of ‘thinking machines,’ and New Yorker cartoonists began to endow what were then called ‘electronic brains’ with the ability to manage more than numbers.”115 In one cartoon, a computer manages a baseball team (figure 4.1).


Figure 4.1 In this 1955 New Yorker cartoon, a computer coaches a baseball team, showing that computers were thinking as well as calculating machines. Joseph Mirachi, New Yorker, June 18, 1955. Reprinted with permission from Condé Nast.

In another popular depiction of computers, the 1957 movie Desk Set starring Spencer Tracy and Katharine Hepburn, a computer that takes over a television network research department goes haywire and fires everyone. Bernadette Longo notes that the metaphors of both brain and robot obtained for computers in this era. Robots—a concept made publicly prominent through the play R.U.R. in the 1920s—had already captured the idea that automation to improve human life could also supplant it. When combined with the metaphor of the brain, computers became both fearsome and awesome: “We feared these new electronic helpers even as we embraced them,” she writes.116

The U.S. government’s SAGE air defense project represents another facet of this era’s computational fantasies. For example, the Pittsburgh Press presented SAGE in 1956 with the headline “U.S. Unveils Push-Button Defense” and called its central computer “a flashing, clicking monster larger than the average home.”117 The terminals were controlled with light guns pointed at screens, accentuating the system’s futuristic military applications.118 A promotional video produced by IBM’s Military Products Division titled “On Guard!” tells us, over footage of technical equipment and personnel, “You are listening to the heartbeat of the SAGE computer. Every instrument in this room is constantly monitoring, testing, pulse-taking, controlling. For this is the programming and operations center for the SAGE computer, which surrounds it.”119 Over an image of a little girl sleeping and her parents watching over her, the film concludes, “And as long as we’re on guard, as long as we’re ready to look ahead and move ahead, the future of America is secure.” At the height of the Cold War, SAGE must have been reassuring but also alien to the American public.

These were the years of Big Science, both comforting and intimidating in its authority. Science fiction writers such as Arthur C. Clarke and Isaac Asimov probed the power as well as the dangers of computers. The film 2001: A Space Odyssey (1968), based on a Clarke story from 1951, intertwines exploration of the frontier of space with the problems of computers that can make decisions. Less commercially successful, though still revealing, was Colossus: The Forbin Project (1970, based on a 1966 story), which pitches American and Soviet sentient supercomputers against each other with disastrous results.

J. C. R. Licklider’s work with ARPA in the early 1960s augured a new approach to human-computer interaction against the dominant and dominating visions of command and control systems. He posited the computer as an aid to human communication in his visionary essays “Man-Computer Symbiosis” (1960) and “The Computer as a Communication Device” (1968, with Robert Taylor).120 The cartoon above serves as an early example of the ways computers were portrayed as artificial brains that threatened the human monopoly on rational thought. In these accounts, the fundamental qualities of what it means to be human were called into question. In contrast, Licklider carved out an important role for computers while preserving human exceptionalism:

In the anticipated symbiotic partnership, men will set the goals, formulate the hypotheses, determine the criteria, and perform the evaluations. Computing machines will do the routinizable work that must be done to prepare the way for insights and decisions in technical and scientific thinking. Preliminary analyses indicate that the symbiotic partnership will perform intellectual operations much more effectively than man alone can perform them.121

As computers began to pierce the public consciousness of Americans, theories began to emerge in both academic and popular venues about how these “thinking machines” and humans should interact. Anxiously or optimistically, these midcentury visions imagined a necessary relationship between humans and computers in the division of intellectual labor.

These representations of the mainframe era suggest the uneasy but increasingly pervasive relationship between humans and computation. We might think of this stage as parallel to that of late eleventh- and early twelfth-century England, when the Domesday Book and other documentary innovations made writing familiar to many, although intimidating and perhaps even world-ending. The uses of writing and computation were still relatively limited; however, most people would have seen or heard of some of what the technologies were capable of doing. Computers began to be part of the landscape of human thought, and in these examples, we can see the horizon of a computational mentality.

Computational Models of the World

The midcentury ideas that spawned research and development in code and computation led to new ways of thinking about and modeling the world. While these models may have circulated among scientists, technologists, and philosophers, they nonetheless reflect a trajectory toward a more popular computational mentality. For example, in 1943, Erwin Schrödinger proposed that the basis of life was a “genetic code-script,” not unlike Morse code.122 A code-script was his answer to the question of how life was able to seemingly defy the second law of thermodynamics; it could allow for both variation and order. While the genetic code is quite different from what Schrödinger suggested, his code model was key to unlocking some of the mystery of genetics.123

Also reflecting influence from early theories of code and computation, the theory of “cellular automata”—individual computational units that follow programmed rules—has been used to explain phenomena ranging from biological systems to group behavior. Automata follow simple individual rules, but when grouped together they exhibit “emergent” behavior. The simple rules and grouping of cellular automata make them particularly amenable to computation—and, conversely, theories of computation helped to spawn the concept of cellular automata. The theory of cellular automata has multiple links to computational research: it was inspired by Turing’s 1936 model of the “Turing machine,” described more explicitly by John von Neumann in 1951, and extended in the 1980s by Stephen Wolfram.124 Mathematician John Conway proposed his “game of life” in 1970, which models this complex behavior emerging from simple rules. Simulating the game of life is now a common programming exercise.125 Benoit Mandelbrot’s fractal generation relies on similar principles: simple equations and high processing power produce complex mathematical structures. Konrad Zuse—the German engineer who built one of the first programmable computers and designed an early programming language—suggested in his Rechnender Raum (“Calculating Space”) that physical models of the world could be influenced by ideas from computation. He proposed “digital physics” in 1967, based on the idea that the universe itself is a computer.126 Years later, as N. Katherine Hayles details, this idea of a “universal computer” was taken up by Seth Lloyd, Edward Fredkin, and others.127
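To give a concrete sense of how simple rules produce emergent behavior, here is a minimal sketch of Conway’s game of life in Python. It is an illustrative toy on a small wrap-around grid, not any implementation cited above: each cell obeys only a local neighbor-counting rule, yet patterns such as the “glider” appear to move across the grid when the system is allowed to run.

```python
# Minimal sketch of Conway's game of life on a small wrap-around grid.
# Illustrative only; not drawn from any source cited in this chapter.

def step(grid):
    """Apply Conway's rules once: a live cell survives with 2 or 3 live
    neighbors; a dead cell becomes live with exactly 3 live neighbors."""
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            neighbors = sum(
                grid[(r + dr) % rows][(c + dc) % cols]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
            )
            new[r][c] = 1 if neighbors == 3 or (grid[r][c] and neighbors == 2) else 0
    return new

# A "glider": five live cells whose local rules make the shape travel.
grid = [[0] * 8 for _ in range(8)]
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r][c] = 1

for generation in range(4):
    for row in grid:
        print("".join("#" if cell else "." for cell in row))
    print()
    grid = step(grid)
```

Nothing in the rule mentions “movement,” yet running the system shows the glider drifting diagonally: the behavior only appears when the computation is carried out.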

For systems such as the universal computer, fractals, and cellular automata, there is no shortcut, such as a closed-form equation, for determining the end result of a process. Only by letting the system run—by computing it—can we see what emerges. Because this computational model cannot be described ahead of its enactment, it runs counter to Enlightenment ideals of perfectly describable systems, as well as to the presupposition of a “transcendental signified,” which Derrida names as logos, or God.128 It is a stepwise way of viewing and analyzing the world: lots of little independent entities and calculations accrete into complex systems, rather than complexity being designed or described top-down. For this reason, cognitive scientists Francisco Varela, Evan Thompson, and Eleanor Rosch argue that this computational model signals a Kuhnian revolution in the way we think about the world.129 Wolfram calls it “a new kind of science.”

There is an important tension underlying the work of Wolfram and others: is computation a model for the world or does it constitute the world? For Hayles, this tension is more interesting than its potential resolution. It tells us “what it means to be situated at a cultural moment when the question remains undecidable—a moment, that is, when computation as means and metaphor are inextricably entwined as a generative cultural dynamic.”130 She notes that computation deployed as metaphor has real effects in the world, and thereby becomes ontological. For instance, American military strategy evolved from “command and control” to “network-centric warfare” in parallel with similar developments in computation. The results of this strategy were, of course, not simply metaphorical, though they began that way.131 For David Golumbia, this elevation of computation from metaphor to model—what he calls “computationalism”—is dangerous and totalizing.132 If everything from DNA to the shape of space-time is influenced by computation, Golumbia notes, “The power and universal application of computation has made it look to us as if, quite literally, everything might be made of computation.”133 This is a bad thing, Golumbia argues, because it means that we are substituting a model of the world for the world itself. Treating this ontological phenomenon more neutrally, Hayles calls the means-and-metaphor model of the world the “computational universe.”

Regardless of its veracity or moral valence, I would argue that the mere existence of this model of the world reflects a computational mentality at work, at least in scientific sectors of society. Even if metaphors are not operationalized as military strategy or models for DNA as Hayles mentions, they nevertheless reflect a mindset. We “live by” metaphors, as George Lakoff and Mark Johnson argue: “our conceptual system is largely metaphorical [and so] the way we think, what we experience, and what we do every day is very much a matter of metaphor.”134 Because computers have become everyday objects and computation an everyday process, their metaphors have entered our everyday language. Sherry Turkle pins this uptick in computer metaphors to the 1980s, the personal computing era.135 Wendy Hui Kyong Chun observes that, more recently, computer code, software, and hardware have become metaphors for genetics, culture, nature, memory, and cognition.136 Although Chun resists these metaphors, especially the metaphorical conflation of computer storage and human memory, her wide-ranging discussion of the various metaphors to which computers have been subjected suggests how thoroughly they have become embedded in our ways of thinking about our bodies, brains, and behaviors. Computers are now evident in the language and models we use to describe our worlds.

Computational Models for Mind and Self

Immediately following their military applications, computers also became integral to the simulation of human thought, to scientific models for the functions of the human brain. As Turkle explains, research in artificial intelligence (AI) and human psychology converged in interesting ways beginning in the 1950s: the top-down Freudian model of ego, superego, and id was displaced by newer models of distributed cognition, inspired by concepts of computer engineering.137 In linguistics, Noam Chomsky proposed algorithmic approaches to language acquisition. Chomsky’s “universal grammar” operated like a computer chip, programmed by a human’s early linguistic environment. It is no coincidence that Chomsky’s theory was developed alongside a flurry of research on computers and artificial intelligence at MIT.138 Analytic philosophy offered theories of mind influenced by this branch of linguistics and the rise of the computer. The idea was that by discovering certain things about algorithms, we would also understand aspects of human thought.139 According to Hilary Putnam, philosophers began asking, “Are thinking and referring identical with computational states of the brain?” Putnam himself proposed the computer as a model for the mind.140

Consolidating some of these threads of academic inquiry and centered on the computer as a model for the mind, cognitive science emerged.141 Cognitivism—a branch of cognitive science often mistaken for the whole—posits that cognition “is the manipulation of symbols after the fashion of digital computers.” It works from the premise that both humans and computers operate only with representations rather than “reality.”142 Connectionism, another area of cognitive science, specifies that the way human brains cope with the representation of reality is distributed across a network.143 For instance, the theory of “perceptrons” describes the ways that small chunks of thought can be networked across the brain and how a computer might be able to simulate that thought.144 Like Chomsky’s linguistics, connectionism has close ties to the computer: it was developed by Seymour Papert and Marvin Minsky, who worked in artificial intelligence at MIT and used the computer as a model.145 Varela, Thompson, and Rosch point out that in cognitive science, the computer is both a model of mind and a tool of research to learn more about the mind.146 As Hayles suggested with her idea of the “computational universe,” the mind and the computer have served as models and simulations of each other in cognitive science and AI research since the 1950s.
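To illustrate the kind of small, networkable unit connectionist accounts had in mind, here is a minimal sketch of a single perceptron in Python. It is a toy trained on the logical AND function, not a model drawn from the cited research: a weighted sum, a threshold, and a simple rule for nudging the weights after each error.

```python
# Minimal sketch of a single perceptron learning logical AND.
# Purely illustrative; the task and parameters are invented for this example.

def train_perceptron(samples, epochs=10, lr=0.1):
    """samples: list of ((x1, x2), target) pairs, with targets 0 or 1."""
    w1 = w2 = bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            output = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
            error = target - output
            # Adjust each weight slightly in the direction that reduces error.
            w1 += lr * error * x1
            w2 += lr * error * x2
            bias += lr * error
    return w1, w2, bias

and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, bias = train_perceptron(and_samples)
for (x1, x2), _ in and_samples:
    print((x1, x2), 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0)
```

A single unit like this can only draw a straight line through its inputs; the connectionist claim is that many such units, networked together, begin to approximate more interesting behavior.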

Ideas about the computational mind are not isolated to academic inquiry. Varela, Thompson, and Rosch point to Turkle’s best-selling work on how people see themselves reflected in computers to indicate the popularity of computational ideas about human cognition. Turkle reflected in 2005 that her 1984 book “The Second Self documents a moment in history when people from all walks of life (not just computer scientists and artificial intelligence researchers) were first confronted with machines whose behavior and mode of operation invited psychological interpretation and that, at the same time, incited them to think differently about human thought, memory and understanding.”147

The form that computation took in each era influenced the models of the self that were generated. Turkle claims that in contrast to monolithic mainframe computers, the personal computers and windowing operating systems that took hold in the 1980s encouraged people to think of themselves as multiple.148 This concept of a multiple, fragmented self was paralleled in theories of postmodernism and of cognitive science.149 Turkle’s later work traces shifting concepts of self as distributed across networks such as MUD (multi-user dungeon) game platforms in the 1990s and social media sites in the early 2000s.150 Extending her theory, we can perhaps trace the genesis of the “quantified self” movement to mobile computational devices; adherents of the movement aim to collect massive amounts of quotidian data about themselves. With a goal of “optimizing” brain or body performance, self-trackers chart the minutiae of their diet, exercise, sleep, and performance to find patterns. This movement has gone mainstream with commercial computational devices such as the popular Fitbit and Apple Watch, as well as software applications that facilitate this tracking. Many self-trackers report that they conceive of themselves and the way they spend their time quite differently when quantifying and processing it as multiple, discrete data points rather than as ephemeral and subjective sensations.151
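As a trivial illustration of this shift, the sketch below (with invented data, not drawn from any particular tracking app) turns a week of activity into discrete data points that can be averaged and measured against a goal.

```python
# Toy self-tracking sketch with invented numbers: days become data points
# that can be summarized, compared against a goal, and "optimized."

daily_steps = {"Mon": 4200, "Tue": 8100, "Wed": 6050, "Thu": 9900,
               "Fri": 3100, "Sat": 12000, "Sun": 7500}
goal = 7000

average = sum(daily_steps.values()) / len(daily_steps)
below_goal = [day for day, steps in daily_steps.items() if steps < goal]

print(f"Average steps: {average:.0f}")
print(f"Days below the {goal}-step goal: {', '.join(below_goal)}")
```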

The massive online databases of personal information held by Facebook, LiveJournal, Orkut, Twitter, and other social media websites also enable new forms of self-presentation. In the medium of Facebook, for instance, relationships are formalized on the basis of the programmatic structure the site provides. “Friending” is binary—someone is or is not a friend—and means something different from making friends in person. “Facebook friends” can be business acquaintances, celebrities, or others one has never met “IRL” (in real life). Facebook has recently expanded the ways that gender and relationships can be designated, but they are still definitively encoded. A status of “In a relationship” marks a social relationship that may not have been so starkly and publicly formalized offline. Even the newer “It’s complicated” designation for relationships erects borders around the previously amorphous beginnings and endings of romantic entanglements.
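By way of illustration, the following toy sketch shows how a platform might encode friendship and relationship status. The field names and allowed options are invented, not Facebook’s actual data model: the point is that the software can store only values it has anticipated, so offline ambiguity must be resolved into one of a fixed set of categories.

```python
# Toy illustration of encoded relationships; not any real platform's schema.

ALLOWED_STATUSES = {"single", "in a relationship", "it's complicated", "married"}

profile = {"name": "Alex", "friends": set(), "relationship_status": "single"}

def add_friend(profile, other_name):
    # "Friending" is binary: a name is either in the set or it is not.
    profile["friends"].add(other_name)

def set_relationship_status(profile, status):
    # Only the pre-defined, encoded options can be stored.
    if status not in ALLOWED_STATUSES:
        raise ValueError(f"{status!r} is not one of the encoded options")
    profile["relationship_status"] = status

add_friend(profile, "Jordan")
set_relationship_status(profile, "it's complicated")
print(profile)
```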

The ubiquity of smartphones that can rapidly post text, video, and images to globally accessible social media sites such as Facebook means that the identities and relationships people construct there are deeply integrated with “real life.”152 Online social networks travel with us, and our real-life networks are echoed online. Teens largely use these online social networks to augment their social experiences in their local communities, observes danah boyd.153 Nathan Jurgenson argues that, at least for the generation of people used to cataloging their lives online, it is no longer possible to separate their online and offline lives. We learn to see with a “Facebook eye”: “Facebook fixates the present as always a future past. … Social media users have become always aware of the present as something we can post online that will be consumed by others.”154 We choose our words in terms of what is potentially reportable on Facebook or Instagram or Twitter, we report our physical locations and habits through Yelp and Foursquare, and we see the activities of our children and friends as videos or pictures to be viewed on Instagram or Vine. The “shock” video site World Star Hiphop serves as an interesting and disturbing example; videos sometimes feature bystanders chanting “World Star,” anticipating the action’s later appearance on the site. As the Gothamist commented, “The site’s popularity has created a sort of voyeuristic feedback loop, in which disassociated bystanders immediately videotape violent incidents and act as if they're already watching a video on the Internet.”155 Diamond Reynolds’s choice to live-stream the shooting death of her boyfriend Philando Castile on Facebook Live on July 6, 2016, led to immediate and widespread outrage and protest over police brutality. Even in the heat of that tragic moment, Reynolds was savvy enough to know the affordances of live-streaming to “get out the truth.”156

Because the popularity of these social sites changes regularly, these specific examples may already be dated when this book is published. But as long as some of the work of relationships is carried across online social networks, the corporate policies of these sites will affect the status of real relationships and self-presentation: we think of ourselves in their terms, according to their affordances. That corporate entities controlling social media networks have so much influence on the ways we see ourselves should perhaps be a concern. The external, formal, global expression of the self—particularly on social networking sites—leads to a kind of self-editing for presentation in the new medium, just as documentation has done. The people who saw texts move into their lives in the Middle Ages developed a pattern of thinking, habits, and assumptions of a literate mentality. These examples of the shifting ways we see ourselves, our worlds, and our relationships suggest that we have developed a computational mentality as computational devices have become integrated into our professional and personal lives. Although few people know how to program the devices on which code and computation ride into our lives, their manifestations influence us anyhow. In various ways, we have learned to see ourselves as “computed” by the devices and networks on which we depend.

From Computational Mentality to Literacy?

As people now think of themselves as “computed,” they are also beginning to need to know how to compute. Surrounded by all of this computation, many people find themselves needing to access or acquire new computational skills to navigate their personal and professional lives. As with writing, the requirements for these skills vary widely, from what we might think of as basic skills to highly complex compositions and niche knowledge. Because computational skills can often be shared between people and across social groups, we are not yet at the point where every individual must be computationally literate. As we saw earlier, however, this social borrowing of literacy skills to navigate essential demands of life, along with a collective literate mentality based on the pervasiveness of texts in everyday life, presaged a time when textual literacy was demanded of individuals. Code has only recently become central to our lives, but the diversity and number of people who can program have increased dramatically since the birth of computer code. Writing, too, became a more widely held skill as it grew in importance. The ability to program is useful not only to computer specialists but also to a variety of professions. And there are hints of the kinds of institutions that might be built on the assumption that many people know how to program. We are also seeing evidence of the consequences of individuals not knowing how to read or write code, especially in spheres of government and commerce. Has programming become “a necessity for life”? Not yet, perhaps, but the disadvantages of lacking computational literacy are beginning to manifest themselves. I discuss a few of these indications of our potential turn to mass computational literacy below.

The Limits of Collective Computational Literacy

Like skills with reading and writing, computational skills are often loaned from one community or family member to another. The Pew Internet and American Life Project reports that 48% of technology users need help fixing their phones, computers, and other devices when they break down. Eighteen percent of people with computer failures seek help from family and friends.157 In many social or family groups, there are one or two “go-to” people who come to the rescue, who are more skilled than others at problem solving or communication with computers. They serve as tech support for parents or friends whose computers are saddled with viruses or need reorganizing or replacing. Skills in making websites, building databases, and editing images are also loaned across groups.158 While these fix-it sessions may not require programming per se, they do require a more intimate knowledge of file systems, memory organization, software conventions, and available capabilities. Individuals called on to perform these tasks can compound their computational literacy with each new complex task they learn to perform.

Within groups, one or two people’s skills can facilitate access to computational literacy for everyone, just as those who can read might read aloud for their families and thus share their access to text. This loaning of skills within groups is possible because people of varying levels of computational literacy mix easily, especially within families. People with different levels of textual literacy also mixed freely in eras when literacy was stratified by sex and age; however, this is no longer common in areas with near-universal literacy. Now, textual literacy levels stratify by class and race and tend to accumulate in some groups more than others. Mixing can happen more easily across languages; for instance, bilingual children of immigrants often serve as interpreters for their families or communities.159 Although various levels of computational literacy still mix, the skill appears to be collecting in affluent or already advantaged groups—a phenomenon related to what is sometimes called the “digital divide.” Consequently, some groups have greater “funds of knowledge” to draw on than others.

When collective literacy strained under the weight of a society so fully enmeshed in writing, it incentivized individual literacy. Now that we are fully enmeshed in computation, is our collective computational literacy hitting its limits? Perhaps the promises of natural language programming, robust and easy-to-use code libraries, WYSIWYG interfaces, and career specialization will prevent computational literacy from becoming a necessity of life.160 But the precedent set by writing suggests otherwise. The shift in mentality and the pervasiveness of computation suggest that it is changing what it means to be literate. Computation and coding are now skills upon which many other forms of communication and knowledge are built. Computation is increasingly used as a model of the world and self. The next section looks at how a computational mentality might lead to computational literacy.

Programming Is No Longer a Specialized Skill

What does it look like for other skills to be built on top of computational literacy, for it to move from a specialized and niche ability to something relevant to a diversity of fields and applications? With the rise of interest in programming among artists, journalists, and others, programming is no longer a domain solely for computer scientists. Indeed, programming never was a domain exclusively for specialists. As I described in chapter 1, programming demonstrated its potential to be more generally useful from the outset, a potential that John Kemeny and Thomas Kurtz capitalized on at Dartmouth College in the 1960s. What does it mean for computational literacy to become a platform literacy?

Although we do not have figures on programming in the general population, the audience and demand for the learn-to-code materials now available online indicate that a wide variety of people are teaching themselves to code. One reason people are driven to learn programming is that it is becoming more useful across a number of professions. Clay Shirky describes this pressure of programming on employment as “downsourcing,” a twist on outsourcing; downsourcing is “the movement of programming from a job description to a more widely practiced skill.”161 Just as the need to use software has begun to permeate job descriptions, the need to couple programming with domain knowledge in jobs is accelerating. Scientists, economists, statisticians, media producers, or journalists who also know something about programming can streamline or enrich their research and production.

Journalism is one of the professions most acutely affected by this downsourcing of programming. Online journalism—whether on blogs or traditional news organizations’ websites—now involves the integration of visual, audio, and programmatic elements, echoing Deborah Brandt’s finding that workplace literacies have become more complex throughout the twentieth century.162 Alongside traditional writing, interactive graphics and information displays are now ubiquitous on websites such as the New York Times, Vox, and FiveThirtyEight, leading the way toward a code-based approach to conveying the news.

The press, anxiously experiencing as well as reporting on their own state of affairs, has picked up on this shift in information conveyance from alphabetic text to code-based digital media. An article on the Web magazine Gawker describes the “Rise of the Journalist Programmer”: “Your typical professional blogger might juggle tasks requiring functional knowledge of HTML, Photoshop, video recording, video editing, video capture, podcasting, and CSS, all to complete tasks that used to be other people’s problems, if they existed at all. … Coding is the logical next step down this road. … You don’t have to look far to see how programming can grow naturally out of writing.”163 In other words, the tasks that once belonged to other people’s job descriptions have now been “downsourced” into the daily routines of today’s typical journalist.164 The kinds of functional knowledge that Gawker lists differ in their technical requirements; for example, HTML is a way of formatting text for display and not a full programming language. But this knowledge is related to computational literacy: how to issue formal commands to the computer, work with various protocols, and build chained procedures that process data.
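As a rough illustration of what such “chained procedures that process data” might look like in practice, the following sketch (in Python, with invented budget figures rather than real reporting data) reads a small data file, computes percentage changes, and prints rows that could feed an interactive graphic.

```python
# Sketch of a journalist-programmer's data-processing chain: parse a small
# CSV, compute percentage change, and emit rows for a chart.
# The departments and figures are invented for illustration.
import csv
import io

raw = """department,budget_2015,budget_2016
Parks,1200000,1150000
Police,8800000,9100000
Libraries,2400000,2600000
"""

rows = []
for record in csv.DictReader(io.StringIO(raw)):
    before = float(record["budget_2015"])
    after = float(record["budget_2016"])
    rows.append({
        "department": record["department"],
        "pct_change": round(100 * (after - before) / before, 1),
    })

# Sort so the chart leads with the biggest increases.
for row in sorted(rows, key=lambda r: r["pct_change"], reverse=True):
    print(f"{row['department']}: {row['pct_change']:+.1f}%")
```

None of these steps is exotic, but together they are exactly the kind of formal, repeatable procedure that distinguishes code-based news work from simply formatting text for display.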

As programming moves into more domains and professions, it has diversified in appearance. Defining someone who “knows how to program” is as difficult as defining someone who “knows how to write.” As literacy studies has taught us, it is notoriously difficult to measure literacy. Literacy resources can be shared among family groups, allowing many people the benefit of one contributor’s literacy. Historical studies of signatures are only a proxy for a limited concept of functional literacy, and in contemporary studies we must ask: literate in what genre? With what audience? For what purpose? People can be literate at different levels in the different fields of academic writing, journalism, technical writing, short stories, novels, writing for social media, letter writing, and so forth, and the genre conventions of each of these areas mean that literate skills do not translate perfectly across them. In the same way, the proliferating genres of programming complicate our ability to measure who is computationally literate: designing and programming an operating system, computer game, website, or mobile phone app, scripting an Excel spreadsheet, or writing a Firefox plug-in draw on vastly different skills and even call on different kinds of programming languages. And yet, as with writing, we may consider these skills as part of the same constellation of programming abilities. As Jack Goody suggested with his concept of “restricted literacy,” and as the social turn of New Literacy Studies (NLS) reflects, societies and cultures all construct and use literacy in different ways. The diversifying applications for programming, which shape its practices in different ways, are one indication of its more widespread and literacy-like behavior.

Computational Literacy for Civic Contexts

Computational literacy has become useful in civic contexts, again indicating its relevance beyond specialized applications. For instance, Michele Simmons and Jeff Grabill examine civic organizations, catalog how a community group can struggle and succeed with code-based technology to get its messages out, and conclude that programmatic database manipulation can no longer be relegated to technical disciplines. They suggest that computational literacy appears to have a growing role in new forms of civic organization and expression as they assert: “Writing at and through complex computer interfaces is a required literacy for citizenship in the twenty-first century.”165 To extend Simmons and Grabill’s claims about civic literacy, we can look to the “crisis camps” set up in major world cities after the 2010 earthquake in Haiti, where teams of programmers used geographic data available from Google Maps and NASA to write a Craigslist-style database that would match donations with needs and help locate missing persons.166 Along these lines, the organization Code for America (launched in 2009) uses the Teach for America model to embed programmers within local city governments to help streamline some of their specific bureaucratic processes. Adopt-a-Hydrant, one of the apps designed by Code for America fellows, matches up fire hydrants in cities like Chicago with local volunteers who agree to keep them clear of snow in case of emergency.167 Code for America “brigades” have sprouted in many major cities, including my own—Pittsburgh—where meetings collect people with various interests and skills around the use of city data. Some of these civic activities do not require extensive skills in programming, but all draw on concepts of database construction and simple code-based procedures. In other words, elements of programming support writing that can make a difference in the world.
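The sketch below gives a sense of the “simple code-based procedures” at work in such projects. The data and the matching rule are invented for illustration and are not drawn from the actual Adopt-a-Hydrant code.

```python
# Toy version of a civic matching procedure: pair unadopted hydrants with
# volunteers on the same block. Data and rule are invented for illustration.

hydrants = [
    {"id": "H-101", "block": "Forbes Ave 4500", "adopted_by": None},
    {"id": "H-102", "block": "Forbes Ave 4600", "adopted_by": None},
    {"id": "H-103", "block": "Murray Ave 2300", "adopted_by": None},
]
volunteers = [
    {"name": "Dana", "block": "Forbes Ave 4500"},
    {"name": "Sam", "block": "Murray Ave 2300"},
]

def match(hydrants, volunteers):
    """Assign each unadopted hydrant to the first volunteer on its block."""
    for hydrant in hydrants:
        if hydrant["adopted_by"] is None:
            for volunteer in volunteers:
                if volunteer["block"] == hydrant["block"]:
                    hydrant["adopted_by"] = volunteer["name"]
                    break
    return hydrants

for hydrant in match(hydrants, volunteers):
    print(hydrant["id"], hydrant["block"], "->", hydrant["adopted_by"] or "unclaimed")
```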

These widespread uses for programming in individual and civic applications display some of the promise that it brings to personal lives and governmental structures. With the integration of programming into more aspects of our lives, we can also see some of the hazards of ignorance about code, especially in terms of legislation. In 1993, Bonnie Nardi argued that it is important for end users to know how to program “so that the many decisions a democratic society faces about the use of computers, including difficult issues of privacy, freedom of speech, and civil liberties, can be approached by ordinary citizens from a more knowledgeable standpoint.”168 In moments like the congressional debates on anti-spam laws for e-mail in the mid-1990s169 and the proposed Stop Online Piracy Act (SOPA) of 2012, we saw what happens when U.S. public officials do not have the general knowledge Nardi argued for. In those cases, fundamental misunderstandings of how computer code works obscured the terms of debate and nearly led to crippling or unenforceable laws.

Debates around “net neutrality,” the Apple App Store developer license and data storage, and sharing on social networking sites also foist techno-ethical quandaries onto voters. What does preferential routing of data packets mean for one’s home broadband service? Is it censorship to restrict the programming languages in which people can write for particular platforms? Do I have the right to program a computational device that I own? What rights does Facebook have to my network of friends when I entrust my data to it? Is using a programmatic script to scrape data from websites on a massive scale something people should be allowed to do, and, if not, how would we prevent it? What constitutes online security and illegal hacking? What does it matter if public officials use private or public servers for their e-mail? Twenty-first-century American citizens and lawmakers are forced to consider such sophisticated technological questions daily.

One of the justifications for nineteenth-century mass literacy campaigns was that citizens in a democratic society needed to read in order to consider the questions they were asked to vote on. Should we know enough about the operations of programming to recognize and regulate these practices? We must now, for example, understand and trust the technology of writing in order to accept the ways that quotidian bureaucratic documents govern our lives. As we have seen, however, this trust was not always warranted or freely given by a citizenry when the bureaucratic applications of writing were innovated. The ways that texts began to shape the social and political relations of individuals suggest that we will also need to pay greater attention to the ways that code is cataloging and surveilling our lives. We are now seeing critical gaps in governance and communication when lawmakers and citizens are not knowledgeable enough about programming and computational architecture to understand what is possible or desirable to regulate about it. If citizens and lawmakers respond in the way they did to the pressures of texts in governance, we may see computational literacy added to the complex mix of skills we now think of as literacy.

Conclusion

Just as writing was propelled from a specialized to a generalized skill by initiatives from centralized institutions and its adoption into commerce and personal lives, programming—once highly specialized and limited to big-budget operations—is now central to government, large- and small-scale commerce, education, and personal communications. The penetration of text into everyday lives meant that people participated in literacy regardless of their own literate status. People began to acknowledge the way texts are able to shape lives and actions and redefine what it means to be a human in the world. As we are being surrounded by computation in the same way people were once surrounded by writing, we are developing a computational mentality. We are recognizing the role that computation plays in our lives and adjusting our models of the world and ourselves to reflect that role.

A literate mentality was one result of writing becoming an infrastructural technology in society; an increased pressure on individual literacy was another. As writing became so pervasive and as the ability to read and write was increasingly called upon for everyday activities, these skills could no longer simply be shared within social groups. Collective literacy hit its limits, and now it appears as though collective computational literacy might be doing the same thing. Although code is embedded in the infrastructure of our workplaces, government, and daily communications, programming is not yet a skill required for participation in social, civic, and commercial life in America; it is not yet a “necessity of life.” But diversifying applications for programming, the critical need to understand aspects of computation in order to understand proposed laws and privacy rights, and the individual interest in learning programming that reflects those phenomena all seem to point to an increased role for programming in everyday life—perhaps even to a future of computational literacy.

What might a world with mass computational literacy look like? We might begin to see practices and institutions being built on the assumption that everyone can program. We might see computational solutions to organizational problems that have previously been addressed by bureaucracy. These shifts might decrease the relative value of textual literacy. In big and small ways, mass computational literacy could destabilize the institutions built on textual literacy and the hierarchies established along textual literacy lines (e.g., education). And, as it turns out, we can see glimmers of these effects already. Code-based networks, in various forms, are the most prominent examples of our reliance on computation as infrastructure. But a few of these networks signal something more: they are deeply disruptive of current hierarchies that are based on literacy and bureaucracy. Because it is difficult to regulate code across international boundaries, these networks can even undermine or challenge sovereign states. While most of these networks are reasonably accessible to those without computational literacy, they still order their participants according to what I would call their literate skills, and they rely on significant numbers of participants who are computationally literate.

The World Wide Web is the most obvious example of a code-based network that relies on a widespread distribution of computational literacy and that has disrupted literate hierarchies. Since its inception in the early 1990s, the Web has allowed people to put information online using simple forms of code: HTML, or Hypertext Markup Language. More recent developments in HTML and Web standards, along with interfaces to more complex programming languages such as JavaScript, allow for interactions between users and complex databases and algorithms. Templates allow people to put things on the Web without knowing programming, of course, and people hire Web design firms to build sites just as people would hire scribes to take care of their writing needs when writing’s applications were still restricted. But someone who can build her own site or even customize a WordPress template to make a unique site retains more control and may have an advantage in certain areas over those who outsource these tasks, especially as the economy shifts toward contract and contingent labor, where people must promote their work to participate. The Web has so far fallen short of being the mass code-writing platform that Tim Berners-Lee originally envisioned,170 but it has instead become a mass text-writing platform.171 And as a platform that combines text and code, it has ruffled enough feathers to inspire a raft of handwringers lamenting the ways it has decreased textual literacy skills and upset hierarchies that are based on them.172

Networks built on top of the Web and Internet have been destabilizing in political ways, too: consider the roles of Facebook and Twitter in the Moldovan and Iranian elections of 2009 and the Arab Spring beginning in Tunisia in 2010. These networks helped people organize both physical and online protests across national borders and largely without high-profile leaders. And although states tried to block Internet traffic, they were unable to fully shut down these lines of communication. The U.S. State Department even intervened to ask Twitter to delay network maintenance so that Iranian protestors could keep tweeting.173 While it was possible for individuals to post to Facebook and Twitter without computational literacy, it appears to have taken a significant number of people on the ground setting up alternative networks to route around the attempted state controls. The legacies of these protests have been mixed, but several succeeded in their immediate objectives of unseating established governments, and they certainly succeeded in getting publicity out on global networks. More recently, Internet conferencing, live-streaming, and Twitter helped to disseminate word of government disruption during the short-lived July 2016 coup attempt in Turkey and the June 2016 sit-in over gun control in the U.S. House of Representatives.174

The cryptocurrency Bitcoin presents another code-network-based disruption to state sovereignty and to the hierarchies of the international monetary system. Bitcoins are a virtual, digital currency: units of value that circulate through networks in transactions verified computationally, peer-to-peer, across many Bitcoin users. A shared public ledger of transactions, maintained across the network rather than by any central authority, is designed to prevent fraud, while the lack of international regulation and the relative anonymity of transactions facilitate almost any kind of purchase (see, for instance, the Tor-based Silk Road market that ran from 2011 to 2013). Its legality in certain countries is tenuous, but its mathematical and computational basis means that it is resistant to regulation. Governments can curb official exchanges with their own currencies, but to effectively ban Bitcoin one would need to ban computational networks altogether. In this way, Bitcoin undermines a major function of centralized governments and the hierarchy of international banks.175 Bitcoin gained ground on the heels of the 2008 financial collapse and widespread outrage about the way investment banks deal with the money supply, and it is hard not to see it as a code-based critique of the international banking system and a potentially disruptive economic—and thus political—force. Because Bitcoin relies on trust in computational systems as well as on speculation and market tools, it favors computational literates and even implies a kind of computational mentality. It is not surprising that Bitcoin has a strong following among programmers.
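To show the core idea in miniature, here is a toy hash chain in Python. It is only a conceptual sketch of how a shared ledger links records together so that tampering is detectable; Bitcoin’s actual protocol adds proof of work, distributed consensus, and much else.

```python
# Toy hash chain: each block records a hash of the previous block, so any
# alteration of past transactions breaks the chain. Conceptual sketch only;
# this is not Bitcoin's protocol.
import hashlib
import json

def make_block(transactions, previous_hash):
    body = json.dumps({"transactions": transactions, "prev": previous_hash},
                      sort_keys=True)
    return {"transactions": transactions, "prev": previous_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify(chain):
    """Recompute each block's hash and check the links between blocks."""
    for i, block in enumerate(chain):
        body = json.dumps({"transactions": block["transactions"],
                           "prev": block["prev"]}, sort_keys=True)
        if hashlib.sha256(body.encode()).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block(["alice pays bob 1"], previous_hash="0" * 64)]
chain.append(make_block(["bob pays carol 1"], previous_hash=chain[-1]["hash"]))
print(verify(chain))                                 # True
chain[0]["transactions"] = ["alice pays bob 100"]    # tamper with history
print(verify(chain))                                 # False
```

Because every participant can rerun this kind of verification, trust shifts from a central institution to the computation itself, which is precisely why the system favors those who understand it.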

As the ways that programming can be used are diversifying, and as more people appear to be learning to program for their jobs or personal interests, we can begin to see some initiatives and institutions being built on the platform of more widespread computational literacy. While historical findings indicate that literacy does not, independent of other factors, empower people or lift them out of lower incomes or social classes, illiteracy can be an impediment in a world where text and literacy are infrastructural to everyday life. The illiterate person is “less the maker of his destiny than the literate person,” as Stevens observed about colonial New England.176 Now, it seems that people who are not computationally literate must, in a growing number of cases, rely on others to help them navigate their lives. As more communication, social organization, government functions, and commerce are conducted through code, we are seeing an increased value placed on the skills to use and compose software. We cannot collapse the various affordances, technologies, and histories of writing into a perfect parallel with programming. However, the historical patterns of literacy that I’ve outlined in the past two chapters gesture toward answers to these essential questions about code’s critical role in our daily lives.

Notes