Introduction

This book is inspired by my experience of more than a decade of designing software for telecommunications for a number of companies across two continents. I was fortunate enough to have witnessed, and made a small contribution to, the birth of the second generation of mobile telephony, or GSM (a geological stratum of current 4G/IMT-Advanced networks).[1] In the early 1990s, I wrote the 2G SS7 TCAP (Transaction Capabilities Application Part) protocol for a European telephone exchange maker and enjoyed a protracted struggle with the C language, UNIX, a few types of assembly language, and a range of European and worldwide standards and recommendations. I also experienced the expansion of digital mobile telephony into Russia and China in the early to mid-1990s and developed software for SMS (Short Message Service) at a time when nobody used the term ‘texting’ and when adding text messaging to mobile communications was considered by many (including myself) an unpromising idea.[2]

However, over time I started questioning my own engagement with technology. Perhaps a mix of my background in the humanities, my general wariness of the corporate environment, and the political commitment to ‘thinking differently’ that came from my involvement with the Italian Left throughout the 1990s made me quite conscious of the limits of the merely technical understanding of technology. Thus, this book also stems from my desire to ask different questions about technology from those posed by the technical environment in which I had an opportunity to work. In 2004, when I first began investigating the nature of software from a nontechnical point of view, the context of media and cultural studies presented the most interesting academic framework for such an enquiry—although I have also ended up questioning media and cultural studies’ approach to technology in the process.

The principal aim of Software Theory is to propose a theoretically informed analysis of software which in turn can shed light on the group of media and technologies commonly referred to as ‘new’ or ‘digital’. Taking into account the complexity and shifting meanings of the term ‘new technologies’, I understand these technologies as sharing one common characteristic: they are all based on software. Software Theory takes both ‘software’ and ‘theory’ seriously by engaging in a rigorous examination and a deep questioning of the received concepts of ‘technology’, ‘society’, ‘culture’, ‘politics’, and even the ‘human’ that media and cultural studies rely upon. Although in this book ‘theory’ functions as a synonym for a deconstructive tradition of thought that draws mainly (but not exclusively) on Jacques Derrida’s work, it also indicates a broader engagement with Continental philosophy on the question of technology. In fact, I want to argue that media and cultural studies can benefit greatly from a productive dialogue with philosophy on the subject of technology. In a way, Software Theory uses ‘theory’ to shed light on software, while at the same time using software to shed light on ‘theory’, and ultimately on the relationship between technology and knowledge (including academic knowledge). I engage in a close and sustained examination of both technical literature and computer code in order to show how the conceptual system underlying software works and how software has become what it is through a process of constant and unstable self-redefinition. Through such deconstructive reading, I destabilize some of the oppositions currently deployed in the cultural study of software (such as those between writing and code, textuality and materiality, the transcendental and the empirical, the human and the technical, technology and society), and I show how software cannot be fully understood merely as an instrument. Instead, I propose an understanding of software as constitutive of the very concept of the human, which is always already machinic and hence also inhuman. I argue for the importance of thinking software philosophically if we are to think it politically, and ultimately—as ambitious and radical as this conclusion might sound—I suggest that we can only imagine a political future today by thinking it with and through technology, and therefore also with and through software.

When I started my investigation of what were then referred to as new technologies, the currency of the term ‘software’ in media and cultural studies did not yet match that of ‘new technologies’, although its appearance was becoming more and more frequent. A decade later, software has become a fashionable object of study. New academic fields have emerged that concentrate on the cultural and philosophical analysis of software, such as Software Studies, Critical Code Studies, and, at least to a certain extent, Digital Humanities. As Wendy Chun argues in her 2011 book entitled Programmed Visions, software is increasingly considered the focal point of digital media studies because of the perceived need to address the invisibility, ubiquity, and power of digital media.[3] This has led many scholars in the field to wonder whether a global picture of ‘digital media’ as a concept and a discipline is possible at all. In this light, software has been positioned as a ‘minimal characteristic’ of digital media.

This growing interest in software can be observed both in the United States and in Europe. It mobilizes scholars from very different disciplines, including the social sciences, computer science, and the humanities. Since Lev Manovich, in The Language of New Media, called for ‘software studies’ as the direction to be taken by new media studies, a number of books have been published that tackle the topic of software—one can think here of volumes such as Alexander Galloway’s Protocol, Katherine Hayles’s My Mother Was a Computer, Adrian Mackenzie’s Cutting Code, Matthew Fuller’s Software Studies, Wendy Chun’s Programmed Visions, David Berry’s The Philosophy of Software, Florian Cramer’s Anti-Media: Ephemera on Speculative Arts, Nick Montfort and others’ 10 PRINT CHR$(205.5+RND(1)); : GOTO 10, and Lev Manovich’s own recent book entitled Software Takes Command.[4] Conferences have been held—for instance, the Working Groups on Critical Code Studies organized by the Humanities and Critical Code Studies Laboratory (HaCCS Lab) at the University of Southern California, which began in 2010 and involved more than a hundred academics and professionals from a variety of fields. These initiatives, in turn, have also given rise to electronic publications, blogs, and other forms of cultural debate around software.

Overlapping with software and code studies, the digital humanities embrace ‘all those scholarly activities in the humanities that involve writing about digital media and technology as well as being engaged in processes of digital media production, practice and analysis’.[5] In Gary Hall’s broad definition, the digital humanities encompass not just the creation of interactive electronic archives and literature, or the production of online databases, wikis, and virtual galleries and museums, but also the development of new media theory and the investigation of how digital technologies reshape teaching and research. In what some have described as a ‘computational turn’, the humanities increasingly see techniques and methodologies drawn from computer science—image processing, data visualization, network analysis—being used to produce new ways of understanding and approaching humanities texts. And yet, Hall points out that, however significant, the computational turn does not exhaust the theoretical and methodological approaches encompassed by the digital humanities. Recent debates have highlighted the importance of a critical study of software for this field. One can think here of the 2011 issue of the academic journal Culture Machine devoted to ‘The Digital Humanities Beyond Computing’, as well as of Katherine Hayles’s book of 2012 entitled How We Think, in which she positions software and code studies as subfields of the digital humanities.[6]

In sum, the cultural study of software appears crucial today inside and outside the university, at an international level and across disciplines. Yet it is not only humanities scholars or software professionals who are taking an interest in the cultural debate on software. Artists have become interested in software as well. A number of code art pieces have been produced since the digital art festival Ars Electronica chose ‘Code’ as its topic in 2003—for instance, Zach Blas’s ‘Disingenuous Bar’, or Lorenz Tunny’s ‘TransCoder’, presented at the Documents Event #1–UCLA in June 2007. Exhibitions have been organized that interrogate the nature of software, such as the ‘Open Source Embroidery: Craft + Code’ exhibition held in 2008 at the HTTP Gallery in London. While the university and the arts are witnessing the emergence of such a vibrant and diversified new field of investigation, the wider public is also becoming increasingly interested in the cultural discussion of software. This is due, again, to the pervasiveness of software and to the fact that media users are constantly being asked to make decisions about software-based technologies in everyday life (for instance, when having to decide whether to use commercial or free software, or whether to upgrade their computers).

And yet, it seems to me that the meaning of ‘software’ remains not only shifting but also rather unclear. Therefore, throughout the course of Software Theory I keep returning to what I see as a fundamental question that needs to be posed and re-posed in the present cultural context: ‘what is software?’ I argue that this question needs to be dealt with seriously if we want to begin to appreciate the role of technology in contemporary culture and society. In other words, I call for a radical ‘demystification’ of new technologies through a demystification of software. I also maintain that in order to understand new technologies we need first of all to address the mystery that surrounds their functioning and that affects our comprehension of their relationship to the cultural and social realm. As anticipated above, this will ultimately involve a radical rethinking of what we mean by ‘technology’, ‘culture’, and ‘society’.

I am particularly interested in the political significance that our—conscious and unconscious—involvement with technology carries. Therefore, with Software Theory I also seek a way to think new technologies politically. More precisely, I argue that the main political problem with new technologies is that they exhibit—in the words of Bernard Stiegler—a ‘deep opacity’. As Stiegler maintains, ‘we do not immediately understand what is being played out in technics, nor what is being profoundly transformed therein, even though we unceasingly have to make decisions regarding technics, the consequences of which are felt to escape us more and more.’[7] I suggest that, in order to develop political thinking about new technologies, we need to start by tackling their opacity.[8]

However, my aim in Software Theory is to propose quite a different approach to the investigation of software from the ones offered by software and code studies so far. Rather than looking at what people do with software (that is, at the practices of software producers, users, and consumers), or at what software does (that is, at how it works), I want to problematize software as software, by ‘undoing’ the conceptual system on which it relies for its functioning. In fact, I want to show how, to a certain extent, software is always in the process of ‘undoing itself’. To be able to expand on this point, let me start from a brief examination of the place of new technologies, and particularly of software, in today’s academic debate. We have already seen how new technologies are an important focus of academic reflection, particularly in media and cultural studies. With the formulation ‘media and cultural studies’ I mean to highlight that the reflection on new technologies is positioned at the intersection of the academic traditions of cultural studies and media studies. Nevertheless, to think that technology has only recently emerged as a significant issue in media and cultural studies would be a mistake. In fact, technology (in its broadest sense) has been present in media and cultural studies from the start, as its constitutive concept. The intertwining between the concepts of ‘medium’ and ‘technology’ dates back to what some define as the ‘foundational’ debate between Raymond Williams and Marshall McLuhan.[9] In his work, McLuhan was predominantly concerned with the technological nature of the media, while Williams emphasized the fact that technology was always socially and culturally shaped. At the risk of a certain oversimplification, we can say that media and cultural studies has to a large extent been informed by Williams’s side of the argument—and has thus focused its attention on the cultural and social formations surrounding technology, while rejecting the ghost of ‘technological determinism’, and frequently dismissing any overt attention paid to technology itself as ‘McLuhanite’.[10] Yet technology has entered the field of media and cultural studies precisely thanks to McLuhan’s insistence on its role as an agent of change.

It is worth recalling at this point that, from the perspective of media and cultural studies, to study technology ‘culturally’ means to follow the trajectory of a particular ‘technological object’ (generally understood as a technological product), and to explore ‘how it is represented, what social identities are associated with it, how it is produced and consumed, and what mechanisms regulate its distribution and use.’[11] Such an analysis concentrates on ‘meaning’, and on the way in which a technological object is made meaningful. Meaning is understood as not arising from the technological object ‘itself’, but from the way it is represented in the discourses surrounding it. By being brought into meaning, the technological object is constituted as a ‘cultural artefact’.[12] Thus, meaning emerges as intrinsic to the definition of culture deployed by media and cultural studies. This is the case in Williams’s classical definition of culture as a ‘description of a particular way of life’, and of cultural analysis as ‘the clarification of the meanings and values implicit and explicit in particular ways of life’, as well as in a more recent understanding of ‘culture’ as ‘circulation of meanings’ (a formulation that takes into account that diverse, often contested meanings are produced, shared and communicated within different social groups, and that they generally reflect the play of powers in society).[13]

When approaching new technologies, media and cultural studies has therefore predominantly focused on the intertwined processes of production, reception, and consumption, that is, on the discourses and practices of new technologies’ producers and users. Such an approach has been substantially inherited by the emerging fields of software and code studies. From this perspective, even a technological object as ‘mysterious’ as software is addressed by asking how it has been made into a significant cultural object. For instance, in his book of 2006 entitled Cutting Code, Adrian Mackenzie investigates ‘software as a social object and process’ and demonstrates its relevance as a topic of study essentially by examining the social and cultural formations that surround it.[14] Even though one of the commonly declared aims of software and code studies is to address ‘software (or code) itself’—or, in Matthew Fuller’s words, ‘the stuff of software’[15]—it seems to me that these fields constantly alternate between searching for social and cultural meanings in software on the one hand, and offering technical expositions of ‘how software works’ on the other.

This, indeed, is the approach adopted by Manovich’s aforementioned call for software studies in The Language of New Media. Manovich clarifies: ‘to understand the logic of new media we need to turn to computer science.’[16] According to Manovich, since contemporary media have become programmable, computer science can provide media scholars with the terminology and the conceptual framework to make sense of them: ‘[f]rom media studies, we move to something which can be called software studies; from media theory—to software theory.’[17] This ‘turn to software’, which has been positioned by many as the seminal moment of software and code studies, can hardly be seen as uncontroversial. For instance, again following Gary Hall, one could ask: is computer science really that well equipped to understand itself and its own founding object, let alone help media studies (or, for that matter, the humanities) in understanding their own relation with computing?[18]

In fact, in his recent book, Software Takes Command, Manovich acknowledges how this ‘turn to software’ risks positing computer science as an absolute truth and advances the proposition that ‘computer science is itself part of culture’ and that ‘Software Studies has to investigate the role of software in contemporary culture, and the cultural and social forces that are shaping the development of software itself.’[19] Ultimately, Manovich recommends that software studies focus on software as a cultural and social object—that is, as ‘a layer that permeates all areas of contemporary societies’, a ‘meta-medium’ that has replaced most other media technologies that emerged in the nineteenth and twentieth centuries.[20] For Manovich, understanding software is a mandatory condition of being able to understand contemporary techniques of communication, interaction, memory, and control. And yet, to understand software we need to understand its history and the social and cultural conditions that have shaped its development.[21]

This ambivalence—one could even say, this circular movement—between the technical and the social seems to me to characterize the whole field of software and code studies. Individual academic contributions can choose to privilege one term over the other, but ultimately they turn to the technical in order to explain the social, and then revert to the social in order to explain the technical. Representative of a stronger emphasis on the technical aspects of new technologies are studies such as Nick Montfort and Ian Bogost’s 2009 book on the Atari video computer system, entitled Racing the Beam, which has been positioned as the inaugural text of platform studies. Overlapping with software studies, platform studies pays particular attention to the specific hardware and software systems on which ‘expressive computing’ (another name for Manovich’s ‘cultural software’—that is, software applications for video games, digital art, and electronic literature) is based. Montfort and Bogost recommend ‘delving into the code’ and giving ‘serious and in-depth consideration [to] circuits, chips, peripherals and how they are integrated and used’ in order to make sense of new media.[22] Also intent on the functional unpacking of technology is the classical work of Alexander Galloway, Protocol (2004), which offers a welcome counterargument to the widespread discursive use of digital networks as metaphors of horizontal connectivity and intrinsic political empowerment. By unpacking the hierarchical workings of ‘real’ networks, which are functionally based on layers of software that ‘wrap up’ the contents of communication, Galloway also unpacks and critiques the rhetoric of freedom that surrounds digital networks. For instance, he demonstrates how protocols incorporate (‘embed’) censorship and other mechanisms of distributed control, and how these technical characteristics embody a logic of governmentality which he ultimately associates with the military origins of the Internet. ‘Distributed networks’, Galloway states, ‘are native to Deleuze’s control society.’[23] In sum, for Galloway, discovering the control mechanisms which found the technical functioning of networks is a way to complicate the political discourse on networks inside and outside the academy. Although Galloway’s approach remains very important and politically meaningful for the cultural study of software, it also runs the risk of positioning the technical as the ultimate truth, as showing what ‘truly’ lies behind any discourse on technology (celebratory or otherwise). In the end, this approach rests on a strategy that draws on technical explanation as a way to reach a ‘deeper’ understanding of technology—an understanding supposedly located ‘behind’ or ‘beyond’ the social.
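To give a concrete sense of what this ‘wrapping up’ means at the most elementary level, consider the following sketch, written in Python purely for convenience. It is a toy illustration rather than a model of any real protocol stack: the layer names and the bracketed ‘headers’ are invented, but the nesting logic is the one Galloway describes, in which each layer encloses whatever it receives from the layer above without inspecting it.

```python
# A toy illustration of protocol layering ('wrapping up'). Each layer
# prepends its own header to the message it receives, treating the rest
# as an opaque payload. The header format here is invented for clarity
# and does not correspond to any real protocol.
def wrap(layer_name: str, payload: str) -> str:
    return f"[{layer_name}]{payload}"

message = "hello"                     # the 'content' of communication
segment = wrap("transport", message)  # a TCP-like layer wraps it
packet = wrap("network", segment)     # an IP-like layer wraps that
frame = wrap("link", packet)          # a link-layer wrapper around everything

print(frame)  # [link][network][transport]hello
```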

This ‘quest for depth’ in the analysis of software, which positions software as the truth of new technologies while also intertwining software with issues of ‘power’ (or ‘biopower’), ‘control’ and ‘governmentality’, is also present in more socially orientated studies of software, such as David Berry’s 2011 book, The Philosophy of Software. For Berry, in order to understand the way in which one experiences software today and to develop a ‘phenomenology of software’, software studies need to ‘unpack’ software, to understand what it ‘really does’, to reach a technical grasp of it.[24] Yet, again, this process of unpacking leads to the analysis of the practices of production, usage, and consumption that shape and are shaped by software. Importantly, according to Berry, such a deeper understanding of software—which, for instance, unmasks the decisions made by big corporations such as Google with regard to their software—is ultimately an emancipatory practice, because it leads individuals to better-informed political decisions. An analogous emancipatory aim characterizes Robert Kitchin and Martin Dodge’s book, Code/Space (2011), which investigates the way in which software shapes social space by enabling ‘forms of automation, the monitoring and controlling of systems from a distance, the reconfiguring and rejuvenation of established industries, the development of new forms of labor practices and paid work, the reorganization and recombination of social and economic formations at different scales’ and many other innovations.[25] Kitchin and Dodge develop the idea of ‘automated management’ in order to explore how software enables a whole new range of movements and transactions across space while also automatically and systematically capturing and storing data on these transactions, thus bringing about new opportunities for personal and collective empowerment which are also new possibilities for regulation and control.

The intertwining between digital technologies and power is also at the core of the articulated analysis of software offered by Wendy Chun in her recent book, Programmed Visions (2011). Chun argues that computers, and particularly software, ‘embody a certain logic of governing or steering through the increasingly complex world around us’, which she calls ‘the logic of programmability’.[26] Programmability gives users a sense of empowerment by ultimately incorporating them into the logic of a machine that they do not fully understand but feel somewhat in control of. In other words, we feel empowered by software because we feel in control of something we do not have a grasp of. In fact, Chun argues that our fascination with software resides precisely in the fact that we do not understand it. Instead, we view it as something hidden, obscure, difficult to pin down, and generally invisible, which nevertheless generates visible effects in the world (for instance, lights flickering on a screen or the functioning of telecommunication networks). Rather than aiming at dispelling the mysteriousness of software, she analyzes how software functions as a ‘metaphor of metaphors’ in a number of discourses and fields of knowledge precisely because of its relative obscurity. She writes: ‘[Software’s] combination of what can be seen and not seen, can be known and not known—its separation of interface from algorithm; software from hardware—makes it a powerful metaphor for everything we believe is invisible yet generates visible effects, from genetics to the invisible hand of the market; from ideology to culture.’[27] Although critical of those approaches of software studies that view knowing software as ‘a form of enlightenment’, in her own analysis Chun still combines historical-cultural narratives (for instance, the history of programming languages) with technical explanations (for instance, the exposition of how digital circuits work) in order to complicate her cultural account of technology and particularly of software.

This political engagement with the analysis of software has been taken further by a number of studies on software which draw on neomaterialism, media archaeology, and object-oriented philosophy.[28] In their quest to think software as a material entity or process which spreads into economics, politics, and—again—the whole logic of control society as an ‘immanent’ force understood in a Simondonian and Deleuzian sense, these studies tend to ‘ontologize’ software as the condition of possibility of contemporary life, and to privilege software studies as a master discourse capable of making visible the foundational but otherwise invisible, hidden, embedded, off-shored, or forgotten nature of software and code. And yet, one could ask: To what extent can software and code be privileged in this respect? On what basis can they be said to constitute the conditions for revealing the truth of human life or society?[29]

So, to recap, not only is the number of existing studies of software still relatively limited, but these studies also give an account of software that is based on the analysis of the processes of software production, reception, and consumption combined with the technical exposition of how software ‘really’ works (either as a technical artefact or as the all-pervasive logic of control societies). Although I recognize that the above perspective remains absolutely relevant to the political and cultural study of technology, I suggest that this approach should be supplemented by an alternative, or, I would even hesitantly say, more ‘theoretical’, and yet more ‘direct’, investigation of software—although I will raise questions for both these notions of ‘theory’ and ‘directness’ later on. As I have suggested above, in order to understand the role that new technologies play in our lives and the world as a whole we do need to shift the focus of analysis from the practices and discourses concerning them to a thorough investigation of how new technologies work, and, in particular, of how software works and of what it does. And yet, in Software Theory I propose an investigation of software that takes historical-cultural and technical narratives as a starting point, rather than as an end point, and I suggest that these narratives should be problematized rather than treated as explanatory. Such an investigation of software will help me to problematize the intertwining of the technical and the social aspects of software which currently preoccupies Software Studies. At the same time, I will refrain from making any sweeping ontological claims about what software is and will instead engage in a critical examination of the alleged relations between ‘software’/’code’ on the one hand and ‘ontology’ and ‘materiality’ on the other hand, as conceptualized by cultural theorists of computation such as Katherine Hayles. This is why earlier on I suggested that the title of this book, Software Theory, hints at a different engagement with theory from what Lev Manovich had in mind.[30] Let me now explain how such an investigation of software can be undertaken.

By arguing for the importance of such an approach, I do not mean that a ‘direct observation’ of software is possible. I am well aware that any relationship we can entertain with software is always mediated, and that software might well be ‘unobservable’. In fact, I intend to take away all the implications of ‘directness’ that the concept of ‘demystifying’ or ‘engaging with’ software may bring with it. I am particularly aware that software has never been univocally defined by any disciplinary field (including technical ones) and that it takes different forms in different contexts. For instance, a computer program written in a programming language and printed on a piece of paper is software. When such a program is executed by a computer, it is no longer visible, although it might remain accessible through changes in the status of the machine (such as the blinking of lights or the flowing of characters on a screen)—and it is still defined as software. Chun has shown how one particular form of software, the source code (that is, computer programs written in high-level programming languages), became the most representative form of software in the 1970s in order to make software copyrightable, and thus profitable, in the context of the software industry—a process that she names the ‘fetishization’ of source code, or the emergence of software as a commodity and a ‘thing’.[31] Different software studies scholars rely on different definitions of software, from the broadest possible ones to more restricted notions of ‘cultural’ or ‘expressive’ software.[32] This obvious difficulty in finding a point of departure when studying software—a difficulty shared by computer science—hints not just at the elusiveness and ‘opacity’ of software but most importantly at the mediated nature of our access to it.
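This multiplicity of forms can be illustrated with a deliberately trivial example, given here in Python purely for convenience (the source string is invented for the purpose). One and the same program can be encountered as readable source text, the kind of text that could be printed on paper, and as the bytecode that the interpreter actually executes; the sketch below displays both forms.

```python
import dis

# One piece of software in two of its forms: as source text (a string
# that a human can read and print) and as the bytecode that the Python
# runtime actually executes.
source = "total = sum(n * n for n in range(10))"

code_object = compile(source, "<example>", "exec")  # source text -> executable form
dis.dis(code_object)                                # display the bytecode
```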

In Software Theory I investigate different forms taken by software, from so-called specifications to source code and run-time code. However, I want to start from a rather widely accepted definition of software as the totality of all computer programs as well as all the written texts related to computer programs. This definition constitutes the conceptual foundation of software engineering, a technical discipline born in the late 1960s to help programmers design software cost-effectively. Software engineering describes software development as an advanced writing technique that translates a text or a group of texts written in natural languages (namely, the requirements specifications of the software ‘system’) into a binary text or group of texts (the executable computer programs), through a step-by-step process of gradual refinement. As Ian Sommerville explains, ‘software engineers model parts of the real world in software. These models are large, abstract and complex so they must be made visible in documents such as system designs, user manuals, and so on. Producing these documents is as much part of the software engineering process as programming.’[33]
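The following sketch, written in Python and entirely hypothetical (the requirement, the function, and the data are all invented for illustration), gives a minimal sense of what such a stepwise translation from natural-language text to executable text might look like.

```python
from datetime import date

# Requirement (natural language): 'The system shall report the unique
# customer IDs that placed at least one order in a given month.'
#
# Design refinement (still prose-like): filter the orders by year and
# month, collect the customer IDs, drop duplicates, return them sorted.
#
# Final refinement (executable text):
def customers_for_month(orders, year, month):
    """Unique customer IDs with at least one order in the given month."""
    ids = {customer_id
           for customer_id, placed_on in orders
           if placed_on.year == year and placed_on.month == month}
    return sorted(ids)

# A small check that the executable text still 'answers' the requirement.
orders = [(1, date(2024, 5, 2)), (2, date(2024, 5, 9)), (1, date(2024, 6, 1))]
assert customers_for_month(orders, 2024, 5) == [1, 2]
```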

The formulation just quoted shows that ‘software’ does not only mean ‘computer programs’. A comprehensive definition of software also includes the whole of the technical literature related to computer programs, including methodological studies on how to design computer programs—that is, including software engineering literature itself. The essential move that such an inclusive definition allows me to make consists in transforming the problem of engaging with software into the problem of reading it. In Software Theory I will therefore ask to what extent and in what way software can be described as legible. Moreover, since software engineering is concerned with the methodologies for writing software, I will also ask to what extent and in what way software can actually be seen as a form of writing. Such a reformulation will enable me to take the textual nature of software seriously. In this context, concepts such as ‘reading’, ‘writing’, ‘document’, and ‘text’ are no mere metaphors. Rather, they are software engineering’s privileged mode of dealing with software as a technical object. It could even be argued—as I shall argue throughout this book—that in the discipline of software engineering, software’s technicity is dealt with as a form of writing.

As a first step, it is important to notice that, in order to investigate software’s readability and to attempt to read it, the idea of reading itself needs to be interrogated. In fact, if we accept that software presents itself as a distinctive form of writing, we need to be aware that it consequently invites a distinctive form of reading. But to read software as conforming to the strategies it enforces upon its reader would mean to read it the way a computer professional would, that is, in order to make it function as software. I argue that reading software on its own terms is not equal to reading it functionally. For this reason, I develop a strategy for reading software by drawing on Jacques Derrida’s concept of ‘deconstruction’. However controversial and uncertain a definition of ‘deconstruction’ might be, I am essentially taking it up here as a way of stepping outside a conceptual system while simultaneously continuing to use its concepts and demonstrating their limitations.[34] ‘Deconstruction’ in this sense aims at ‘undoing, decomposing, desedimenting’ a conceptual system, not in order to destroy it but in order to understand how it has been constituted.[35] According to Derrida, in every conceptual system we can detect a concept that is actually unthinkable within the conceptual structure of the system itself—therefore, it has to be excluded by the system, or, rather, it must remain unthought to allow the system to exist. A deconstructive reading of software therefore asks: what is it that has to remain unthought within the conceptual structure of software?[36] In Derrida’s words, such a reading looks for a point of ‘opacity’, for a concept that escapes the foundations of the system in which it is nevertheless located and for which it remains unthinkable. It looks for a point where the conceptual system that constitutes software ‘undoes itself’.[37] For this reason, a deconstructive reading of software is the opposite of a functional reading. For a computer professional, the point where the system ‘undoes itself’ is a malfunction, something that needs to be fixed. From the perspective of deconstruction, in turn, it is a point of revelation, one in which the conceptual system underlying the software is clarified. Actually, I want to suggest that Derrida’s point of ‘opacity’ is also simultaneously the locus where Stiegler’s ‘opacity’ disappears, that is, where technology allows us to see how it has been constituted. Being able to put into question at a fundamental level the premises on which a given conception of technology rests would prove particularly important when making decisions about it, and would expand our capacity for thinking and using technology politically, not just instrumentally.[38]

Let me consider briefly some of the consequences that this examination of software might have for the way in which media and cultural studies deals with new technologies. We have already seen that the issue of technology has been present in media and cultural studies from the very beginning, and that the debate around technology has contributed to defining the methodological orientation of the field. For this reason, it is quite understandable that rethinking technology would entail a rethinking of media and cultural studies’ distinctive features and boundaries. A deconstructive reading of software will enable us to do more than just uncover the conceptual presuppositions that preside over the constitution of software itself. In fact, such an investigation will have a much larger influence on our way of conceptualizing what counts as ‘academic knowledge’. To understand this point better, one must be reminded that new technologies not only change the form of academic knowledge, through new practices of scholarly communication and publication, and shift its focus, so that the study of new technologies has eventually become a ‘legitimate’ area of academic research; as Gary Hall points out, new technologies also change the very nature and content of academic knowledge.[39] In a famous passage, Derrida wondered about the influence of specific technologies of communication (such as print media and postal services) on the field of psychoanalysis by asking ‘what if Freud had had e-mail?’[40] If we acknowledge that available technology has a formative influence on the construction of knowledge, then a reflection on new technologies implies a reflection on the nature of academic knowledge itself. But, as Hall maintains, paradoxically ‘we cannot rely merely on the modern “disciplinary” methods and frameworks of knowledge in order to think and interpret the transformative effect new technology is having on our culture, since it is precisely these methods and frameworks that new technology requires us to rethink.’[41] According to Hall, cultural studies is the ideal starting point for a study of new technologies, precisely because of its open and unfixed identity as a field. A critical attitude towards the concept of disciplinarity has characterized cultural studies from the start. Such a critical attitude informs cultural studies’ own disciplinarity, its own academic institutionalization.[42] Yet Hall argues that cultural studies has not always been up to such self-critique, since very often it has limited itself to an ‘interdisciplinarity’ understood only as the incorporation of heterogeneous elements from various disciplines—what has been called the ‘pick’n’mix’ approach of cultural studies—but not as a thorough questioning of the structure of disciplinarity itself. He therefore suggests that cultural studies should pursue a deeper self-reflexivity, in order to keep its own disciplinarity and commitment open. This self-reflexivity would be enabled by the establishment of a productive relationship between cultural studies and deconstruction.[43] The latter is understood here, first of all, as a problematizing reading that would permanently question some of the fundamental premises of cultural studies itself.
Thus, cultural studies would remain acutely aware of the influence that the university, as a political and institutional structure, exercises on the production of knowledge (namely, by constituting and regulating the competences and practices of cultural studies practitioners). It is precisely in this awareness, according to Hall, that the political significance of cultural studies resides. Given that media and cultural studies is a field which is particularly attentive to the influences of the academic institution on knowledge production, and considering the central role played by technology in the constitution of media and cultural studies, as well as its potential to change the whole framework of this (already self-reflexive) disciplinary field, I want to argue here that a rethinking of technology based upon a deconstructive reading of software needs to entail a reflection on the theoretical premises of the methods and frameworks of academic knowledge.

To conclude, in Software Theory I propose a reconceptualization of new technologies, that is of technologies based on software, through a close, even intimate, engagement with software itself, rather than through an analysis of how new technologies are produced, consumed, represented, and talked about. To what extent and in what way such intimacy can be achieved and how software can be made available for examination are the main questions addressed in this book. Taking into account possible difficulties resulting from the working of mediation in our engagement with technology as well as technology’s opacity and its constitutive, if unacknowledged, role in the formation of both the human and academic knowledge, I want to argue via close readings of selected software practices, inscriptions, and events that such an engagement is essential if we are to take new technologies seriously, and to think them in a way that affects—and that does not separate—cultural understanding and political practice.

In Chapter 1, I suggest that the problem of digital technologies, and of the kind of knowledge that can be produced about them, cannot be addressed without radically reconsidering what we mean by ‘knowledge’ in relation to ‘technology’ in a broader sense. I argue that, as a preliminary step, the received concepts of technology—that is, the ways in which technology has been understood primarily by the Western philosophical tradition—need to be put into question. I outline two traditions of philosophical thought about technology: the dominant Aristotelian conception of technology as an instrument, and an alternative line of thought based on the idea of the ‘originary technicity’ of the human being. Subsequently, I draw on the work of thinkers belonging to the latter tradition (Martin Heidegger, Bernard Stiegler, André Leroi-Gourhan, Jacques Derrida) to develop a different way of thinking about technology, one that will ultimately prove more productive for my investigation of software.

I argue for a radical rethinking of the conceptual framework of instrumentality if an understanding of technology, and specifically of software, is to be made possible. A pivotal role is played here by the idea of linearization, which also belongs to the tradition of originary technicity: it was developed by Leroi-Gourhan and subsequently reread by Derrida in Of Grammatology in the context of his own reflections on writing.[44] By becoming a means for the phonetic recording of speech, writing became a technology—it was placed at the level of a tool, or of ‘technology’ in its instrumental sense. Derrida relates the emergence of phonetic writing to a linear understanding of time and history, thus emphasizing the relationship between technology, language, and time. Ultimately, I draw on Stiegler’s thought on technology and on his own rereading of Derrida’s work in order to call for a concrete analysis of historically specific technologies—including software—while keeping open the significance of such an analysis for a radical rethinking of the relationship between technology and the human.[45]

To what extent and in what way such an investigation of software can be pursued is the focus of Chapter 2, which deals with the concept of ‘writing’ in relation to ‘language’ and ‘code’ more closely and examines the usefulness of these concepts for the understanding of software. I draw mainly on Hayles’s understanding of the ‘regime of code’ as opposed to the regimes ‘of speech’ and ‘of writing’, and on her suggestion that writing and code are intertwined in software. Nevertheless, I question her assumption that software, as a technical object, is ‘beyond metaphysics’, and I propose a different reading of Derrida that views his notion of ‘writing’, as well as of deconstruction, as a promising theory of technology capable of inspiring innovative strategies for the analysis of software. I argue that, since his earliest works (in particular, Of Grammatology), Derrida has defined writing as a material practice. For Derrida, we need to have an understanding of writing in order to grasp the meaning of orality—not because writing historically existed before language, but because we must have a sense of the permanence of a linguistic mark in order to recognize it. Ultimately, a sense of writing is necessary for signification to take place. In other words, language itself is material; it needs materiality (or rather, it needs the possibility of an ‘inscription’) to function as language. Software itself can function only through materiality, because software is (also) code, and materiality is what constitutes signs (and therefore codes). But if every bit of code is material, and if the material structure of the mark is at work everywhere, how are we supposed to study software as a historically specific technology? I argue that the historical specificity of software resides in its constitution through the continuous undoing and redoing of the boundaries between ‘software’ itself, ‘writing’, and ‘code’. To make this argument, I take a critical approach to Stiegler’s distinction between ‘technics’ and ‘mnemotechnics’, which becomes untenable when applied to software. Ultimately, I argue that software remains to some extent implicated with the instrumental understanding of technology while at the same time being able to escape it by bringing forth unexpected consequences.

In Chapter 3, I show how, in the late 1960s, ‘software’ emerged precisely as a specific construct in relation to ‘writing’ and ‘code’ in the discourses and practices of software engineering. I explain how the discipline of software engineering arose as a strategy for the industrialization of the production of software, through an attempt to control the constitutive fallibility of software-based technology. Such fallibility—that is, the unexpected consequences inherent in software—is dealt with through the organization and linearization of the time of software development. Software engineering also understands software as the ‘solution’ to preexistent ‘problems’ or ‘needs’ present in society, therefore advancing an instrumental understanding of software. However, both the linearization of time and the understanding of software as a tool are continuously undone by the unexpected consequences brought about by software—which must consequently be excluded and controlled in order for software to reach a point of stability. At the same time, such unexpected consequences remain necessary to software’s development.

In Chapter 4, I turn to the evolution of software engineering in the 1970s and 1980s into a discipline for the management of time in software development. I start by examining two of the fundamental texts on the management of time in software development, written by Frederick Brooks in the mid-1970s and the mid-1980s respectively, in order to show how the mutual co-constitution of ‘software’, ‘writing’, and ‘code’ develops in these texts.[46] The sequencing of time that software engineering proposed in the 1970s and 1980s, which was justified through an Aristotelian distinction between the essential and the accidental (the ideal and the material), ultimately gave rise to an unexpected reorganization of technology according to the open source style of programming in the 1990s. The framework of instrumentality that, as I show in Chapter 3, emerged from the ‘software crisis’ of the late 1960s—that is, from the need to control the excessive speed of software growth—was enforced on software production through the 1970s and 1980s, and it enabled the development of software during these decades as well as the unexpected emergence of open source. I investigate a classic of software engineering in the ‘open source era’, Eric Steven Raymond’s ‘The Cathedral and the Bazaar’, which responds to Brooks’s theories from the point of view of the mid-1990s. I show how open source still takes software engineering as its model—a model in which the framework of instrumentality is once again reenacted.[47] The aim of open source is still to obtain ‘usable’ software, but ‘usability’—that is, a stable version of a software system—is not something that can be scheduled in time. Rather, it is something that happens to the system while it is constantly reinscribed. In sum, in Chapters 3 and 4, I argue that software engineering is characterized by the continuous opening up and reaffirmation of the instrumentality of software and of technology. Furthermore, in Chapter 4, my examination of the political consequences of Linus Torvalds’s technical decisions when developing Linux allows me to show in what sense politics and technology could and should be thought together.

Finally, in Chapter 5, I leave software engineering and turn to the examination of a number of classical works on programming languages and compilers in order to show how software is constituted as a process of unstable linearization.[48] I explain how software is understood by using concepts derived from structural linguistics—namely language, grammars, and the alphabet—and I discuss the concepts of formal notation, formal grammar, programming language, and compiler. I specifically concentrate on the process of ‘compiling’—that is, the process through which a program is made executable through its ‘translation’ from a high-level programming language into machine code. I show how, in the theory of programming languages, the instrumentalization of technology and the linearization of programming languages go hand in hand, and how software exceeds both. In fact, software is defined as a process of substitution—or reinscription—of symbols, which always involves iteration. The theory of programming languages is an attempt to manage iteration through linearization—an attempt ultimately destined to fail, since iteration is always open to variation. Actually, iteration is a constitutive characteristic of software, just as fallibility, and the capacity for generating unforeseen consequences, is constitutive of technology. Every time technology brings about unexpected consequences, it is fundamental to decide whether this is a malfunction that needs to be fixed or an acceptable variation that can be integrated into the technological system, or even an unforeseen anomaly that will radically change the technological system for ever. Ultimately such decisions cannot be made purely on technical grounds—they are, instead, of a political nature.
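As a purely illustrative sketch of this idea of translation-as-substitution, consider the following toy ‘compiler’, written in Python for convenience. Everything in it is invented for the example, and no real compiler is this simple; but it shows the basic movement the chapter analyzes, in which one notation is rewritten, step by step, into another that a machine can execute.

```python
# A toy 'compiler': it rewrites an arithmetic expression, given as a
# nested tuple, into instructions for an imaginary stack machine; a toy
# 'machine' then executes them. The translation is literally a process
# of substituting one system of symbols for another.
def compile_expr(expr):
    if isinstance(expr, int):
        return [("PUSH", expr)]  # a number becomes a PUSH instruction
    op, left, right = expr
    return compile_expr(left) + compile_expr(right) + [(op, None)]

def run(program):
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

source = ("MUL", ("ADD", 2, 3), 4)  # the 'high-level' text: (2 + 3) * 4
program = compile_expr(source)      # translation into machine-like instructions
assert run(program) == 20
```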

Taking my cue from the analysis of a number of media examples, in the Conclusions of Software Theory I explain how obscuring the incalculability of technology leads to setting up an opposition between risk and control and ultimately to the authoritarian resolution of every dilemma regarding technology. Once again I emphasize that technology (as well as the conceptual system on which it is based) can only be problematized from within, and how such a problematization needs to be a creative and politically meaningful process. Even more importantly, technology—as well as software—needs to be studied in its singularity. In particular, the opacity of software (which is the problem we started from) cannot be dispelled merely through an analysis of what software ‘really is’—for instance, by saying that software is ‘really’ just hardware. Rather, one must acknowledge that software is always both conceptualized according to a metaphysical framework and capable of escaping it, that it is both instrumental and capable of generating unforeseen consequences, that it is both a risk and an opportunity (or, in Derridean terms, a pharmakon). If the unexpected is always implicit in technology, and the potential of technology for generating the unexpected needs to be unleashed in order for technology to function, every choice we make with regard to technology always implies an assumption of responsibility for the unforeseeable. Technology will never be calculable—and yet decisions must be made. The only way to make politically informed decisions about technology is not to obscure such incalculability. By opening up new possibilities and foreclosing others, our decisions about technology also affect our future. Thus, thinking politics with technology becomes part of the process of the reinvention of the political in our technicized and globalized world. Rethinking technology is a form of imagining our political future.

Notes

1.

GSM was originally a European project. In 1982, the European Conference of Postal and Telecommunications Administrations (CEPT) instituted the Groupe Spécial Mobile (GSM) to develop a standard for a mobile telephone system that could be used across Europe. In 1989, the responsibility for GSM was transferred to the European Telecommunications Standards Institute (ETSI). GSM was resignified as the English acronym for Global System for Mobile Communications, and Phase One of the GSM specifications was published in 1990. The world’s first public GSM call was made on 1 July 1991 in a city park in Helsinki, Finland, an event which is now considered the birthday of second-generation mobile telephony—the first generation of mobile telephony to be completely digital. In the early 1990s, Phase Two of GSM was designed and launched, and GSM rapidly became the worldwide standard for digital mobile telephony. Two more generations of mobile telecommunications followed: 3G (which was standardized by the ITU and reached widespread coverage in the mid-2000s) and the current 4G, which provides enhanced mobile data services, including ultra-broadband Internet access, to a number of devices such as smartphones, laptops, and tablets. See, for instance, Juha Korhonen, Introduction to 4G Mobile Communications (Boston: Artech House, 2014).

2.

TCAP is a 2G digital signaling protocol that enables communication between different parts of digital networks, such as telephone switching centres and databases. SMS is a communication service standardized as part of the GSM network (the first definition of SMS is to be found in the GSM standards as early as 1985), which allows for the exchange of short text messages between mobile telephones. The SMS service was developed and commercialized in the early 1990s. Today SMS text messaging is one of the most widely used data applications in the world.

3.

Wendy Hui Kyong Chun, Programmed Visions: Software and Memory (Cambridge, MA and London: MIT Press, 2011), 1.

4.

Lev Manovich, The Language of New Media (Cambridge, MA and London: MIT Press, 2001); Alexander Galloway, Protocol: How Control Exists after Decentralization (Cambridge, MA: MIT Press, 2004); N. Katherine Hayles, My Mother Was a Computer: Digital Subjects and Literary Texts (Chicago: University of Chicago Press, 2005); Adrian Mackenzie, Cutting Code: Software and Sociality (New York and Oxford: Peter Lang, 2006); Matthew Fuller, ed., Software Studies: A Lexicon (Cambridge, MA and London: MIT Press, 2008); Chun, Programmed Visions; David M. Berry, The Philosophy of Software: Code and Mediation in the Digital Age (Basingstoke: Palgrave Macmillan, 2011); Florian Cramer, Anti-Media: Ephemera on Speculative Arts (Rotterdam: NAi Publishers and Institute of Network Cultures, 2013); Nick Montfort, Patsy Baudoin, John Bell, Ian Bogost, Jeremy Douglass, Mark C. Marino, Michael Mateas, Casey Reas, Mark Sample, and Noah Vawter, 10 PRINT CHR$(205.5+RND(1)); : GOTO 10 (Cambridge, MA: MIT Press, 2013); Lev Manovich, Software Takes Command: Extending the Language of New Media (New York and London: Bloomsbury, 2013).

5.

Gary Hall, “The Digital Humanities beyond Computing: A Postscript,” Culture Machine 12 (2011), www.culturemachine.net/index.php/cm/article/download/441/471.

6.

“The Digital Humanities beyond Computing,” [Special Issue], ed. Federica Frabetti, Culture Machine 12 (2011); N. Katherine Hayles, How We Think: Digital Media and Contemporary Technogenesis (Chicago: University of Chicago Press, 2012).

7.

Bernard Stiegler, Technics and Time, 1: The Fault of Epimetheus (Stanford, CA: Stanford University Press, 1998), 21.

8.

Alfred Gell develops an interesting reflection on the relations between art, technology, and magic (Alfred Gell, “The Technology of Enchantment and the Enchantment of Technology,” in Anthropology, Art, and Aesthetics, ed. Jeremy Coote and Antony Shelton [Oxford: Clarendon Press, 1992], 40–63). Drawing on the example of Trobriand canoe-boards, Gell argues that the foundation of art is a technically achieved level of excellence that a society misrepresents to itself as a product of magic. Gell views art as a special form of technology and art objects as devices to obtain social consensus. For Gell ‘the power of art objects stems from the technical processes they objectively embody: the technology of enchantment is founded on the enchantment of technology’ (Gell, “The Technology of Enchantment,” 44). The magical prowess, which is supposed to have entered the making of the art object, depends on the level of cultural understanding that surrounds it. The same can be said of technology: ‘the enchantment of technology is the power that technical processes have of casting a spell over us so that we see the real world in an enchanted form’ (Gell, “The Technology of Enchantment,” 44). This is what Gell names the ‘halo effect of technical difficulty’ (Gell, “The Technology of Enchantment,” 48; Christopher Pinney and Nicholas Thomas, eds., Beyond Aesthetics: Art and the Technologies of Enchantment, [Oxford and New York: Berg, 2001]). He argues that art objects are made valuable precisely by virtue of the ‘intellectual resistance’ they offer to the viewer (Gell, “The Technology of Enchantment,” 49) and that ‘technical virtuosity is intrinsic to the efficacy of works of art in their social context’ because it creates an asymmetry in the relationship between the artist and the spectator (48–52). Gell’s reflection contributes to showing the cultural character of the sense of enchantment that surrounds technology. Furthermore, by suggesting that such a sense of enchantment has a role in the creation of social consensus, it provides evidence for the fact that any attempt to change the processes through which we understand technology has significant political consequences.

9.

Martin Lister, Jon Dovey, Seth Giddings, Iain Grant, and Kieran Kelly, New Media: A Critical Introduction (London and New York: Routledge, 2003).

10.

Lister et al., New Media, 73.

11.

Paul du Gay, Stuart Hall, Linda Janes, Hugh Mackay, and Keith Negus, Doing Cultural Studies: The Story of the Sony Walkman (London: Sage/The Open University, 1997), 3.

12.

du Gay et al., Doing Cultural Studies, 10.

13.

Raymond Williams, The Long Revolution (Harmondsworth: Penguin, 1961), 57. See also Stuart Hall, ed., Representation: Cultural Representations and Signifying Practices (London: Sage/The Open University, 1997); du Gay et al., Doing Cultural Studies.

14.

Mackenzie, Cutting Code, 1.

15.

Fuller, ed., Software Studies, 1.

16.

Manovich, The Language of New Media, 48.

17.

Manovich, The Language of New Media, 48.

18.

G. Hall, “The Digital Humanities beyond Computing,” 3.

19.

Manovich, Software Takes Command, 10.

20.

Manovich, Software Takes Command, 16.

21.

Manovich, Software Takes Command, 4.

22.

Nick Montfort and Ian Bogost, Racing the Beam: The Atari Video Computer System (Cambridge, MA and London: MIT Press, 2009), 2.

23.

Galloway, Protocol, 11.

24.

Berry, The Philosophy of Software, 9.

25.

Rob Kitchin and Martin Dodge, Code/Space: Software and Everyday Life (Cambridge, MA and London: MIT Press, 2011), 9.

26.

Chun, Programmed Visions, 9.

27.

Chun, Programmed Visions, 17.

28.

Matthew Fuller and Tony D. Sampson, eds., The Spam Book: On Viruses, Porn, and Other Anomalies from the Dark Side of Digital Culture (Cresskill, NJ: Hampton Press, 2009); Matthew Fuller and Andrew Goffey, eds., Evil Media (Cambridge, MA and London: MIT Press, 2012); Jussi Parikka, “New Materialism as Media Theory: Medianatures and Dirty Matter,” Communication and Critical/Cultural Studies 9, no. 1 (2012): 95–100.

29.

I would like to thank Gary Hall for pointing this out to me.

30.

Manovich, The Language of New Media, 48.

31.

Chun, Programmed Visions, 20.

32.

Cf. Berry, The Philosophy of Software; Montfort and Bogost, Racing the Beam; Manovich, Software Takes Command.

33.

Ian Sommerville, Software Engineering (Boston: Addison-Wesley, 2011), 4.

34.

Cf. Jacques Derrida, Writing and Difference (London: Routledge, 1980).

35.

In “Structure, Sign, and Play in the Discourse of the Human Sciences,” while reminding us that his concept of ‘deconstruction’ was developed in dialogue with structuralist thought, Derrida speaks of ‘structure’ rather than of conceptual systems or systems of thought (Derrida, Writing and Difference). However, ‘structure’ hints at a formation as complex as, for instance, the ensemble of concepts underlying the social sciences, or even the whole of Western philosophy. See also Jacques Derrida, “Letter to a Japanese Friend,” in Derrida and Différance, ed. Robert Bernasconi and David Wood (Warwick: Parousia Press, 1985), 1–5.

36.

I am making an assumption here—namely that software is a conceptual system as much as it is a form of writing and a material object. In fact, the investigation of these multiple modes of existence of software is precisely what is at stake in my book. In the context of the present introduction, and for the sake of clarity, I am concentrating on the effects of a deconstructive reading of a ‘structure’ understood in quite an abstract sense.

37.

According to Derrida, deconstruction is not a methodology, in the sense that it is not a set of immutable rules that can be applied to any object of analysis—because the very concepts of ‘rule’, ‘object’, and ‘subject’ of analysis themselves belong to a conceptual system (broadly speaking, to the Western tradition of thought) and are therefore subject to deconstruction too. ‘Deconstruction’ is thus something that ‘happens’ within a conceptual system, rather than a methodology applied to it from the outside. It can be said that any conceptual system is always in deconstruction, because it unavoidably reaches a point where it unties or disassembles its own presuppositions. On the other hand, since it is perfectly possible to remain oblivious to this permanent occurrence of deconstruction, there is a need for us to actively ‘perform’ it, that is, to make its permanent occurrence visible. In this sense deconstruction is also a productive, creative process.

38.

This methodology for reading software sets itself apart from the general call for ‘reading code’ advanced by scholars in the field of critical code studies, such as the analyses of source code proposed by Mark C. Marino. Critical code studies (CCS) ‘emerged in 2006 as a set of methodologies that sought to apply humanities-style hermeneutics to the interpretation of the extrafunctional significance of computer source code. . . . The goal of the study is to examine the digitally born artifact through the entry point of the code and to engage the code in an intensive close reading following the models of media archaeology, semiotic analysis, and cultural studies’ (Mark C. Marino, “Reading Exquisite Code: Critical Code Studies of Literature,” in Comparative Textual Media: Transforming the Humanities in the Postprint Era, ed. N. Katherine Hayles and Jessica Pressman [Minneapolis and London: University of Minnesota Press, 2013], 283). For instance, in his article “Disrupting Heteronormative Codes: When Cylons in Slash Goggles Ogle AnnaKournikova,” Marino interprets a mouse click as the inscription of normative heterosexuality in the source code of a computer virus (Mark C. Marino, “Disrupting Heteronormative Codes: When Cylons in Slash Goggles Ogle AnnaKournikova,” in Digital Arts and Culture Proceedings [University of California Irvine, 2009], http://escholarship.org/uc/item/09q9m0kn). In Software Theory, I propose a more radical problematization of code’s own presuppositions—that is, of those presuppositions that make code work as code.

39.

Gary Hall, Culture in Bits: The Monstrous Future of Theory (London and New York: Continuum, 2002); Gary Hall, Digitize This Book! The Politics of New Media, or Why We Need Open Access Now (Minneapolis: University of Minnesota Press, 2008).

40.

Jacques Derrida, Archive Fever: A Freudian Impression (Chicago: University of Chicago Press, 1996).

41.

G. Hall, Culture in Bits, 128.

42.

G. Hall, Culture in Bits, 115. For the purposes of the present Introduction, I take Hall’s term ‘cultural studies’ to be roughly equivalent to what I have previously called ‘media and cultural studies’, since this passage refers to a constitutive debate around the field’s conceptual framework.

43.

Cf. Gary Hall and Clare Birchall, eds., New Cultural Studies: Adventures in Theory (Edinburgh: Edinburgh University Press, 2006).

44.

Jacques Derrida, Of Grammatology (Baltimore: The Johns Hopkins University Press, 1976); André Leroi-Gourhan, Gesture and Speech (Cambridge, MA: MIT Press, 1993).

45.

Cf. Bernard Stiegler, Technics and Time, 1–3 (Stanford, CA: Stanford University Press, 1998–2011).

46.

Frederick P. Brooks, The Mythical Man-Month: Essays on Software Engineering (Reading, MA: Addison-Wesley, 1995); Frederick P. Brooks, “No Silver Bullet: Essence and Accidents of Software Engineering,” IEEE Computer 20, no. 4 (1987): 10–19.

47.

Eric S. Raymond, The Cathedral and the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary (Cambridge, MA: O’Reilly, 2001).

48.

John E. Hopcroft and Jeffrey D. Ullman, Formal Languages and Their Relation to Automata (Reading, MA: Addison-Wesley, 1969); Alfred V. Aho and Jeffrey D. Ullman, Principles of Compiler Design (Reading, MA: Addison-Wesley, 1977).