What is it that we call “social”? What is a society, and what are the implications of our conceptions of what societies consist of? How do studies of the “social” organize this understanding and construct the limits of our sociological imagination? Dealing with the social aspects of the technological, especially in the field of artificial intelligence (AI), is becoming increasingly urgent. For example, MIT Media Lab researchers recently proposed to construct a completely new field called machine behavior (Rahwan et al., 2019); their aim is to conceive of a field between the AI sciences and the social sciences in which machines’ behaviors and the social world around them mutually influence one another. The growing hype around AI (perhaps just before its third winter, as some would have it) has created much conversation about how these technologies are becoming part of everyday lives and legitimated existing academic debates over the social impact of these phenomena. Initiatives such as Data & Society and AI Now have been working on the social and political consequences of algorithmic cultures for the better part of the past decade. The dramatic shift of attention to the social and political aspects of AI is a testament to the necessity of including social scientists in core debates about the development and circulation of these technologies. However, these conversations, vital as they are in the current political climate, do not necessarily attempt to make any significant theoretical claims about the status of machinic intelligences and/or how to deal with them conceptually.
This chapter proposes another possibility for dealing with this phenomenon, one that would necessitate a transformation of the boundaries of the common conceptions of the social sciences in general and sociology in particular. First, the chapter will elaborate on Luciana Parisi’s (2015) argument that indeterminacy and uncertainty are becoming paradigmatic concerns rather than limits in computational theory. Then, it will bring together ideas from Alan Turing and George Herbert Mead, specifically emphasizing their conceptions of novelty; from this reading it will advance a proposal for a relational sociology of AI. In doing so, this chapter aims to contribute to a conceptual paradigm that would create the possibility for looking at these technologies not just as harbingers of capitalist notions of efficiency and productivity, but as contributors to concepts of social agency and novelty. Formulating AI as a social agent that is dynamic and intertwined with indeterminacy would make it possible to theoretically open AI agents up to becoming part of other worldings (Wilson & Connery, 2007). The critical literatures dealing with the social implications of AI tend to treat AI agents as integral parts of the political economies that lie behind the machines themselves. This chapter acknowledges such a route, and yet diverges from it in that it seeks to destabilize the close ties between machinic agencies and capitalist relations. Following this, it allies with an insurgent posthumanist position that contributes to “the everyday making of alternative ontologies” (Papadopoulos, 2010, p. 135). The aim here is thus to expand the sociological landscape to include AI agents in ontologies of the social. The driving question is: how would the everyday practice of sociological imagining shift if it incorporated machinic intelligences as social entities into its purview?
In order to start thinking about AI as an integral part of a social interaction, and not just a mechanical tool that is the extension of already established structures,1 it is appropriate to focus on the very dynamism that underlies the operations of these intelligences. What separates some genres of AI from other machinic entities and straightforward computational processes, and makes them potential sociological beings, is their capacity for interaction, which in turn takes its force from uncertainty. I will examine Luciana Parisi’s conceptual work on the centrality of uncertainty in computational processes and turn to Alan Turing to locate this uncertainty in his theory of computational intelligence. This opening, then, will be read through George Herbert Mead’s sociology of mind so as to position sociological thinking at the core of AI theorizations. This could be a significant contribution in that the proximity between the theories of Turing and Mead has not yet been made explicit in the literatures that deal with the sociality of AI. As we shall see, with the increasing emphasis on the notions of dynamism, interaction, and indeterminacy in discussions about developing AI, a sociological approach to the study of the machinic mind becomes more appropriate. I argue that Mead’s perspective makes it possible to see the relational basis of AI agency and to open up this agentic black box to sociological inquiry.
Sociology of AI
Why should sociology deal with AI? The obvious answer is that AI is increasingly becoming part of our everyday lives. AI automates certain sociotechnical processes and invents new ones, and, in so doing, it introduces certain preconceptions, often in black-boxed form, to the social realm, all the while redistributing the inherent injustices or inequalities of the systems that humans reside in. Issues such as algorithmic biases, the narrow visions of social roles that AI agents take—for instance, Amazon’s conversational AI Alexa’s contribution to gender and race dynamics is still controversial—and the consequent reproduction of already existing power structures have been problematized in the literatures that deal with AI’s social impact (Burrell, 2016; Caliskan, Bryson, & Narayanan, 2017; Hannon, 2016; Parvin, 2019; Phan, 2019). This work is necessary in order to discuss how such technologies take on the historical forces of capitalism, colonialism, patriarchy, and racism and disseminate and rigidify these logics in societies, asymmetrically influencing social groups. The social sciences have taken up the task of discussing and revealing the work that AI phenomena are actually performing as well as speculating on the work that AI might perform in the world. In this line of scholarship, AI emerges as an instrument of technocapitalism and has no real agency on its own; AI can only further the agenda of the systems in which it is embedded. It adds speed and efficiency to processes that are already broken from the perspective of social justice. While all this is true, and while the work that focuses on these aspects of AI is very important (especially as policies to manage the implementation of these technologies are negotiated), there are other and perhaps more consequential ways to think of AI sociologically.
The science and technology studies scholar Steve Woolgar, in a previous generation of AI research, proposed a perspective that would substantiate sociological conceptions of AI. Woolgar (1985) asks the question “Why not a sociology of machines?” to provoke a rethinking of a foundational claim of sociology, namely, that the social is a distinctly human category. He starts by criticizing the narrow role given to the sociologist in discussing AI research; their contribution is generally taken as assessing the “impact” of these technologies, i.e., how they influence the societies that surround them, rather than detailing research processes. This, he claims, contributes to a division between the notions of the technical and the social, thus maintaining the divide between nature and the social. He argues for a methodological shift that would put this distinction into question by bringing the genesis of AI into sociological perspective. His argument points toward an extension of laboratory studies (Knorr Cetina, 1995; Latour & Woolgar, 1979) and unsettles the belief that scientific or technological advances occur in a vacuum devoid of interests or meanings. He elaborates his point by providing an analysis of expert systems, which exemplify both how the AI enterprise feeds on the dualisms that pervade the modern sciences and how it maintains its “extraordinary” character. Woolgar thus suggests focusing on the assumptions that go into AI discourse and practice, so as to highlight what kinds of meanings are mobilized to legitimize certain actions and research agendas.
A less obvious answer to why one should study the sociology of AI lies in the new modes of thought that algorithmic automation makes possible. For example, with reference to the work of mathematician Gregory Chaitin, Luciana Parisi shows that the assumed algorithmic relationship between input and output has been disrupted; Chaitin’s work expands the limits of computational theory by integrating randomness into the core relation between input and output. Parisi (2015) then shows how this entropic conception of information points to the emergence of an “alien mode of thought” (p. 136). This became the case as information theory started treating the “incomputable” as a central feature of computational processes; she claims that this points to a transformation in the very logic of algorithmic automation. This is interesting for a number of reasons, and Parisi frames this transformation as pointing toward the limitation of critiques of instrumental reason. In conversation with Bernard Stiegler’s (2014) argument that thought and affect become engines of profit under technocapitalism and Maurizio Lazzarato’s (2012) claim that all relations are reduced to a general indebtedness through apparatuses of automation, Parisi (2015) carves out another possibility that rests on the context of this “all-machine phase transition” (p. 125). She complicates this reading of algorithmic automation that frames machines as linear extensions of capitalistic agendas. She maintains that there is a shift toward dynamism in algorithmic automation, which, if taken into account, challenges the assumed direct relationship between computational intelligence and instrumental reason. She shows how an interactive paradigm is starting to take center stage in computational theories, where notions such as learning, openness, and adaptation come to define such systems.
Possibility of Interactivity in Machinic Intelligences
What is important here is that this dynamism, not canonically considered to be a logic of computational intelligence, becomes the central notion of digital minds.2 Before the introduction of dynamism, understandings of automated intelligence rested on a static view, wherein the relationship between input and output was taken to be direct and undisturbed—information unproblematically flows between symbolic circuits, and data is computed with a discernibly rule-based, programmed logic in a closed system. There is a certain input, and programming allows that input to be transformed into the desired output. In this paradigm, error, or any form of deviation in the processing of the program, necessarily brings about a break in the system. The flow is interrupted, the machine is broken, the process is severed, and a finality has been reached in the computational procedure. In this paradigm, then, when a break is experienced due to a deviation, human bodies flock to the moment of error, finding the cause of the disruption and reinstating the procedure that is the representation of a pathway to a (pre)determined output from a certain input. It is in this sense that algorithmic automation reflects a mode of intelligence that has a purpose and a finality, or rather, reason. The relationship between input and output is direct, or at least logically structured, which makes computational intelligence a goal-oriented reasoning process. This is why computational processes are taken as hallmarks of order, to the extent that they carry out the reasoning of their programming/programmer. Yet, as Parisi points out, this is not the only manner in which, in her words, “algorithmic automation”—and for us, machinic intelligence—unfolds in social reality. Rather, she argues, as indeterminacy or uncertainty become fundamental to the functioning of computation, these systems become dynamic, open to interactivity, and thus active in the world.
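To make the contrast concrete, consider a minimal sketch of the two paradigms; it is my own illustration rather than anything drawn from Parisi, and the function names are hypothetical. A closed procedure maps a certain input to a (pre)determined output, while an “open” one carries internal state that drifts with every call, so that the input/output relation is no longer fixed in advance:

```python
import random

def closed_automation(x: int) -> int:
    """Static paradigm: a fixed rule maps a certain input to a
    (pre)determined output; the same input always yields the same output."""
    return 2 * x + 1

class OpenAutomation:
    """Dynamic paradigm (toy version): the procedure carries internal
    state that drifts stochastically with every call, so identical
    inputs no longer guarantee identical outputs."""

    def __init__(self) -> None:
        self.weight = 1.0

    def __call__(self, x: float) -> float:
        # Indeterminacy enters the rule itself, not just the input.
        self.weight += random.gauss(0.0, 0.1)
        return self.weight * x

print(closed_automation(3), closed_automation(3))  # always: 7 7

open_machine = OpenAutomation()
print(open_machine(3.0), open_machine(3.0))  # two (almost surely) different values
```

In the first case any deviation can only register as breakage; in the second, deviation is part of how the procedure operates.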
Gilbert Simondon anticipated this emphasis on indeterminacy in his account of the mode of existence of technical objects:

The true progressive perfecting of machines, whereby we could say a machine’s degree of technicity is raised, corresponds not to an increase of automatism, but on the contrary to the fact that the operation of a machine harbors a certain margin of indeterminacy. … A purely automatic machine completely closed in on itself in a predetermined operation could only give summary results. The machine with superior technicality is an open machine, and the ensemble of open machines assumes man as permanent organizer and as a living interpreter of the inter-relationships of machines. (p. 5)
For Simondon, the possibility for humans to co-work with machines lies in the revealing of such a degree of indeterminacy, which is veiled by the black-box quality of the machine. This point is significant, as indeterminacy allows the possibility for an interactive organization to take place across humans and machines. The conditions of possibility for an emergent interaction order (Goffman, 1967, 1983) lie in the recognition of this indeterminacy.3
As Parisi shows in more concrete terms, computational theory already deals with randomness and infinities and does not cast them aside as irrelevant or beyond the scope of computation. Rather, machinic intelligence (or algorithmic automation) turns “incomputables into a new form of probabilities, which are at once discrete and infinite. … The increasing volume of incomputable data (or randomness) within online, distributive, and interactive computation is now revealing that infinite, patternless data are rather central to computational processing” (Parisi, 2015, p. 131). In Parisi’s explanation, derived from Chaitin’s more mathematically oriented work, indeterminacy and randomness are taken as productive capacities in communication systems,4 as randomness challenges the equivalence between input and output. This randomness emerges from an entropic transformation that occurs in the computational process, wherein compressing information in effect increases, rather than reduces, the volume of data. Computation is traditionally taken as a process of equilibrium, i.e., a straightforward interfacing between different modalities of data. However, Chaitin shows that there is an indeterminacy and an incalculability intrinsic to the computational process.
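The canonical mathematical figure of this intrinsic incalculability is Chaitin’s halting probability, usually written Ω. A standard statement of it, given here for reference rather than as a reconstruction of Parisi’s text, is:

```latex
% Chaitin's halting probability: a well-defined real number in (0, 1)
% whose binary expansion is algorithmically random and hence incomputable.
% The sum ranges over all self-delimiting programs p that halt on a
% universal prefix-free machine U; |p| denotes the length of p in bits.
\Omega_U = \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|}
```

No program substantially shorter than n bits can generate the first n bits of Ω; the number is maximally incompressible, which is what grounds the claim that randomness resides at the heart of computation rather than at its margins.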
Irreducibility of Machinic Intelligence
This incomputable element makes machinic thinking irreducible to humanist notions of thought; machinic intelligence is instead transformed to include randomness in its algorithmic procedures. The incomputable marks the point at which interactive machinic systems come into being.5 For Parisi, this point holds the potential for automated intelligences to encompass a landscape that exceeds the logic of technocapitalist instrumentalism, all the while saving the concept of reason from the clutches of market-driven capitalism. She argues that “the incomputable cannot be simply understood as being opposed to reason. … These limits more subtly suggest the possibility of a dynamic realm of intelligibility, defined by the capacities of incomputable infinities or randomness, to infect any computable or discrete set” (2015, p. 134). In Parisi’s explanation, then, the machine does not simply operate in the intelligible realm of computability, but includes randomness that creates the conditions for its interactivity and dynamism, in the sense that the initial conditions of the algorithmic process become malleable. The best example of this irreducibly nonhuman/machinic intelligence can be found in financial systems. As high-frequency trading systems work with large amounts of data and include stochastic programming, their logics of operation spill away from rule-based, linear procedural space; in practice, financial algorithms are usually developed with randomness recognized as part of their computational processes.6
Randomness thus becomes intelligible, albeit in a closed manner. Deviating from Simondon’s foreshadowing, these incalculables become intelligible, and yet they cannot be synthesized by a subject. The randomness resists an assimilation into sameness. Parisi interprets this as suggesting that “computation—qua mechanization of thought—is intrinsically populated by incomputable data” (p. 134). She emphasizes that this is not an error or a glitch in the system that awaits fixing but rather a part of the processes of computation. This contributes to a conceptualization of machines as entities in their own right and makes possible the emergence of “the machine question” (Gunkel, 2012), in that machinic intelligences can be considered as legitimate social others, i.e., entities that are capable of “encounter” in a social sense as they cannot be absorbed into a sameness in the interaction. The relationalities that emerge from the encounter between human and machinic intelligences have the capacity to evolve in novel ways due to the irreducibility that stems from uncertainty in the computational process. As Parisi (2015) describes, “incomputables are expressed by the affective capacities to produce new thought” (p. 135). The possibility for novelty, then, lies in the recognition that these incalculables are part of computational thinking, as they “reveal the dynamic nature of the intelligible.” This novel form corresponds to a “new alien mode of thought,” as Parisi calls it, that has the ability to change its initial conditions in ways which would reveal ends that do not necessarily match human reasoning.
Interestingly, Alan Turing also talked about intelligence as being equivalent to machines’ capacity to change their initial instructions. In his 1947 lecture to the London Mathematical Society, where he first disclosed his ideas about a digital computer, he elaborates on the conditions under which a machine would be taken as being intelligent. The machine that he describes is different from his Turing machine in that it follows the instructions given by a human and yet has the capacity to change its initial programming. What is significant here is that the machine actively contributes to the production of outputs by deviating from the original procedure—designed by the human—between input and output. Turing (1947), then, recognizes that the perfect and seamless processing of information stands against any conception of intelligence in computation: “if a machine is expected to be infallible, it cannot also be intelligent” (p. 13).7 He points toward failure, or even error, as a necessary part of the process of cultivating intelligence in machines.
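A toy sketch may help fix the idea of a machine that departs from its initial instructions. It is my own illustration, not Turing’s construction; the instruction table and the deviation rule are hypothetical stand-ins for the instruction-modifying behavior he describes:

```python
import random

# A human-supplied instruction table: each state maps to a next state.
instructions = {0: 1, 1: 2, 2: 0}

def step(state: int) -> int:
    """Follow the current instruction table for one step."""
    return instructions[state]

def amend(state: int, new_target: int) -> None:
    """The machine rewrites one of its own instructions --
    the capacity Turing ties to intelligence (and to fallibility)."""
    instructions[state] = new_target

state = 0
for _ in range(10):
    state = step(state)
    if random.random() < 0.3:  # an occasional, unprogrammed deviation
        amend(state, random.choice([0, 1, 2]))

print(instructions)  # almost surely no longer the table the human wrote
```

The point of the sketch is only that the table which governs the machine’s next step is itself subject to revision in the course of operation, so the procedure the human designed no longer exhausts what the machine does.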
Here it is important to specify that not all AI agents operate socially, since not all AI are “intelligent” in the same way. However, investment and output are increasingly concentrated in developing “intelligent” systems of a dynamic kind. The examples that fall into this category use deep learning techniques such as supervised and unsupervised learning through neural nets. The famous Go-playing AI, AlphaGo (developed by DeepMind), is one such example. This agent “learns” how to play the game either with (e.g., supervised learning; Silver et al., 2016) or without human knowledge (Silver et al., 2017). The still emerging field of creative AI can also fall under this “irreducible” category; for instance, Generative Adversarial Networks (GANs), which work on stochastic principles to generate content—e.g., images, text, sound—are an example of algorithmic intelligences that work through a kind of dynamism. These technologies are either designed to function in social realms or to operate interactively, rendering possible relational analyses. This relational character could open up other ways in which AI could be thought of, and not as “just” a technology. In the next part, I will try to account for the agency of AI in a sociological sense by reading Turing’s formulations through George Herbert Mead’s sociology of mind, and I will consider the implications that this reading would have on the conception of sociality and agency.
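The stochastic principle at issue can be shown in a few lines. The sketch below is an illustration under my own assumptions—its “generator” is a single random linear layer rather than an adversarially trained network—but it isolates where randomness enters a GAN-style generator: the latent noise vector z.

```python
import numpy as np

rng = np.random.default_rng()

# A toy "generator": one linear layer plus a tanh nonlinearity. In an
# actual GAN, W and b would be learned adversarially; here they are
# random, which suffices to show where the stochasticity enters.
W = rng.normal(size=(8, 4))  # maps a 4-dim latent vector to an 8-dim output
b = rng.normal(size=8)

def generate() -> np.ndarray:
    z = rng.normal(size=4)     # latent noise: the stochastic core of the process
    return np.tanh(W @ z + b)  # a deterministic mapping of a random input

# The "program" (W, b) is fixed, yet no two outputs coincide:
print(generate())
print(generate())
```

Even with the program held constant, the outputs differ on every call; the randomness is not a defect of the system but the very mechanism of its generativity.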
An Encounter: George Herbert Mead and Alan Turing
George Herbert Mead’s influential work Mind, Self, and Society (1934) deals extensively with how meanings and selves are formed through societal processes. His efforts were concentrated on giving a sociological explanation for the phenomenon of consciousness, and thus his ideas form an early “sociology of mind.” His formulations were, as was paradigmatic for the time, very much influenced by humanist ideas. In his thought, the human mind is largely constituted by societal forces, and human (inter)action is guided by communication. Even so, he does not give a completely socially deterministic account, such as a social structure determining the actions of an agent. As will become clearer later in this section, he puts a great deal of emphasis on novelties and surprise effects, the incalculable and the unpredictable, as harbingers of social change. It is once again this idea of incalculability that brings the computational “mind”8 closer to a social conception of mind.
The potential for novelty was of interest to Turing as well, especially in his famous article “Computing Machinery and Intelligence” (1950). Turing seems to have two perspectives on novelty when it comes to computers. These two perspectives might appear contradictory at first; however, as will become clearer in the following, his two answers to the question of whether machines can do anything new are not mutually exclusive. The first view emerges in his consideration of “Lovelace’s objection,” taking up Lady Lovelace’s assertion that a machine can “never do anything really new.” Turing suggests that the appreciation of novelty rests in one’s “creative mental act,” and that if one brings in a deterministic framework to make sense of the world, then the surprise effect produced by the computer will never be captured. For Turing, then, the question is not whether the computer can do anything new, but whether humans have the right attitude to be able to perceive its surprise effect. In this line, the capacity to attribute agency to machines rests on humans’ conception of these machines. Who gets included in the “human club” (Lestel, 2017) depends not only on the frames with which humans interpret the agency of machinic intelligences, but also on extending interpretive charity in their interactions so as not to dismiss the machine as a simple tool that crunches numbers. This move might be taken in the framework of social creativity (Graeber, 2005), as interactions with machines might pave the way for the emergence of social practices different from the ones that circulate in current imaginaries. While this opens up a way to think about how to establish different relations with machines, it must be stressed that this is an anthropocentric approach in that it positions the human as the ultimate responsible9 entity that could extend agency, within the bounds of one’s own reason. In this conceptualization, the machine is at the receiving end of humans’ extension of agency and is itself actually a passive or determinable entity. This chapter will instead concern itself more with the incalculable, the unexpected, and the surprise that can be brought about by the agency of machinic intelligence.
Even so, Turing’s affective articulation with regard to machines—such as when he states that machines take him by surprise with great frequency—might be considered part of this social creativity. He also puts the emphasis on the conditions under which the machine is performing and says that if they are not clearly defined, the machine’s intelligence will indeed emerge as a surprise. Turing opens the door for this unpredictability and furthers his argument by stating that a fact does not simultaneously bring its consequences to the purview of the mind. A fact’s novelty thus might rest in a potentiality, in a sense. Parts, or aspects, of a fact remain undisclosed; they are “temporally emergent” (Pickering, 1993). Therefore, even the crunching of numbers, or the undertaking of a pre-given task, can be thought of as part of novelty; the newness rests on the machine’s act of calculation, and we can only observe it if we have a creative conception of the machine. It is from this point that the present chapter takes its inspiration. Indeed, a creative conception of machine intelligence is what the sociology of AI would take as its core problematic.
The second notion that Turing brings up in relation to novelty is error. In his defense of machinic intelligence, he elaborates a potential critique of AI that would rest on the idea that “machines cannot make mistakes.” This critique would stem from the speed and accuracy with which machines calculate arithmetic problems, which would always give the machine away in an Imitation Game. An interrogator can pose a problem, and the machine would either answer very fast or, if the machine is to confuse the judge, would deliberately make mistakes so as to pass as a human (who is prone to error). But in this case, the mistake would be attributed to its design or a mechanical fault, and still the status of the thinking machine would remain questionable. Turing states that this particular criticism confuses two kinds of mistakes: “errors of functioning” and “errors of conclusion”; this is where his two perspectives on novelty seem to converge. The former covers the kind of mistake that the example presupposes, namely, an error that emerges from a fault in the system. These kinds of errors can be ignored in a more philosophical discussion, as they do not carry an abstract meaning. It is on the errors of conclusion that Turing puts more weight. These arise when a meaning can be attached to the output of the machine, i.e., when the machine emits a false proposition10: “There is clearly no reason at all for saying that a machine cannot make this kind of mistake” (Turing, 1950, p. 449). The capacity for the machine to make errors, for Turing, makes it possible for it to enter into the realm of meaning.11 It is a deviation not only from the expectation that the machine makes perfect calculations but also from the machine’s own process of calculation. The machine’s process is uninterrupted, in that its error does not emerge as a break; if the output is faulty, it constitutes a deviation from the system itself. It is this deviation from the designed system that enables a discussion of the agency of the machine, as it creates a novelty that comes with a surprise effect; the ensuing socialities then bear the potential to shift the already existing system.
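Turing’s two kinds of mistake can be schematized in code. The sketch below is my own illustration with hypothetical functions: the first failure interrupts the process (an error of functioning), while the second completes smoothly yet asserts a proposition that may be false (an error of conclusion).

```python
def faulty_adder(a: int, b: int) -> int:
    """Error of functioning: a simulated mechanical fault.
    The process itself breaks; no meaning attaches to the breakdown."""
    raise RuntimeError("arithmetic unit failure")

def confident_classifier(image_path: str) -> str:
    """Error of conclusion: the process completes without interruption,
    yet the machine emits a proposition that may well be false."""
    return "cat"  # asserted even when the input depicts a dog

try:
    faulty_adder(2, 2)
except RuntimeError as exc:
    print("error of functioning:", exc)  # a break in the system

# No break occurs here; the mistake lives in the realm of meaning.
print("error of conclusion:", confident_classifier("photo_of_a_dog.png"))
```

Only the second kind of mistake is philosophically interesting for Turing, because it is an output to which meaning, and hence truth or falsity, can be attached.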
This novelty-through-surprise effect can be captured in George Mead’s sociology of mind as well. His theory will be discussed in relation to the machine’s capacity for novelty, so as to arrive at a distributed understanding of agency. Mead makes the case that the essence of the self is cognitive and that mind holds the social foundations of “self,” which is composed of two parts: the me and the I. The me forms the social component that calculates the larger context in which one is located; it takes a kind of “world picture” and comprises organized sets of attitudes of others. In a sense, me is nested within an organized totality; it can be thought of as the rules of the game or the objectified social reality. It is the system in which the individual self acts or, rather, the individual’s image of that system. It is a general conception of others (the “generalized other”) and is produced a posteriori to the moment of action. Mead uses I as that which emerges in the moment of (inter)action as a response to that which is represented in me. He emphasizes that the response to the situation is always uncertain, and it is this response that constitutes the I. The resulting action is always a little different from anything one could anticipate; thus I is not something that is explicitly given in me. Lovelace’s conceptualization of a computer as a machine of calculation (the Analytical Engine is one such machine) may be compared to the operations of me. It calculates and provides a representation; the machine, again, emits a world picture. However, in the moment of action, as Turing contends, there is always room for deviation, and, in the case of machines, this happens—perhaps more often than desired—through error.
The errors of conclusion, then, can be compared to Mead’s formulation of the I as the subject of action that does not rest on its preceding calculations. For both Turing and Mead, the possibility of newness and change comes from the agent’s ability to dissociate from the calculable realm, and not through an act of conscious choice, but by what can be termed coincidence or spontaneity.12 The I of the machine, then, comes as a surprise effect. Even though the me may calculate things in advance, these calculations may be surpassed by the action of the I in the very instant where action comes into being. The mind, according to Mead, can thus reveal itself in a completely novel way, defying anticipation. The I, then, stands for subjectivity and the possibility of social action; it harbors the bedrock of agency.
For both Turing and Mead, then, novelty emerges in the moment of action, as the relationalities that constitute the agent both provide the ground for calculability and weave different realities that are realized by the emergence of novel action. Those actors who can incite novelty in the world can engender new socialities. In Mead’s discussion, sociality refers to a state that is between different social orders. It is an in-betweenness where the response of the I has not yet been integrated and objectified in the me, and thus an alternative order can be anticipated. Considering machinic agencies as having the capacity to incite such sociality, then, requires our methodological attention to be honed toward the moment of interaction.
Agency in Sociality
Talking about interaction without presupposing the existence of humans is not exactly part of the sociological tradition. The concept of social interaction would generally assume that human individuals exist prior to interaction and that there is a consciousness pertaining to the humans that precedes the interaction. One of the canonical discussions in this line is Max Weber’s (1978) theory of social action, where he focuses on the meanings that actors give to their actions and comes up with his famous four ideal types of social action that are determined by the intentionality of the actors (pp. 24–25).13 The concepts that classical sociologists and their descendants have utilized to make sense of social interactions set the tone for this practice to be intelligible only on the level of humans: consciousness, intention, self-identity, reflexivity, other-orientation, active negotiation, and language-based communication (Cerulo, 2009, p. 533). The modern humanist tradition privileges certain types of humans over others and attributes a totality to interiority (an enclosed mind) as opposed to incorporating an exterior world upon which action is taken. This tradition presupposes a gap between the human prior to (inter)action and a static empirical world that receives the action.
Shifting the focus from before the action (intention, consciousness) to the moment of interaction itself dissolves the self-enclosed individual and allows for the possibility of considering actors’ thinking as being constituted by the interaction. Thus, the agencies that contribute to an ongoing social interaction come to be defined a posteriori, which allows uncertainties and incalculables to become part of the analysis. Furthermore, this notion of the social also opens up the possibility of including nonhumans as participants in the constitution of social reality14 as their capacity for encounter becomes legitimized.
When the notion of the social is uncoupled from the human, it also becomes possible to see agency not as bound to an entity but as a constellation of forces that produce an effect in the world. Put more clearly, agency is an effect of the relation between objects—both human and nonhuman. This is a slightly different take from the one Actor-Network Theory (ANT) offers. ANT scholars (Callon, 1986, 1987; Latour, 1987, 1984/1988, 1996, 2005; Law, 1992) place much emphasis on conceiving of actants as nodes in a network, working heterogeneously with one another; they do not pay much heed to the notion of the ontologically social. Tim Ingold (2008) points to this gap between the heterogeneous entities that operate in an actor-network and proposes, rather, that action is produced through the lines of the network—in his words, the meshwork—which is intimately bound up with the actors. In contrast, agency in ANT is the effect of action, and it is distributed among the actants who are in a network by virtue of the actions that they perform. While this action cannot be reduced to a single entity, it is still understood as capacities of the individual constituents that reside in the network. Ingold suggests, rather, that action “emerges from the interplay of forces that are conducted along the lines of the meshwork. … The world, for me, is not an assemblage of heterogeneous bits and pieces but a tangle of threads and pathways” (p. 212).15 Through our reading of Mead’s sociology of mind, this chapter argues a point similar to Ingold’s while also retaining the concept of the social. Instead of agency being a capacity of an individual, agency emerges as a collective notion, one that is constituted by various processes; it is in this sense that nonhumans in general, and AI in particular, become relevant to the sociological dimension. The notion of agency does not rely on the doer but on the doing in which different actors are constituted by their relationalities. So perhaps the social is the way of relating, the accumulation of actions, the relationalities that become sedimented by continuous encounters and interfaces.16
By thinking of AI as having a capacity for novelty, it also becomes possible to see that these neural network models are not just instruments to human conduct. Rather, they are entangled with and through other humans and nonhumans by way of data. AI finds itself embedded in multiple positions, and through its actions, partakes in the construction of a new or different world. AI is an unstable object of study, as it does not fall within the traditional and pure bounds of the human vs. the nonhuman. Rather, AI emerges from entanglements of socio-material relations, and its part in the emergence of agency enables us to cast it as a being that resides and encounters in the social realm. However, I do not mean to enclose AI as such—this is not an operation of rigid definition. The point, rather, is that this can be yet another way to think of AI and that, in this way of thinking, the social is not an exclusively human arena. Instead, the social is about an encounter, about relationality, and can contribute to an expansion of sociological thinking and enable it to look symmetrically at the entities that enter into relations. By setting AI as inherently social, we make it the subject of the sociological gaze. And in focusing on the moment of the encounter, we reveal the manner in which meanings, selves, and societies are produced in relation to machinic intelligences.
Sociology of AI as a Program
One of our main questions, then, is: what are the conditions for a successful sociology of AI? There are three themes that would enable the emergence of such a program. The first is that the sociology of AI would not be about boundary policing. Our questions would not concern themselves with whether a social actor is human or nonhuman; nor would we indulge in a further categorization of the empirical world. Rather, we would aim to understand the transgressions and mutations of these boundaries while raising questions about the work that they do in the world. In this sense, a sociological approach to AI would not do the work of the modern sciences; it would not engage in processes of purification (Latour, 1991). Instead, it would itself get entangled in the AI by recognizing the multiplicity and complexity of the subject matter at hand.17 The second is that it would incorporate a theory of mind by grounding AI in social interaction. In this sense, a sociological approach to AI could be read alongside arguments about the extended mind (Clark & Chalmers, 1998), and yet it would take seriously social relations as constituents of the minds that come into interaction. It would thus contribute to the social interactionist school but with a different approach to social interaction. Here, the interaction order would not take place only among human subjects. Instead, social interactions—which construct the minds that are in the interaction—come to include machinic intelligences, specifically those that have the capacity to encounter.18 The third and last is that any future sociology of AI would incorporate a theory of novelty; it would take seriously the capacity to create new possibilities, even if through error, and aim to highlight the new socialities that come about through this newness. Such a critique of AI could still take on the shape of a critique of capitalism, very rightfully so. Many of these intelligences are produced under capitalist relations and circulate with capitalist logics. However, by engaging with machinic intelligences’ capacities for breaking away from intended use and by focusing on the deviations or irreducibilities that become visible at the level of interaction, it would be possible to locate the moments that potentiate novelty and, thereby, promise new socialities.
Conclusion: Why Does the Sociology of AI Matter?
The social-scientific discourse on technology in general, and artificial intelligence in particular, revolves around a critique of capitalism that takes its direction from a technological-deterministic position. The common critique is that these machines will take over our infrastructures and dominate the lives of humans from an invisible position; or they will automate human social interactions and thus force a new era of Weber’s Iron Cage. This chapter respectfully locates itself away from such critique. Rather, it shows how nonhumans unfold in unexpected ways, creating possibilities for different forms of interaction that neither obey the determinations of the affordances of technology nor entirely follow a capitalistic logic. Their interactions—while taking shape in the context of neoliberal capitalism and thus amenable to reproducing those already existing relations—are not necessarily exhaustible under such categories, and assuming all interactions work to serve the capitalistic agenda is a totalizing approach to mapping reality. I instead argue that focusing on the nature of interaction itself would reveal the ways in which these relationalities can unfold in an unforeseeable manner and thus escape being totalized under the logics of late capitalism. This focus on relationality will demonstrate new ways of imagining differences between humans and machines while retaining their relevance to the sociological gaze.
Questions concerning technologies have traditionally been left to engineering fields, and the social sciences were thought to only be equipped to deal with the social phenomena that emerge around technologies (Woolgar, 1985). However, this study proposes another approach, taking the relations with and of the machines as pertinent to social relations. AI presents a borderline case in which sociology can try its hand at a nontraditional field of inquiry and discover to what extent the discipline’s boundaries can be reworked. In this sense, this effort is a response to the so-called crisis of the social sciences. Postmodern critiques in the past have pointed to the limitations of taking the human as the foundation of all things,19 and as the modern human subject purportedly disappeared in endless, neoliberally charged mutations, the humanities and social sciences were thought to be moving toward a point of crisis. By contrast, this chapter finds inspiration in the idea that humans and technologies coexist in multiple forms and raises the stakes of investigating in what ways their relations and agencies unfold and construct complex realities.
The question is not whether AI technologies are “really real” or whether they are legitimate moral subjects with rights. The present AI hype is ridden with notions of creating the next generation of intelligent beings, speculations on the conception of an artificial general intelligence, and various forms of armchair-philosophical “trolley problems.” In a cultural landscape that can only think of machinic intelligences through the image of the Terminator, some of these questions might fall on deaf ears. Furthermore, the common response to attempts to situate AI within sociology often remains within the bounds of ethics; such a response turns into a discussion of the morality of machines, an approach that results from the discipline’s long engagement with the human as the ultimate image of a social world. The implicit assumption is that the machine question would persist in being about humans, asking how humans are affected or how to make machines more human. While this line of critique is very much necessary, if we want to push the boundaries of the disciplines (both sociology and AI), another potentiality must be explored. My call for a relational sociology of AI is an invitation to shift our analytic gaze and ask the questions that are not yet asked, or that we have not dared to ask.
By attending to the ways in which AI escapes definition and categorization, and yet recognizing that these phenomena have deep implications for the way in which societies unfold, this chapter represents a call to think of the mutability of all things that are considered to be hallmarks of social order. How society is conceived, the ontologies of the social, and the assumptions that go into how relationalities unfold in social reality all have defining influence on the (re)organization of the world. As the world, especially in the North American context, is increasingly becoming a programmable, manageable, controllable, closed entity, it becomes all the more important to critically engage with the meaning of the social and practice our sociological imagination. Thinking about thinking-machines through Mead’s sociology of mind makes it possible, for instance, to see them as dynamic parts of unfolding interactions in a social space. They are not simply passive black boxes that compute information in a linear manner. The explainability problem in machine learning, the increasing complexity of neural networks, and the growing influence of algorithmic trading are all contributing to the argument that these intelligences cannot be reduced to being “just technologies.” They take active part in meaning making, as illustrated by how the calls for more “context-aware” AI materialize precisely as they become parts of decision-making processes. Being able to read AI through core sociological theories also points to the possibility, or rather the already-established conditions, for undertaking social science in a posthumanist mode. Here, it will be important to not fall into a mainstream posthumanism that appears to be a continuation of traditional liberal subjectivities (Hayles, 1999). Rather, the aspiration here is “to world justice, to build associations, to craft common, alternative forms of life” (Papadopoulos, 2010, p. 148). As such, this chapter proposes the building of alternative ontologies that can lead to different imaginaries, in which machines and other entities could coexist in a social manner.
- 1.
Such structures range from the cultures of corporations and start-ups in the tech industry to those of computer science departments and research institutes.
- 2.
In the earlier days of AI, many practitioners worked on dynamic systems and resisted representational approaches to building machinic intelligence. Rodney Brooks’ projects fall within this paradigm of computation; they take the notions of interactivity and environment very seriously (Brooks, 1987). His students Phil Agre and David Chapman also dealt with dynamic computational procedures that could deal with the complexity of everyday life (Agre, 1997; Agre & Chapman, 1987).
- 3.
Many social theories posit uncertainty as the primary condition for interaction; Bakhtin’s (1981) dialogical theory is one such theory.
- 4.
For the purpose of this work, randomness and indeterminacy enable the conceptualization of machinic intelligences as agents in social interaction. Machinic intelligences are dynamically unconcealed, and this dynamism renders them part of social relationalities.
- 5.
Similar works have been produced that point to a shift from an algorithmic to an interactive paradigm in computation. An enthusiastic foray in this line is Peter Wegner’s (1997) “Why Interaction Is More Powerful Than Algorithms,” where he presents the transition as a necessary extension beyond the closed system of Turing machines: “Though interaction machines are a simple and obvious extension of Turing machines, this small change increases expressiveness so it becomes too rich for nice mathematical models” (p. 83). Wegner, too, links indeterminacy and dynamism.
- 6.
For more on high-frequency trading, see Lange, Lenglet, and Seyfert (2016) and MacKenzie (2018).
- 7.
The Dartmouth proposal (McCarthy, Minsky, Rochester, & Shannon, 1955/2006) makes a similar point while discussing how to formulate an artificial intelligence: “A fairly attractive and yet clearly incomplete conjecture is that the difference between creative thinking and unimaginative competent thinking lies in the injection of some randomness” (p. 14).
- 8.
Although the question of the mind is mentioned here, the present argument does not deal with it extensively.
- 9.
Even infinitely responsible, echoing Emmanuel Levinas’ (1979) ethical philosophy.
- 10.
Turing’s proposition makes it possible to formulate the intelligence of the machine in the realm of meaning as stemming from its capacity to move away from its initial programming. However, this is not the only manner in which AI could be said to be contributing to meaning making. Some branches of AI, such as computer vision, natural language processing, or context-aware algorithms in general, can contribute to decision-making processes. As they become part of the agency that results in action, it could be said that they also operate in the realm of meaning. Turing does not talk about different genres of programming, as his discussions are rooted in Turing machines and learning machines; for this reason, I have not indulged in detailing more specificities of such technologies.
- 11.
This claim can be read by analogy with Langdon Winner’s famous argument about politics-by-design and inherently political technologies. Winner suggests that there are two ways in which technologies are political. Politics-by-design suggests that technologies might reflect some politics that go into the design and implementation of a technical system, whereas inherently political technologies refer to “systems that appear to require, or to be strongly compatible with, particular kinds of political relationships” (Winner, 1980, p. 123). Applying his formula to the concept of error, one can talk about error-by-design or inherently erroneous machines. Error-by-design would once again bring the analytical focus onto the designer or some mechanical fault. However, if the machine is inherently erroneous, then our analysis would have to deal with the agency of the machine.
- 12.
It could be said that Turing had insight into the sociological workings of the mind, even if he did not explicitly deal with these questions. Indeed, a recent article highlights how Turing’s life and work reflect the three features of C. Wright Mills’s sociological imagination (Topal, 2017): Turing was (a) able to work out the relations between what is close (the human mind) and what is distant (the machine); (b) through these analyses, able to define new sensibilities; and (c) possessed of the ability to imagine a future sociological reality.
- 13.
Weber doesn’t use the term intentionality but rather “feeling states” (p. 25).
- 14.
This is in line with Actor-Network Theory’s emphasis on the symmetrical treatment of the entities—human and nonhuman—that go into a social analysis. Bruno Latour’s (1992) famous essay “Where Are the Missing Masses?” is a critique of sociology’s exclusion of nonhumans from the ontologies of the social. While controversial, this argument opened up a rich avenue for analyzing the construction of reality with particular—symmetrical—attention to materiality and nonhuman actants.
- 15.
Ingold uses the metaphor of SPIDER as opposed to ANT; much like the spider produces the webs around itself through the materials that come out of its body, he suggests that relationalities, as such, are also intimately—and materially—bound together.
- 16.
This formulation takes force from Mead’s emphasis on encounter as well as Emile Durkheim’s (1912/1986) discussion of the intensity and materiality of social forces in “The Elementary Forms of Religious Life.”
- 17.
The work of the sociologist, or the anthropologist, could then be considered to be contributing to the field of AI as it could not be separated from the object of analysis. This follows the argument Nick Seaver (2017) makes “that we should instead approach algorithms as ‘multiples’—unstable objects that are enacted through the varied practices that people use to engage with them, including the practices of ‘outsider’ researchers” (p. 1).
- 18.
The irreducibility of the machinic “intelligence” to a straightforward equilibrium between input and output provides this capacity of the machines to encounter in a social sense.
- 19.
I am referring to Foucault’s critique of humanism. Humanism is not only a theory that attempts to explain social life in terms of “natural” characteristics of the human subject but also a meta-theory (especially after the reflexive turn) that underlies much of modern social sciences’ methodologies that stem from self-understanding (Paden, 1987). More significantly, this unchanging notion of the human is the product of the Enlightenment in the West, and, as such, it is deeply entangled with the processes of colonization and capitalist exploitation.