14

Automation Need Not Impoverish Education: We Welcome Our New Robot Colleagues.

Despite the powerful technoscientific recoding of the sector explored above, we argue that tangible opportunities remain for teachers to shape the agenda of algorithmic technologies in education. This chapter builds on the critique presented in chapter 13, addressing in further detail the topic of automation. It does this by attempting—perhaps surprisingly—to engage not just with its problematic aspects, but also by assessing some of the positive ways in which teachers might engage with the algorithmic and data-driven technologies of automation.

The automation of teaching has been a long-standing aim of for-profit technocorporations, technorationalist education management, and others seeing technology as an instrument for scaling up education at low cost or achieving other efficiencies in existing education systems. It has for these reasons also been an enduring point of resistance for the profession, and a highly politicized trope within schools and institutions of higher learning. Dramatic accounts of the potential replacement of teachers by technology, in particular by artificial intelligence (AI) in a context of wider concerns around the future of work, surface periodically in education (von Radowitz 2017). Yet research and development to date has been directed more toward the incremental automation of specific teaching functions than of teachers themselves (plagiarism detection is one example, discussed in part V).

The manifesto point we address here is often commented on as sitting oddly with the generally critical focus of other aspects of the manifesto. Yet we argue that if we do not feel ready at this point to actively welcome our robot colleagues, we should at least be prepared to open the door to them. To the extent that aspects of automation may prove to be genuinely beneficial to teachers, it seems important to remain open to the idea that automation may allow us to explore new kinds of critical pedagogies, new creative possibilities, and new kinds of usefulness to our students. The key point we wish to make here is that for this to be the case, automation technologies in teaching should be researched and developed not for teachers but by teachers. Teachers, the act of teaching, and the learning and well-being of students, not efficiency imperatives or fantasies of frictionless scaling up of education, should be placed at the center of the way we think about automation.

Two driving ideas tend to underpin the push for automation of aspects of teaching: the aim of providing personalized, direct student-to-teacher tuition and the commitment to efficient pedagogic delivery. Crucially, both of these ideas reveal underlying assumptions about the relationships between humans and technologies that tend to limit the ways automation manifests in educational practices. The narrative of personalization is particularly prominent here, where data-driven algorithmic technologies are promoted as ways of tailoring educational content or feedback for an individual student. This idea has attracted significant interest—for example, in the form of funding from the Gates Foundation (Newton 2016) and the development of the Personalized Learning Plan software associated with Facebook (Herold 2016). Underlying this work is often an explicit aim of producing a personalized, one-to-one tutor for every student using automated, artificially intelligent, and data-driven technology. Prominent educational commentator Sir Anthony Seldon has promoted this idea, suggesting “the possibility of an Eton or Wellington education for all” through the idea that “‘everyone can have the very best teacher’ driven by ‘adaptive machines that adapt to individuals’” (von Radowitz 2017). Milena Marinova, director of artificial intelligence at educational publisher Pearson, appears to share this vision, describing an idealized world of educational AI in which “every student would have that Aristotle tutor, that one-on-one, and every teacher would know everything there is to know about every subject” (see Olson 2018). This is a fantasy that extends back decades, to the early years of educational technology, when Patrick Suppes (1966) predicted that “in a few more years millions of school children will have access to what Philip of Macedon’s son Alexander enjoyed as a royal prerogative: the personal services of a tutor as well-informed and responsive as Aristotle” (207).

As Friesen (2019) elaborates, the ideal of the one-to-one personal tutor is a well-established and orthodox basis for education, stretching back through Rousseau, Comenius, and Socrates (through Plato and Xenophon). Rather than being straightforwardly innovative, the automated tutor manifests as “a kind of repetitive continuity” (4). For Friesen, the promise of the automated AI tutor can be understood as part of a contemporary “technological imaginary” of the global availability of learning (Friesen 2019, 2). Friesen further questions the “mythology” of the one-to-one educational relationship, and the way it has been taken up by computing science as representative of an authentic form of pedagogy. For Friesen, “dialogue, in short, is a ubiquitous yet irreducible experience” (2019, 12), and attempts to replicate it with technical systems merely produce systems of control: “For education or any other aspect of social activity to fall so completely under the dominance of a total vision of social and technical engineering would be ‘totalitarian’ in and of itself” (13).

Automating technologies adopt computing and data science methods that tend to be oriented toward the categorization of individual students and predefinition of possible routes through supposedly personalized environments. In this sense, they codify and systematize educational conversations and relationships that are generally seen by teachers themselves as more malleable and emergent. As we have suggested, an economic rationale tends to underpin the development of such systems, which position automation as a way of introducing efficiency into teaching practice. This drives the idea that automation provides a straightforward labor-saving function that can free teachers from the routine and repetitive aspects of teaching and create space for the nurturing of abilities assumed to be uniquely human in both teachers and students.

Phillips (2018), for example, sees data-driven technologies, such as machine learning and AI, as having the ability to diminish “the mundane duties that consume a teacher’s time,” notably here singling out “grading papers and tests” as examples of the banal tasks of the teaching profession (Phillips 2018). Key to this rationale is the idea that a more authentic mode of teaching can be achieved once teachers are liberated from such burdens, with technological development—rather than, for example, a rethinking of the way we approach assessment and grading—seen as the solution. Extolling the virtues of artificial intelligence in education (or “AIEd”), Luckin et al. (2016) suggest: “Freedom from routine, time-consuming tasks will allow teachers to devote more of their energies to the creative and very human acts that provide the ingenuity and empathy needed to take learning to the next level” (31).

This is a key aspect of the discourse around automation in education, which positions the technology as passive and instrumental while portraying teaching and learning as creative, entrepreneurial, and, ultimately, more authentically human. Distinguishing a specific role and set of capacities for humans using automated technologies is well established, with Moffatt and Rich (1957) suggesting that a future society is “likely to put a premium on originative skill and imagination” (273).

It is not just teachers therefore who benefit from the supposed ability of automated technologies to render humans “more human.” Luckin et al. (2016) frame AIEd as specifically designed to provide the means through which students can unleash their intelligence, directed toward particular ideas about the kinds of skills and abilities required for a future world of pervasive automation. Employment in this scenario will be more “cognitively demanding,” necessitating the “higher order skills” of problem solving (47). This labor will also involve “social skills . . . [the] ability to get on with others, to empathise and create a human connection” (47). Such abilities are seen as emanating at least in part from a structured personalization and a one-to-one relationship with the technology.

What we find across the calls for personalization and labor-saving automation in education is a prominent turn toward a broader discussion of the human condition and the ways it might be differentiated from the emergence of ever smarter technologies. The place of the learner is apparently prioritized over the workings of the technology in this vision, yet the supposedly human qualities on which learner preeminence is based are thinly defined. Moreover, this attention to the quality of the human condition should be seen as a long-standing educational orthodoxy rather than some new direction for teaching in an era of automation. As Pedersen (2015) reminds us, education has long been viewed as the “humanist project par excellence,” seen as “a key component of compulsory becoming-human . . . connected to a general idea of education as something inherently ‘good,’ that can somehow make us become better human beings.”

This educational commitment to humanism is clearly maintained in the discourses of human exceptionalism that accompany the promotion of data-driven automation in education. However, as our own research has argued (Bayne 2015b), this perspective significantly limits the ways we as teachers can conceive of working with technologies in education by holding an authentic humanness separate from technical artifacts and tools. Much more creative and expansive opportunities arise when this underlying separation is challenged and a posthumanist sensitivity to the entangled relationships between humans and technologies is embraced (Bayne 2015b). (See part I of this book for an overview of this area of theory and its importance to the manifesto.)

In other words, rather than perceiving automated technologies as merely instrumental—passively delivering a personalized curriculum or straightforwardly contributing to imperatives for efficiency and scale—one might seek to surface ways of bringing automation and pedagogy together in productive and playful relationships that develop teachers’ and students’ critical understanding of digital education. Automated agents should not be developed only by data scientists. If teachers can take control, shaping and forming automated agencies that align with their own professional values, we open up a future of critical and creative teaching that is far beyond the instrumental assumptions, humanist orthodoxies, and technocorporate visions of scale and efficiency which have dominated the debate so far. Automation need not impoverish education: we welcome our new robot colleagues.

Conclusion: The Politics of “Technical Disruptions”

This part has examined four prominent aspects of digital education signaled by four statements in the Manifesto for Teaching Online: open education, MOOCs, algorithms, and automation.

The sequence of these points is important, highlighting the ways that open education initiatives have been positioned within a wider shift toward data collection and the extraction of value through algorithmic processing. Both openness and automation have garnered much in the way of mainstream media attention in recent years, each seen as potentially transforming the higher education sector by breaking down barriers to participation and enhancing and economizing teaching practices. It is therefore vitally important to temper these high-profile discourses of disruption with teacher- and teaching-centered perspectives that surface the ways in which they are problematic.

Drawing on Isaiah Berlin’s “Two Concepts of Liberty” (1969), we have seen that forms of open education often assume a political role aimed at reducing centralized barriers to access, alongside a tacit understanding that openness is ideologically neutral. This assumes the presence of a body of learners who are already self-directing, autonomous, and independent thinkers. This kind of “negative openness” is inclined toward the view that the ability and desire to learn is instinctive and innate and will naturally emerge without the need to specify its operational details or ideological basis in advance. We counter this view with the point that the primary responsibility of education is to support students in the ability to think critically and independently; we cannot assume that these capacities are in some way preloaded and ready to go. We have therefore argued for the recognition of the complexity of the ideal of openness, suggesting that all openings create new closures. It is not sufficient to see openness as a transcendent, universal, and utopic condition by which education can be straightforwardly transformed.

The chapters in this part have also discussed the ways in which MOOCs emerged as a high-profile example of open education, promising liberation from the constraints of geographical distance and financial limitations, as well as the centralized curricular and pedagogic structures of university campuses. We have suggested that this view takes an overly narrow and uncritical view of participants as human capital, ushering in new global forms of standardized and scaled provision. Despite this, alternative MOOCs have emerged that demonstrate more interesting course designs that account for localized, self-organizing groups and open up new ways of understanding interdisciplinarity.

We have argued that attention needs to be paid to the growing use of data-intensive computational processing in education. The burgeoning field of learning analytics is offering powerful and appealing insights about educational activity through software interfaces, dashboards, and visualizations. These technologies can be seen to call into question and shift the authority and professionalism of the teacher, while the emergent algorithmic infrastructure permeating contemporary educational activity is introducing a new kind of exclusive, centralized technocracy, impenetrable to students and teachers alike.

The chapters in part III have outlined some of the ways such data-driven technologies are being employed to automate certain teaching functions. We outlined the historical drive for personalized learning and its underlying emphasis on efficiency and instrumentalism. Significantly, this discourse was shown to promote a particular view of humanness, mobilized as a way of preserving human exceptionality in a world of increasing automation. While the manifesto is concerned with encouraging critical perspectives on the rise of automated technologies in education, part of this involves opening up space to consider creative alternatives. We therefore also present some proposals for an automation that replaces efficiency and precision with a playful kind of excess, demonstrating a way ahead that values the productive entanglement of human teachers and automated teaching machines.