Higher education, like other sectors, is now intensely technologically mediated, with most aspects of administration, research, and teaching suffused with digital processes. In pursuit of quality, efficiency, and transparency, strategic and operational decisions about technology are intensifying the datafication of academic productivity, engagement, and outputs. Activity is increasingly subject to normative processes of algorithmic exposure and measurement (Lorenz 2012), and academics are becoming ever more entangled in what Lyon (2017) refers to as a “surveillance culture.” Defining the term, Lyon describes it as “people actively participating in an attempt to regulate their own surveillance and the surveillance of others” (824). This participation need not be enthusiastic or strategic, but it indicates a different situation from the one Lyon and others previously described as surveillance society:
Surveillance society is a concept originally used to indicate ways in which surveillance was spilling over the rims of its previous containers—government departments, policing agencies, workplaces—to affect many aspects of daily life. But the emphasis was still on how surveillance was carried out by certain agencies in ways that increasingly touched the routines of social life—from outside, as it were. This concept was often used in ways that paid scant attention to citizens’, consumers’, travelers’, or employees’ experience of and engagement with surveillance. (826)
A surveillance society framing, then, is one where surveillance is, by and large, understood as being done to people by agencies. Surveillance culture, by contrast, is characterized by attention to “widespread compliance with surveillance” (Lyon 2017, 828), and the way people “participate in, actively engage with, and initiate surveillance themselves” (829). It is closely linked to more benign concepts such as sharing, safety, and transparency.
These concepts are far from neutral, however. Transparency, for instance, as O’Neill (2002) puts it, “has marginalised the more basic and important obligation not to deceive,” replacing trust with accountability (Ellaway et al. 2015). Transparency in this view seeks to formalize away the moral imperative not to deceive, removing the vulnerability that is the essential element of a trusting, reciprocal relationship while potentially masking a capacity to cheat. Similarly, sharing, when cast as an imperative rather than a choice, strips individuals of the agency that makes such an act meaningful and obscures the motivations of those doing the sharing.
Natural desires for connection and security, in a technological and market-driven context that generates and thrives on massive amounts of data, create practices that can be understood as “soft surveillance” (Lyon 2017, 833). A particularly important feature of surveillance in its cultural sense is that it is not necessarily or even primarily clandestine; it is better understood as a set of relations or a sensibility and the values aligned with it (transparency, sharing, safety) as contingent and provisional. There are connections here to the critique of openness that we discussed in part III.
Lyon (2017) argues that questions around visibility are ethical ones and that “surveillance ought not merely to be of people . . . so much as for people—and thus should be practiced carefully and held to account” (835). In addition, as Zuboff (2015) argues, “human fallibility in the execution of contracts is the price of freedom” (81). Zuboff critiques claims that surveillance enables new contractual forms, describing it in terms of what she calls “Big Other,” where “habitats inside and outside the human body are saturated with data and produce radically distributed opportunities for observation, interpretation, communication, influence, prediction, and ultimately modification of the totality of action.” This is the “un-contract,” in her terms, where the autonomy needed to enter into contracts gives way to the “rewards and punishments of a new kind of invisible hand” (82).
Those of us who work in digital education contexts are aware that many of the spaces in which we meet to engage in online education create data trails that make them at least potentially, if not currently, subject to practices of surveillance, analytics, and data mining, whether for educational or commercial purposes (or, often, both). Srnicek’s (2017) work on platform capitalism highlights the mechanisms through which data are produced, extracted, monetized, stored, monitored, and reconfigured in a new model of value that relies on individual participation and interaction on platforms. Platform providers are agnostic about the kinds of interactions that produce these valuable data as long as the data keep flowing, and—as we saw in part III in the discussion of MOOC data—digital education settings can be a gold mine of these flows. These issues are becoming more pronounced as the consequences of digital data tracking and analysis become more significant and better understood. Movement through physical campus space, for example, leaves readable digital trails, as when students swipe an ID card to enter campus buildings. Where our manifesto says “online courses,” we therefore take this to apply to all courses with significant technology elements, whether part of the teaching, the administration, or the assessment of the course.
Teacher-student relationships are changed by cultures of surveillance in the university, and technology is implicated in these cultures. Indeed, flows of power and agency in the contemporary university might usefully be mapped by exploring how surveillance cultures operate. To the extent that the human-technology teaching assemblage is constituted in ways often beyond the control of individual teachers, we need to consider collectively what kind of datafied future we want and how we go about building it on a set of values that we can align with as professionals.
One area of intensification of visibility in the university is the implementation and continual upgrading of the technological tracking of student engagement. As we discussed in part III, the field of learning analytics has been at the forefront of this work in recent years, with extensive attention paid to the potential of analytics for improving assessment, feedback, prediction of student success, self-regulated learning, and more (Papamitsiou and Economides 2014). Less attention has been paid to the implications of such tracking on the intensification of surveillance cultures in higher education and on the expectations of students and teachers for how much student activity, work, and engagement should be monitored. As Prinsloo and Slade (2016) explain,
Students often have no insight into the data collected by their HEI and so there is no possibility that data can be verified or any context provided. Considering the asymmetrical relationship of students and their institutions, students potentially then become quantified selves based on, for example, the number of log-ins, clicks, downloads, or time-on-task. (177)
For Prinsloo and Slade, recognizing the impacts of surveillance on configurations of vulnerability is a key factor in ensuring the ethical and pedagogical appropriateness of learning analytics in higher education.
On the physical campus, attendance is often used as a proxy for engagement, and student location data are gathered for the purposes of monitoring attendance at lectures and seminars, as well as presence in other settings such as libraries. A review of attendance policies from UK universities, for example, reveals a common pattern in how they describe the purposes of monitoring. They regularly refer to research finding that attendance or participation correlates with higher grades for university students (see Lukkarinen, Koivukangas, and Seppälä 2016), implying a causal relationship, and they often describe attendance monitoring as aiming to support student achievement. They note that absences might indicate students who are struggling, so tracking attendance can be used to flag potential issues and follow up with students who may need help. Finally, they regularly cite government regulations requiring the monitoring of international students on particular kinds of visas. Macfarlane (2013) identifies a further work-preparation argument for this monitoring (82).
In his later work on student freedom, Macfarlane (2016) categorizes attendance monitoring as one of a number of processes in the contemporary university that “demonstrate both a lack of trust in students and failure to respect their freedom to learn as an adult” (81). As a form of “bodily performativity,” attendance stands in for learning that is not easy to measure and blurs the distinction between presence and engagement. As these systems intensify, there are seemingly never enough attendance data, and the systems are never foolproof enough. The notion of the loophole (for example, the potential for a student to carry a classmate’s ID card and swipe that person in) becomes a reason to develop ever more invasive methods: from manual sign-in sheets to ID card swiping, Bluetooth/RFID scanning of devices, fingerprint scanning, and facial recognition.
The movement of students both on campus and online is of increasing interest to researchers seeking to determine which forms of movement and behavior correlate with academic success, and this interest is echoed in policies and practices that attempt to gather more, and more varied, data to analyze. The so-called intelligent campus (JISC 2017) has at its heart a focus on generating, analyzing, and interpreting data from the people who move within its virtual and physical boundaries. This may have value, although, as Gašević, Dawson, and Siemens (2015) point out, identifying which behaviors correlate with positive outcomes and then mandating those behaviors is ultimately likely to prove profoundly counterproductive, not least because correlation is not causation.
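The nonequivalence of correlation and causation at stake here can be made concrete with a small illustration (our sketch, not drawn from the literature cited above): in the toy model below, a hidden "motivation" variable drives both attendance and grades. Attendance therefore correlates strongly with grades, yet mandating full attendance leaves grades unchanged, because attendance was never the cause.

```python
# Illustrative sketch only: a hypothetical model in which a confounder
# ("motivation") produces both attendance and grades.
import random
import statistics

random.seed(42)

def correlation(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def simulate(mandate_attendance=False):
    """Simulate 1,000 students; grades depend only on motivation."""
    attendance, grades = [], []
    for _ in range(1000):
        motivation = random.random()                      # hidden confounder
        att = 1.0 if mandate_attendance else motivation   # attendance tracks motivation
        grade = 40 + 50 * motivation + random.gauss(0, 5) # driven by motivation alone
        attendance.append(att)
        grades.append(grade)
    return attendance, grades

att, grades = simulate()
print("observed attendance-grade correlation:", round(correlation(att, grades), 2))

_, grades_mandated = simulate(mandate_attendance=True)
print("mean grade, voluntary attendance:", round(statistics.mean(grades), 1))
print("mean grade, mandated attendance:", round(statistics.mean(grades_mandated), 1))
```

The observed correlation is strong, but the intervention (mandating attendance) does not move mean grades, precisely the gap between correlational evidence and causal policy that Gašević, Dawson, and Siemens warn about.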
We suggest in our manifesto point that gathering extensive tracking data to see what is useful is not neutral. It has implications beyond the specific use, and it ushers in an orientation to students as sources of data that can be unproblematically mined and analyzed in the interests of institutional performance and efficiency. As teachers, we need to confront the negative ethical and pedagogic aspects of creeping surveillance on campus and resist the uncritical assumption that the intensification of monitoring and tracking of students is somehow inevitable. Online courses are prone to cultures of surveillance. Visibility is a pedagogical and ethical issue.