
A Routine of Plagiarism Detection Structures-in Distrust

Finally, we explore one particularly pervasive form of monitoring of students in the university: automated plagiarism detection. At the time of writing (summer 2019), one of the largest of the software systems providing this service, Turnitin, had just been sold to the media conglomerate Advance Publications for the sum of US$1.5 billion. Commentators were quick to point out that this huge valuation rested in large part on the perceived value of Turnitin’s data set (many millions of student assignments), highlighting the intellectual property concerns this raised (McKie 2019) and emphasizing the negative impact on trust implied by routinized use of such services. Here we take this matter of trust and examine it further, particularly in relation to the dangers of routine plagiarism detection for relationships of trust between students and teachers.

Philosopher Annette Baier (1986) defines interpersonal trust as “accepted vulnerability to another’s possible but not expected ill will” (235). She explains that trust is necessary because “no one is able by herself to look after everything she wants to have looked after, nor even alone to look after her own ‘private’ goods, such as health and bodily safety” (1986, 236). Trust exists because of interpersonal risk and vulnerability: “where there is no vulnerability there is no need for trust” (Tschannen-Moran and Hoy 1998, 337). This vulnerability can extend in both directions in, for example, a teacher-student relationship. Townley and Parsell (2004) characterize this as a situation where “students risk error, failure, humiliation, teachers risk disappointment, deception, and indifference” (275). Power imbalances make trusting relationships both more than, and more complicated than, a simple contract between consenting people.

At an organizational level, trust makes possible the smooth operation of day-to-day business, minimizing costly and time-consuming practices of control and maintaining social relationships (McEvily, Perrone, and Zaheer 2003, 98). Lack of trust undermines the exercise of professional autonomy and agency by staff, risking that institutions become paralyzed by ever-expanding control practices. Yet leadership in higher education has been shifting toward regulation and away from trust (Olssen and Peters 2005, 324) in a managerialist turn that sees educational activity as analogous to industrial production. As such, teaching is increasingly understood via concepts of regulation, efficiency, and economy (Hall 2016; Lynch 2006), aligned to the commodification of colleges and universities that we outlined in part I. In higher education settings, a culture of surveillance, facilitated and intensified by technology, risks creating conditions that are highly risk-averse and destructive of the trust basis on which academic and student autonomy and agency rely. Technology architectures introduced to build trust by mapping performance may end up directly undermining these very goals (Tschannen-Moran and Hoy 1998), as we exemplify here in the case of plagiarism detection systems.

Plagiarism detection software generally works as an intermediary layer between the student (who submits writing online, either directly through the service or through a virtual learning environment that connects to it) and the teacher (who retrieves the writing and provides feedback and a grade, often within the same online system). The plagiarism detection software applies matching algorithms to the submitted work, comparing it to the many hundreds of thousands of texts contained in its database. The higher the match, the higher the similarity score the work will receive. Teachers receive this score, along with color-coded views of the student’s text, when they retrieve the work from the system. Plagiarism detection systems are of course just one example of the surveillance culture in which higher education now operates. However, they are a particularly striking example because of how widespread, accepted, and normalized they have become (the Turnitin website claims “over 15,000 institutions in over 140 countries”; Turnitin, n.d.), and how explicitly they intervene in one of the core tasks that students and their teachers conduct: the production and assessment of student work.
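To make the mechanism concrete, the sketch below (ours, in Python) shows the kind of n-gram overlap comparison on which similarity scoring of this sort typically rests. It is a deliberately simplified illustration rather than a description of Turnitin’s actual algorithm; the function names, the n-gram size, and the percentage score are hypothetical choices made for clarity.

```python
# Illustrative sketch only: a toy n-gram overlap comparison of the kind that
# underlies similarity scoring. This is NOT any vendor's actual algorithm;
# names, n-gram size, and the percentage score are hypothetical choices.

import re


def ngrams(text: str, n: int = 5) -> set:
    """Lowercase the text, strip punctuation, and return its word n-grams."""
    words = re.findall(r"[a-z']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}


def similarity_score(submission: str, source: str, n: int = 5) -> float:
    """Percentage of the submission's n-grams that also appear in the source."""
    sub = ngrams(submission, n)
    if not sub:
        return 0.0
    return 100 * len(sub & ngrams(source, n)) / len(sub)


def highest_match(submission: str, corpus: list[str]) -> float:
    """Report the best match across a (here, tiny) corpus of prior texts."""
    return max((similarity_score(submission, doc) for doc in corpus), default=0.0)
```

A commercial service performs a comparison of this general kind at vastly greater scale, matching each submission against an enormous corpus of prior texts and reporting the highest-matching sources alongside the score; it is this reporting that produces the color-coded views teachers see when they retrieve student work.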

Plagiarism detection systems need to be seen not only in terms of how well they meet their stated aims (and there is some evidence that they do not do this particularly well; see Youmans 2011), but also in terms of their other effects, such as their contribution to cultures of surveillance within the institution. Zwagerman (2008) argues that “plagiarism detection treats writing as a product, grounds the student-teacher relationship in mistrust, and requires students to actively comply with a system that marks them as untrustworthy” (692). He sees these systems as “the inevitable end point of the integrity scare: an efficient, perhaps even foolproof, technology of surveillance” (691). Morris and Stommel (2017) note that, in addition, plagiarism detection platforms turn student intellectual property into profit for private companies, such as Turnitin, which are not accountable to them. Whatever teachers intend, however benevolent their individual purposes in deploying plagiarism detection software, we argue that their own practices, and indeed what it means to teach, are altered in the process, and not for the better.

Logics of surveillance are strongly at work in the practices of plagiarism detection, which attempt to regulate student behavior through the exposure of their writing to algorithmic scanning and monitoring. Such logics treat dishonesty, cheating, and other negative behaviors as inevitable unless actively prevented; they cast students as needing careful monitoring, both for their own good and for the smooth functioning of the learning and teaching process; and they assume that dishonesty can be screened out through the use of technology.

Much of the critical literature around plagiarism detection practices explores, in one form or another, the shifting, historically specific, and thorny terrain of authorship (Blair 2011; Marsh 2004; Vie 2013), an issue that has been highlighted by networked writing practices including collaboration and remix. We explored this territory in part II of this book, in which we described the ways in which digital writing reworks traditional conceptions of authorship. Plagiarism detection processes oversimplify these complex issues and trajectories, implying a clear-cut answer to conceptually very difficult questions around what is meant by originality.

Plagiarism detection systems are claimed to act as both a deterrent and a method of identifying work that may be plagiarized (plagiarism detection companies are generally careful to stress that the decision about what constitutes plagiarism is always one for the human teacher to make). Many claims are made about the effectiveness of these systems in helping students learn about good academic practice (Mphahlele and McKenna 2019), particularly where students are allowed to run their work through the system before formally submitting assignments. They are also claimed to save time for teachers and—in what is a familiar theme extending through this book—to be more effective at identifying plagiarism than the teacher alone can be, thereby casting the teacher as deficient. They offer to make visible what might otherwise be hidden or missed. This visibility is at the heart of the surveillance culture operating in colleges and universities and the changed relationships it builds between students and teachers.

Fundamentally, being monitored cannot be perceived as a neutral act, and the implication that good behavior is not anticipated signals that the one doing the checking assumes a relationship of distrust, which breeds distrust in return. In addition, the language around plagiarism frames it ideologically as a moral or cultural failing on the part of individual students (for example, through the term academic dishonesty). Introna and Hayes (2011) have done outstanding work to surface how this works against the interests of particular, often more vulnerable, groups such as international students. The surveillance practices instantiated in plagiarism detection and the language used may encourage the subversiveness and disobedience they seek to eradicate by fostering “attitudes of ill-will, skepticism, and distrust by signaling suspicion” (McEvily, Perrone, and Zaheer 2003, 99).

Zwagerman (2008) describes plagiarism detection as a “crusade against academic dishonesty” that is more damaging to the ideals of academia than cheating (677) and “diverts us from developing a pedagogy that encourages students’ authentic engagement with words and ideas” (682). Indeed, at least one element of designing good tasks—to encourage learning and enable assessment—is to design out plagiarism and other forms of academic practice that reduce the effort expended and the challenge experienced by the student (Carroll 2002; Macdonald and Carroll 2006). Active thinking around the task is the foundation of good learning through assessment, and the direction of that thinking will determine what is learned. If the nature of the task is such that subcontracting it is possible, then the student has a choice: to accept the task as set and work hard to address it, or to find a workaround that reduces the effort needed. The teacher can then deploy screening technology to ensure that, should the latter route be chosen, the student will be caught. In the face of this algorithmic screening, the student still has a choice: to ignore the screening and work on the task, or to pay attention to the screening and work to circumvent it. What will be learned in the latter case will have little to do with the intended learning outcome of the original task.

Zwagerman goes further, arguing that a routine of plagiarism detection “reinforces rather than interrogates social roles and power differentials [and is] hostile toward critical thinking” (693). He argues persuasively that the use of plagiarism detection software sends entirely unhelpful messages about student work. Requiring students to submit their writing to a commercial service before a teacher even sees it “tells students that the first thing we look for in their work is evidence of cheating” (694). Introna (2016), in his work on algorithmic surveillance, judges this to be part of a commodification of academic writing:

The academic essay (with its associated credits) is enacted as the site of economic exchange—academic writing for credit, credit for degree, degree for employment, and so forth. Within such a rationality, academic writing is an important commodity whose originality (or ownership) needs to be ensured—that is, against the unoriginal copy, presented fraudulently. (33)

As Gneezy and Rustichini (2000) found in their work on social and economic contracts, goodwill lost by making what was a relationship of trust into an economic contract cannot be regained. Institutions engaging in the commodification of student work through processes of plagiarism detection put teacher-student relationships of trust at great risk. Our own approach to this issue has been a clear and sustained refusal to use routine plagiarism detection systems in our own teaching, despite these being normalized within our institution.

Townley and Parsell (2004) critique the assumption that plagiarism is a problem of technology and therefore requires a technical solution. Instead, they argue that plagiarism arises because of a “failure of community,” where academic values and attitudes are not being transferred from teachers to students (276). The reinvigoration of concepts of trust—and the importance of distinguishing them from untrusting practices of transparency and surveillance—will require action on the part of teachers to build and support academic communities in which trust is an underpinning.

Returning to Baier’s definition of trust as necessary because “no one is able by herself to look after everything,” we might reflect on the nature of the teaching assemblage created by the algorithm and the human together. Outsourcing some of our cognitive functions to a person or a technology requires some fundamental investment of trust in that person or technology. Should teachers trust plagiarism detection software to be a partner with them in shaping students’ relationship to higher education—and to their own writing? Is the understanding of teaching that it manifests one that we support?

We would argue that it is not. We believe that plagiarism detection brings the algorithm and the human together in a teaching assemblage that is aligned with ethically unsound surveillance cultures in the university, and as teachers, we need to be much more attentive and critical about our own academic conduct in this regard. We suggest that teaching should not base itself in the foundations of distrust and commodification embodied by the plagiarism detection system, but should rather focus on building strong communities of trust and reciprocity within which students are motivated to learn. A routine of plagiarism detection structures-in distrust.

Conclusion: Strategies of Future Making

The chapters in this part of the book have made explicit connections between surveillance and trust, exploring the problems that come when, through surveillance and monitoring technologies, educational institutions fail to properly value the concept of trust as a mode of organization.

It is increasingly being argued—for example, in relation to the emergence of cryptocurrencies—that the direction of travel for a functioning society relies on trustless systems: those that are risk free because they do not require human judgment or negotiation, instead relying on unbreakable, perfect recording and visibility of financial or other transactions (Schaub et al. 2016). The desirability of this “trustlessness” is in sharp contrast with many decades of research into, for example, the place of collaboration, reciprocity, and sociality in human society (Wright 2001). As O’Neill (2002) has argued, it is ultimately not possible to eliminate the need for trust, only to move it further along some chain of accountability. These locations and processes of accountability are where we must turn our attention in order to understand the implications of technologies of visibility and monitoring for higher education.

Calls for more digital literacy, or better legal protection for personal data, do relatively little to address the ethics of trust that are so urgently in need of attention. The concept of a safe online space or a digital sanctuary is attracting increasing attention (Collier 2017), and our own recent research has emphasized the significant social value for students of surveillance-resistant anonymity in digital spaces (Bayne et al. 2019). However, whatever unmonitored spaces still exist in campus-based environments risk being eroded with the introduction of biometric monitoring and other facets of the data-intensive, “smart” campus (JISC 2017) that seek to make the concept of “off the record” defunct. At the same time, individual acts of subversion—attempts to limit visibility—become less possible within a surveillant campus culture built around monitoring, performance, and transparency. This is not just a problem for digital educators: the higher education community as a whole ignores it at its peril.

However, to address the issues raised in this part, we must be careful not to suggest that the answer lies in a return to an earlier time (before digital platforms, when class sizes were smaller and colleges served a more homogeneous population of students). Nostalgia will not help us to resist the dangers that would accompany a trustless higher education system. Instead, we must move toward digital futures for higher education that do not have surveillance at their heart. This is profoundly challenging work but ethically and pedagogically necessary.

At a macrolevel, the 2018 introduction in Europe of the General Data Protection Regulation (GDPR) is putting the onus on all data-gathering organizations, including colleges and universities, to reflect on, minimize, and deal ethically with the data they gather and hold about individuals. Such reimaginings of privacy, the right to be forgotten, the importance of context, and the limits citizens should be able to place on how visible they are offer real possibilities for higher education. At the microlevel, our work with master’s students on our online distance program in digital education explores the potential impacts of learning analytics by making data visible to students in new and readily graspable ways (see Knox 2017b). When we make the extent of the data they generate clear to students, they express anxiety, fear of judgment, and resistance to what they see as inaccurate or unexpected reflections of their engagement.

Research into student responses to plagiarism detection software has shown that students understand its required use to be equivalent to an accusation of cheating or dishonesty (Penketh and Beaumont 2014, 100). When students comply with these practices or sign off on their use through accepting terms and conditions, that should not be taken as active, informed consent. As Lyon (2017) points out, normalization of surveillance and the erosion of expectations of privacy are central components of compliance with such practices. An effective strategy of resistance will include finding ways to resensitize ourselves and our students to the values we want to prioritize in our classrooms and offering means by which students can voice their responses to surveillance cultures in higher education.

A second important dimension of resistance is how we address these issues at strategic levels within our institutions and the sector more widely. There is a problem of leadership in digital education when significant decisions about technology practices are made on the basis that they are technical rather than pedagogical, cultural, or ethical. For this reason, important critical discussions may not take place before major initiatives are launched.

Ultimately, movement on these issues will require collective action to design and make real new possibilities for trust at scale, at a distance, and in the context of allowing students more freedom to be less visible. Along the way, we can correct some of the problematic assumptions that make surveillance practices seem so necessary: that face-to-face, small-scale classrooms are the only places where relationships of trust can flourish (see part IV); that making individuals’ behavior, movement, and activity visible is a good substitute for those trusting relationships; and that desires to capture, record, and monitor can be deployed benevolently. We might, while we are at it, explore and challenge damaging assumptions that students are at college to game the system and seek more optimistic framings of students as willing partners within a community of education built around trust.