© Springer Nature Singapore Pte Ltd. 2021
G. T. Stewart et al. (eds.), Writing for Publication, https://doi.org/10.1007/978-981-33-4439-6_11

11. Being an Author in the Digital Economy

Leon Benade
School of Education, Auckland University of Technology, Auckland, New Zealand

Abstract

The openness and responsiveness of the online and digital world in the twenty-first century raise critical questions for traditional academic scholarship, but also create opportunities to consider the role of online and digital media in academic scholarship. This chapter recognises the tensions present in this emergent relationship but suggests that it is more helpful to engage with online and digital media from a position of critically informed scholarship. It considers the shift to open access (OA) and online publishing, which has, perhaps inadvertently, enabled predatory publishing behaviour. Central to the digitisation of publication is the sophisticated field of bibliometrics. Metrics not only enable authors to gauge the reach and impact of their publications but also support them in selecting suitable outlets. Finally, authors are encouraged to establish and develop a digital footprint.

Keywords
Bibliometrics · Digital publishing · Open access · Postdigitality · Predatory publishing
Leon Benade

is an Associate Professor in the School of Education of the Auckland University of Technology. His research interests are teachers’ work, school policy, ethics, philosophy in schools, critical pedagogy, and the New Zealand Curriculum, with a current focus on Innovative Learning Environments (ILE). Leon is a co-editor of the New Zealand Journal of Educational Studies and the New Zealand Journal of Teachers’ Work. He is author of From Technicians to Teachers: Ethical Teaching in the Context of Globalized Education Reform (Continuum, 2012) and Being a Teacher in the 21st Century: A Critical New Zealand Study (Springer, 2017).

 

Introduction

The late twentieth century brought traditional academic scholarship and digital media into contact, revealing opportunities and challenges for each. The brashness of the online world, featuring opinion pieces, blogs and social media comments, contrasts with the temperate tone of academic scholarship and its judicious use of evidence, typically disseminated through the pages of scientific journals, historically owned by learned societies, at least until publishers began to grow in prominence (Larivière et al. 2015). Bringing academic scholarship into contact with the openness and responsiveness of the online and digital world raises questions and creates opportunities for discussion. What new publishing possibilities are presented by online and digital media? How is the nature of scholarly work influenced or affected by developments in the online and digital space? Can academics benefit from the opportunities offered by digital tools? What pitfalls and traps will academics encounter in the digital world? Does the power of digital data accumulation and analysis offer academics new insight into the reach of their work? How ought academics to locate themselves in this digital world? Or should they simply treat it with disdain, and continue as before?

Underpinning my response to these questions is a rebuttal of the suggestion that scholarly writers should reject the role of online and digital media in their work. This attitude may stem from privacy fears, concerns with the political and psychological influence of digital media, and a sense that the values and image of traditional scholarship are at odds with the ego-centred image of online activity. These tensions are recognised in this chapter, balanced by a view that it is more helpful to engage with online and digital media as a critically informed scholar than to refuse to engage on a point of principle. This chapter is addressed primarily to emerging scholars, but also to those experienced scholars grappling with the challenges of the digital world. After setting out some conceptual considerations, I examine the shift to open access and online publishing. Open access and the autonomy of the Internet have enabled predatory publishing behaviour, and scholars’ understanding of this phenomenon is important in the selection of journals and publishers. Digital data provides journal editors, publishers and authors with an array of metrics against which they can gauge the reach and impact of their publications. Understanding this terrain (including the potential for manipulation) will support authors in their selection of suitable outlets, but also in monitoring the uptake of their own work. Finally, consideration is given to the judicious establishment and development of a digital footprint, as this can enhance an author’s reach and impact.

Conceptual Considerations

Three considerations feature here: first, the neoliberal context of the academy, including the nature of scholarly work in that context; second, the concept of the ‘digital economy’ in relation to that neoliberal context; and last, postdigitality (‘the postdigital’). Neoliberalism retains its influence, certainly in most developed economies, despite its failings being exposed by the Global Financial Crisis of 2008, and the turn of many states to wide-ranging welfare policies in 2020 during the Covid-19 global pandemic. In these contexts, neoliberalism continues to exhibit its typical attributes, notably market mechanisms, manipulated by state actions (Jones 2019; Olssen and Peters 2005). These mechanisms, underpinned by a technical-rationalist epistemology (Patrick 2013) in which physical labour has been replaced by cognitive and emotional labour, are manifested in a wider performative and accountability culture (Olssen and Peters 2005). Political fascination with performative measures and efficiency gains has accordingly seen academics chasing after work contracts and grant funding, not only intensifying their work demands (Huws 2014), but locking them, as Huws (2014) suggests, into a cycle of ‘begging or bragging’. Thus, the commodification of academic labour is compounded by a neoliberal vision of universities as agents of knowledge production for commercial exploitation “rather than…[universities enhancing] individual development within sets of broadly conceived educational aims” (Patrick 2013, p. 3). Human ‘management’ by audit fuels the process of commodification, creating a culture of performativity underpinning academic work, requiring academics simultaneously to lose their autonomy while demonstrating their excellence and uniqueness (Huws 2014).

The audit culture is a demonstration too of neoliberalism’s positioning of the individual as inherently rational, enterprising, competitive and capable of self-management (Patrick 2013). Audit accountability devices are embodied in the academy, in processes such as the Research Excellence Framework (REF)1 in the United Kingdom, the Excellence in Research for Australia2 and the Performance-Based Research Fund (PBRF)3 in New Zealand. Success in these processes requires academics to emphasise the extent of their productivity, the quality of the dissemination outlets of their work, and the degree or extent of the reach or influence of their outputs.

The aims and patterns of the neoliberal state are supported by digital technology (Jones 2019), notably in the development of a globalised ‘knowledge economy’ (Robertson 2009). This economy has evolved employment requirements that are supported by networked digital technology, which enables team and collaborative possibilities, making various forms of remote learning, teaching and work possible (Jones 2019). Arguably then, the ‘digital economy’ may be seen as an offshoot of the neoliberal economy. This digital economy is characterised by the predominance of information and communication technologies, accentuating the shift to a knowledge society (Valenduc and Vendramin 2016). In their evaluation of what is novel and what is unique in the concept of the digital economy, Valenduc and Vendramin (2016) suggest these emergent principles:
  • Digitised information is becoming a strategic resource;

  • Networking is emerging as a distinct feature of work and society;

  • Digital technologies are producing vast quantities of data;

  • These technologies are providing tools that enable users to harness and leverage the value of this data.

The ubiquity of digitisation arguably gives rise to the view that it holds deterministic promise, leading to such breathless catchphrases as ‘a digital revolution’, often combined with the hackneyed ‘rapidly-changing society/world’. What is required in the face of such conventional responses to digital ubiquity is a critical attitude towards the digital. One such emergent idea is contained in the notion of ‘the postdigital’ or postdigitality, suggested by Peters and Besley (2019) as a critical or philosophical attitude in relation to the digital and to the idea that the digital can be a holistic explanation of the world. They propose a ‘critique of digital reason’. This is a twofold critique of the technical systems associated with digitality, and of the political economy of the digital, namely considerations of the acquisition and ownership of digital systems. The ‘post’ prefix in postdigital refers neither to ‘after digital’ nor to a replacement of the digital (Jandrić et al. 2018; Peters and Besley 2019). Instead, it invokes a sense of the ubiquitous nature of the digital and its coupling to daily lived existence. This ubiquity is so steeped in the day-to-day that ‘postdigitality’ can be said to describe a condition in which the ‘digital’ is better understood by its absence, rather than its presence (Negroponte 1998, cited by Jandrić et al. 2018). The digital ceases to be apart from basic humanity, and is instead understood to be embedded in it, becoming an inseparable part of it, ultimately reshaping the way humanity is perceived and understood.

It is this understanding, of digitality as neither deterministic of human behaviour nor as a phenomenon that can be disdainfully set aside, that forms a key premise of this chapter. It is written with a view to recognising that knowledge of the affordances available in the digital economy can enable scholars to spread the story of their research more widely. Further, while I acknowledge that academics are required to ‘play the game’ of using digital tools to locate themselves advantageously relative to others, I take the view that having a critical understanding of the digital world includes developing ‘interstitial strategies’ (Wright 2010), whereby scholars seek out cracks and fissures to advance their writing careers in a critically informed and ethical manner.

Critics of the position I have just outlined may dismiss it on the basis that engagement with the scholarly writing tools embedded in the digital economy simply reflects uncritical acceptance of neoliberalism in its various forms. Indeed, as Ball and Olmedo (2013) affirm, neoliberalism speaks and acts through our discourse and relationships, setting cultural and social limits on the possibilities for individual action. Still, they argue, and I concur, that neoliberalism opens new spaces for struggle and resistance. They quote Foucault (1997, p. 284): “Freedom is the ontological condition of ethics. But ethics is the considered form that freedom takes when it is informed by reflection” (cited by Ball and Olmedo 2013, p. 93). Furthermore, as Lemke (2011) has indicated, while studies of neoliberal governmentality have been fruitful in exposing neoliberal governance, there are several ‘blind spots’, among these being the development of a neoliberal metanarrative based on a formulaic response to a wide range of phenomena. Consequently, the “reader already seems to know in advance what he or she is going to read…[and, as]…a result, any surprising insights derived from the empirical data and material are effectively ruled out” (p. 99). While I acknowledge that there is value in taking seriously studies of neoliberal governmentality that consider programmes to “uncover what they hide and exclude” (Lemke 2011, p. 83), this chapter proceeds on the pragmatic premise that these programmes should also be considered for their potential to be used subversively, by seeking their cracks and fissures.

Online Publishing and Open Access

The following considerations bring together two overlapping, yet distinct, issues: open access to the knowledge product, and the shift from print journals (and books) to online publication. They have a common stem, however. The Budapest Open Access Initiative (BOAI) (2002) declared that the confluence of traditional forms of scholarly dissemination through manually type-set and printed journals on one hand, and the Internet on the other, had created an opportunity for extraordinary public benefit, which it referred to as ‘Open Access’. Knowledge is a public good, according to Suber (2012), one of the founding signatories of the BOAI, because its consumption cannot deplete it, and it is theoretically available to all. Open Access (OA) thus makes it possible to turn theory into practice. For BOAI (and subsequent OA statements) OA has two dimensions: first, the author as the copyright holder enables free access by others to use and re-use the author’s material, on condition of attribution (citation of the author) only; second, a complete version of the author’s published work is deposited in a freely accessible online repository (usually managed by academic institutions, learned societies, government agencies or research organisations).

Two similar events followed closely, in 2003. The Bethesda Statement on Open Access Publishing emerged from a meeting in Maryland of librarians, scientists and academics (including Suber), convened “to stimulate discussion within the biomedical research community on how to proceed, as rapidly as possible, to the widely held goal of providing open access to the primary scientific literature” (Suber 2003. “Summary”). A few months later came the Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities (Open Access 2003–2020), which aligned itself with the BOAI and Bethesda statements. The Berlin Declaration defined Open Access as the “unrestricted, online access to peer-reviewed, scholarly research papers for reading and productive re-use, not impeded by any financial, organisational, legal or technical barriers” (Science Europe 2015, p. 1). The dual goals of OA journals and institutional repositories, where authors could store versions of their published work ranging from pre-publication prints to author accepted manuscripts (AAM) or the version of record (VOR), gave rise to the concepts of ‘Gold OA’ and ‘Green OA’. ‘Gold OA’ applies to freely available published articles for which an Article Processing Charge (APC) applies, while ‘Green OA’ applies to freely available versions of published articles located in institutional repositories. The Berlin Declaration represented the culmination of the desire of the scientific community (researchers, funders, policy makers and relevant stakeholders) to attain the vision of the free availability to the public of research that is funded directly or indirectly by taxpayers.

More recently, in September 2018, ‘cOAlition S’, a consortium of research funders established under the aegis of the European Commission and the European Research Council (ERC), set itself the target of ensuring that scholarly research funded either by public or private grant “be published in Open Access Journals, on Open Access Platforms, or made immediately available through Open Access Repositories without embargo” (European Science Foundation 2020. “About”). Marc Schiltz (President of Science Europe) has indicated that this target is based on the premise that publication paywalls (such as subscription-based journals) are “withholding a substantial amount of research results from a large fraction of the scientific community and from society as a whole” (Schiltz 2018, p. 1). In militant mood, Schiltz captured the essence of the Plan S target: “the subscription-based model of scientific publishing, including its so-called “hybrid” variants, should therefore be terminated” (p. 1). In this regard, he defined ‘science’ to include the humanities.

This brief historical overview of the progression towards a vision of knowledge as freely open and available to all, while laudable, masks a number of tensions that are characteristic of neoliberal knowledge economies. The scholar is paid a university salary in part to conduct research, and thus writes for free (Huws 2014). Tempering this perspective, Scheufen (2015) suggested that academics are incentivised by peer recognition, potential career rewards, and the satisfaction of writing for publication. Arguing from the underlying assumption that academic publication is a competitive zero-sum game with winners and losers, Scheufen (2015) offers a number of economic propositions to challenge the view that OA is altogether positive in its effects. Amongst his key propositions is the view that there is a hierarchy of researchers and scholars, and that typically, ‘highly talented’ scholars will be employed by leading universities, which will be more likely to pay the APC, unlike the ‘mediocre’ universities where ‘less talented’ researchers are more likely to be employed. This may reduce the incentive for less talented researchers to publish OA, thus widening the gap between ‘talented’ and ‘less talented’ scholars, as those who publish OA will be more visible to a wider readership. Scheufen (2015) argues that closed access (CA) minimises the asymmetries of talent among scholars.

The early trends and idealism associated with OA and the shift to digital publishing disguise other related tensions (Peters et al. 2016). Publishers, which have traditionally amassed significant profits on the basis of essentially free academic services rendered by scholars, have not passed on to libraries the savings associated with the shift from paper to online publishing, specifically in relation to type-setting and print production, leading Larivière, Haustein, and Mongeon to ask, in exasperation, what “in the electronic world…do we need publishers for? What is it that they provide that is so essential to the scientific community that we collectively agree to devote an increasingly large proportion of our universities budgets to them?” (2015, p. 12). In the new OA terrain proposed by cOAlition S, publishers will be required to be fully open access, charging transparent fees only for legitimate costs associated with managing the business of reviewing, editing and disseminating a digital product (European Science Foundation 2020. “Principles and Implementation”; Schiltz 2018).

Furthermore, as the author becomes the primary customer in the OA relationship, not the organisation paying subscriptions (Scheufen 2015; Ward 2016), there are inherent risks of damaging the quality of scholarship. Under a subscription model, institutional clients may cancel if the quality of published scholarship is weak, whereas an author-pays model reduces the level of control imposed by customer or client satisfaction (Ward 2016). This view would be challenged by Schiltz (2018), however, who regards scholarly communication (‘science’, in his terms) “as an institution of organised criticism [thanks to] the test and scrutiny of other researchers” (p. 1, emphasis in the original). Therefore, controls are imposed by the scholarly community. Scheufen’s (2015) arguments relating to the incentives of scholars suggest, though, that the quality of OA is potentially diminished as the APC model keeps some scholars from publishing OA, allowing those who can pay to expend less effort to be published.

Scheufen notes also: “While OA lowers the access barriers for researchers of countries who have been hardly able to subscribe to a single journal in the past, it necessarily creates a participation constraint as it sets a price for participation in the publishing game” (2015, p. 100). Thus, emerging OA initiatives can deepen asymmetries between industrialised countries and developing countries. This tension is characteristic of the tensions between the competing epistemologies and ontologies underpinning closed vs open access (Peters et al. 2016). The spirit of OA, at least as expressed by the three founding statements mentioned earlier, is to bring knowledge to all, with the only barrier being access to an Internet-enabled device. Yet, the large publishing houses have been able to reposition themselves in response to the emergent OA reality, so that they maintain much control over OA, charging APCs in excess of €3000 per article. Thus, it is apparent that the emergence of digital publishing, while technically allowing a free flow of information, has nevertheless been captured by the captains of the digital economy, manifested in the paywalls erected by large publishing houses to keep results of research cloistered. This situation is precisely the target of the cOAlition S funding consortium. Its intention of making the published work of scholars freely available may, however, effectively bar scholars working in the humanities and social sciences, which typically have low access to funding opportunities, or those in low- and middle-income states unable to provide generous grant funding. However this dilemma may be solved in the future,4 there is a potential for interstitial challenge to this situation, namely scholars exploring collaborative writing projects where the APC charges can be shared (Peters et al. 2016). In this way, multiple aims are assured—a democratic approach to research, and the opportunity to enable authors to bring their research to those who might not ordinarily have the resources to access this knowledge.

What has been presented here is not intended as a binary, or some kind of ‘break’, placing printed media and digitised media in opposition, but rather as suggestive of the ‘postdigital’ reality indicated by Jandrić et al. (2018). In this reality is evidence of a messy, networked world that blends digital and analogue. As scholarly writers, academics must constantly traverse the terrain of both print media and digital media (for instance, books are published in both hardback and as e-books). Digitisation (in various forms and formats) and OA are further developments in the technology of academic labour—they do not constitute an either/or binary, and the rules of scholarship still apply, regardless of the medium (Peters et al. 2016). The focus here has been to develop an understanding of the OA terrain, in terms of its history, and some of the arguments associated with OA and digital publishing. Embracing OA allows authors to take up its potential for a radical democratisation of both the research process (by encouraging collaboration) and the research product (by making it freely available to those who may not otherwise have the resources to access the knowledge product). Perturbing this idealistic potential of OA, however, is the advent of the ‘predatory journal’.

Predatory Publishing

New and emerging scholarly authors, especially those anxious to have their work published, may find themselves lured by the practices of predatory journals and publishers. In the following discussion, I clarify some salient features and practices authors are likely to encounter, making these easier to identify. Suggestions for selecting reputable journals and publishers are offered to support authors.

The advent of OA and the author-pays model has facilitated the growth of predatory publishing (Beall 2017b; Ward 2016). These authors have suggested that the OA business model leads to conflicts of interest for publishers, such as trading off standards of peer review against the desire to provide efficient editorial decisions and services to paying authors. In the predatory marketplace, these (mal)practices are egregious (Beall 2015, 2017b; Bohannon 2013). Beall (2017b) has also attributed the rise of predators to a confluence of other factors: the proliferation of higher education institutes and of postgraduate qualifications, the performative element of academics’ work, which sees their value measured in part by published outputs, and the desperation of those academics whose work is regularly rejected by high-quality journals.

Predatory journals generate profit by publishing the work of authors willing to ‘pay-to-publish’. Among their practices is the generation of enormous lists of academics who are sent spam emails inviting them to submit their work for publication (Brezgov 2019). These emails may reflect some of the criteria mentioned below, including lists of ‘international journals’ with titles indicating fields of no relevance whatsoever to the email recipient. Peer review, to the extent that it may exist, is light, with the promise of tight turnarounds, sometimes within a week (see, for example, http://questjournals.org/cfp.html). Though publication comes at a cost to the author, copyright may be retained by the publisher or journal, making later re-publication of the same article in a different journal possible (Brezgov 2019). Negative outcomes of predatory practices include the publication of work by authors unable to be published in reputable journals, the promotion of dubious research (Beall 2017b; Bohannon 2013), and fake science or ‘advocacy research’ (Beall 2017b).

The techniques of predatory publishers and predatory journals are similar. Beall, a university librarian, composed lists of predatory publishers and predatory journals (2015, 2017a) and defined their characteristics (2015), though, as he has argued somewhat polemically (2017b), his efforts led to the systematic harassment of both himself and his employers. His lists show an astonishing growth in the number of both predatory publishers and stand-alone journals (2017a), now numbering in the thousands. The additions to Beall’s lists have outstripped the growth of the respectable Directory of Open Access Journals (DOAJ)5 (Bohannon 2013), established in 2003 to index high-quality, open access, peer-reviewed journals.

Typical criteria that may suggest predatory motives or conduct include all or some of these:
  • Minimal academic information regarding the editor, editorial staff, and/or review board members (such as their institutional affiliation);

  • No editorial policies in evidence (such as review policies, editorial policies and copyright policies);

  • Hidden or unclear information regarding author fees;

  • Provision of optional ‘fast-track’ fee-based services guaranteeing expedited peer review, suggesting assured publication with little or no vetting;

  • Publishers laying false claim to their content being indexed in legitimate abstracting and indexing services;

  • The publisher listing inadequate contact information, including misrepresenting the publisher head office location;

  • Poorly maintained websites that may include dead links, obvious misspellings and grammatical errors. (Beall 2015. See some examples of the above at http://questjournals.org/index.html).

Identifying predatory journals and publishers is not a simple task, nor is it possible (or advisable) to assume a binary of reputable publishing houses on one hand and disreputable predators on the other. For instance, Bohannon discovered in a 2013 ‘sting’ that some titles on Beall’s list appeared in DOAJ (Bohannon 2013). The same operation (a bogus ‘junk science’ manuscript submitted in 304 versions to journals across the world) revealed that even journals hosted by reputable publishing houses may publish contributions of dubious quality—although this may be a consequence of shoddy peer review rather than predation per se, hence not directly relevant to this discussion. Of greater relevance is a different case, that of the Multidisciplinary Digital Publishing Institute (MDPI—https://www.mdpi.com). MDPI is one of the publishers that publicly challenged Beall for listing it as a predator. Richard Poynder, an independent journalist with a special interest in the development of OA, summed up the case for and against MDPI (2015), and while there appears to be no definitive evidence that MDPI is a predatory publisher, questions continue to be raised (Brezgov 2019), specifically regarding its editorial and review practices. It is evident, then, that it behoves authors to be well informed about journals or publishers being considered for selection.

Considering the situation just described, how does a new or inexperienced scholar set about selecting a journal? First, by recognising that the digital (capitalist) economy has permitted the development of a publishing oligarchy that sees the field dominated by a handful of publishing houses—amply demonstrated by Table 11.1, which shows the top ten publishers for New Zealand corresponding authors by volume. Clearly, the field is dominated (at least in terms of where New Zealand authors choose to publish) by just four publishers. While this lack of choice and variety may be a negative feature, this evidence also indicates to prospective and more experienced authors the range of ‘reputable’ publishers (notwithstanding the inclusion in Table 11.1 of MDPI, noted above as one flagged by Beall [2015]). A second strategy is to reflect on the criteria established by the European Reference Index for the Humanities (ERIH) (2019). While not specifically aimed at challenging or discrediting the predatory market, the ERIH criteria must be met by credible, quality European journals in the humanities seeking to be listed by ERIH. As such, these criteria can serve as a meaningful standard by which new (and experienced) scholars may judge journals with which they are unfamiliar.
Table 11.1 Top 10 publishers

Publisher                                                   No. of journals   % of all journals
Elsevier BV                                                 582               20.38
Wiley-Blackwell                                             375               13.13
Informa UK Ltd./T&F                                         366               12.82
Springer Nature                                             357               12.50
SAGE Publications                                           159                5.57
Emerald                                                      87                3.05
Oxford University Press                                      79                2.77
Cambridge University Press                                   69                2.42
MDPI                                                         47                1.65
IEEE (Institute of Electrical and Electronics Engineers)    45                1.58

Source: CONZUL 2019, p. 28

Chief among these criteria are clearly identifiable, transparent and explicit procedures for external peer review. Here it may be helpful to flesh out the point of peer review, as underlying much of the disquiet toward the predatory industry is the absence, or near absence, of rigorous peer review (Bohannon 2013; Ward 2016). Somewhat clouded and mysterious to outsiders, peer review nevertheless has a central function in ensuring the academic rigour and quality of submitted manuscripts (Jackson et al. 2018; Ward 2016). Rigorous peer review provides, as it were, a ‘gold standard’ in academic publishing (Enslin and Hedge 2018). While it may sometimes lead to results for genuine authors that can be difficult to fathom, it can provide an opportunity for constructive feedback (Enslin and Hedge 2018), especially when conducted in a pedagogical and developmental spirit (Jackson et al. 2018).

The ERIH (2019) list provides further clues to look for, including evidence of an academic editorial board, with members affiliated to reputable, high-quality universities or other independent research organisations. Published articles should provide information on author affiliations and addresses, and no more than two thirds of the authors published in the journal should be from the same institution. The published articles should have abstracts in English and/or another international language. A final criterion is evidence of a valid ISSN code, confirmed by the international ISSN register.6 Although the ERIH list provides several clues to look for when considering a journal, there are other options, one of which is knowledge of journal metrics; these in turn relate to an author’s own metrics, and authors will therefore find it beneficial to know more about them.

Journal Metrics

Journal metrics (or, more accurately, ‘bibliometrics’) is a statistical discipline prominent in library and information science, and the data it generates can be used by scholarly authors to identify suitable journals. Eugene Garfield, a linguist and library scientist, is credited with originating the concept of measuring journal impact by tracking citations (Pendlebury and Adams 2012). Garfield sought a system to ensure researchers would not cite poor data: to “eliminate the uncritical citation of fraudulent, incomplete, or obsolete data by making it possible for the conscientious scholar to be aware of criticisms of earlier papers” (Garfield 1955, p. 108). Further usefulness of a citation index, he imagined, would include the ability to support researchers following unique lines of thought not anticipated by compilers of subject indexes or bibliographies, for example.

Garfield’s early efforts developed into a substantial industry that has shifted the focus onto the relative performance of journals. His Institute for Scientific Information (ISI) now forms part of the Web of Science Group (2020), a Clarivate Analytics company. The journal analytics industry is populated by several powerful databases, such as Web of Science and Scopus, an Elsevier database, claimed as the world’s largest abstract and citation database (Elsevier 2020. “CiteScore metrics”). Moed et al. (2012), while acknowledging the debt owed to Garfield for his insights into, and development of, journal statistics, note, however, that measures such as impact factors have become detached from their original indexing purpose, increasingly becoming a proxy for quality. Before further critical consideration, it will be helpful to characterise the features of some of the more well-known metrics, and to understand what it is they measure, and how these can support authors.

Journal Impact Factor

The Journal Impact Factor (JIF) is published each year by Clarivate Analytics and is based on the Web of Science database. It is a measure of the number of times an average paper in a journal is cited during the preceding two years. Included are peer-reviewed articles, reviews, and conference proceedings. Not included are editorials or letters-to-the-editor. This is a two-year IF, though there are other measures that are based on three and five years. Longer audit periods provide a longer ‘window’ for individual articles to be cited. In the worked example (Table 11.2), the ‘IF’ of 2.8 can be considered against other, similar journals—the higher the impact, the greater the standing of the journal. Such factors make it possible to rank journals, providing an additional measure of ‘impact’. Scholarly authors should note, however, that these rankings only make sense within a particular field or discipline. For instance, science-oriented articles attract far greater citation rates than those in the humanities, thus the IF of science journals well exceeds those of the humanities.
Table 11.2 Illustration of the calculation of a two-year journal impact factor (JIF)

Two-year journal impact factor: The Journal of Researcher Bliss (JRB)
1. Assume 2018 is the year for which JRB is measured
2. A = total of all citations received during 2018 by the articles published by JRB in 2016 and 2017
3. B = total of ‘citable items’ published by JRB in 2016 and 2017
4. Calculate the 2018 impact factor by dividing A by B

A = 112 (total number of citations received in 2018)
B = 40 (total number of citable items published over the two-year period)
IF = 112/40 = 2.8 (the average number of citations received by the articles published in the preceding two years)
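
As a minimal sketch, the calculation in Table 11.2 can be expressed in a few lines of Python (the journal and the figures are the hypothetical ones from the worked example above):

```python
def journal_impact_factor(citations: int, citable_items: int) -> float:
    """Two-year JIF: citations received in the measurement year by items
    published in the two preceding years, divided by the number of
    citable items published in those two years."""
    return citations / citable_items

# Worked example from Table 11.2: 112 citations in 2018 to the
# 40 citable items JRB published in 2016 and 2017.
print(journal_impact_factor(112, 40))  # 2.8
```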

CiteScore Metrics

An Elsevier product, CiteScore metrics operates in a similar manner to the formula above, but is calculated over three years, and “includes citations from articles, reviews, letters, notes, editorials, conference papers, errata and short surveys” (Scopus 2017, 2:22). The same types of items are counted among the citable items indexed by Scopus, thus “acknowledging every item’s potential to cite and to be cited” (2017, 2:47). The CiteScore metric provides the author with an indication of where it may be best to publish, though authors should realise that being published in quartile one journals can present a greater challenge than publishing in a lower quartile journal.

Scimago Journal Rank (SJR)

This metric tries to overcome the weakness of the traditional IF, which weights all citations equally. A regular IF is a measure of a journal’s popularity, rather than its prestige. SJR recognises that some journals enjoy greater prestige than others, so a high SJR indicates higher prestige. Citations originating from journals with high SJR carry more weight than citations from lower-ranked journals, which carry less prestige. The underpinning calculation is a three-year IF; however, unlike CiteScore, the ‘citable documents’ include only peer-reviewed items, that is, articles, reviews and conference papers, and the citations must come from the same kinds of documents (Elsevier Journals 2010). In the calculation of SJR, citations are weighted as more or less than 1.00, based on the importance of the citing journal. The calculation takes into account, however, the difference between high-cite fields (like the sciences) and low-cite fields (like the humanities), so that the difference between fields is flattened.
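
The weighting idea at the heart of SJR can be illustrated with a deliberately simplified sketch; the journal names and prestige weights below are invented purely for illustration, and the actual SJR is computed iteratively over the whole citation network rather than from a fixed table of weights:

```python
def weighted_citations(citing_journals: list[str],
                       prestige: dict[str, float]) -> float:
    """Weight each citation by the prestige of the journal it comes
    from, instead of counting every citation as 1.00."""
    return sum(prestige.get(journal, 1.0) for journal in citing_journals)

# Hypothetical journals: two citations from a prestigious outlet
# outweigh three citations from a low-prestige one.
prestige = {"Journal of High Prestige": 2.5, "Journal of Low Prestige": 0.5}
print(weighted_citations(["Journal of High Prestige"] * 2, prestige))  # 5.0
print(weighted_citations(["Journal of Low Prestige"] * 3, prestige))   # 1.5
```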

Source Normalized Impact Per Paper (SNIP)

Like SJR, the underpinning calculation is a three-year IF. This metric considers the citing potential of a journal in relation to its impact. High-impact journals (such as those in the sciences) are characterised by their high citation potential. Conversely, lower-impact journals (such as those in the humanities) are characterised by lower citation potential. The citation potential is determined by calculating the average number of listed references made to any peer-reviewed documents (in any journals) that are one to three years old by the articles citing one to three year old articles in the target journal (Elsevier Journals 2010, 4:53). To calculate SNIP, the impact is divided by the citing potential. This has the effect of lowering the value of high impact/high citing potential journals or raising the value of low impact/low citing potential journals. This ‘normalising’ process corrects for differences in citation behaviour between fields (2010, 5:27). Therefore, SNIP values provide authors with an additional tool when assessing journals, as differences will relate to journal quality, not citation patterns.
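
Again as a minimal sketch, with figures invented purely for illustration, the division step just described shows how the normalisation flattens field differences:

```python
def snip(impact_per_paper: float, citation_potential: float) -> float:
    """SNIP: raw impact per paper divided by the field's citation
    potential, so journals in low-citing fields are not penalised."""
    return impact_per_paper / citation_potential

# Invented figures: a humanities journal with low raw impact and a
# science journal with high raw impact receive the same SNIP once
# their fields' citation potentials are taken into account.
print(snip(1.0, 0.5))  # humanities journal: 2.0
print(snip(4.0, 2.0))  # science journal:    2.0
```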

Using These Measures

Metrics and measures can easily assume greater importance than they ought in the context of neoliberal performativity and instrumentality. Considering Garfield’s original intentions, clearly measures such as IF have shifted the focus from indexing to evaluation (Moed et al. 2012), and any over-emphasis on the significance of these tools, and their inappropriate use as proxies of researcher quality, can create an ‘unhealthy fascination’ (Pendlebury and Adams 2012). It may be suggested that the system of using these measures to rank journals compounds the competitive behaviour already present (Elsevier 2020). Ironically, the preceding three citations are sources with an interest in the performance of publishing houses and their journals, so their implied critique may be considered disingenuous. The various complex ‘impact factor’ formulae are largely the product of the fascination publishers have with journal metrics—after all, these metrics are critical to the success of the marketing campaigns of the publishers, some of which make enormous profits. Emphasising this point, Eddy (2019) cites the profits of “Elsevier, Springer Nature, and Wiley at US$3.2, US$1.9, and US$1.7 billion in revenue in 2017” (p. 462). Publishers therefore have a vested interest in ensuring their journals are made visible to academic authors, and journal metrics are central to this aim.

These measures are nonetheless currently a reality for academic authors, and what the preceding descriptions indicate is that their diversity and range make a single indicator impossible. Prospective authors should therefore make use of multiple sources of evidence, both qualitative (such as peer review reports) and quantitative (such as the examples outlined above) (Elsevier 2020; Moed et al. 2012; Pendlebury and Adams 2012). Caution is still advised, as the predatory journal and publishing practices referred to earlier include fraudulent or manipulative use of journal metrics (Bohannon 2013). Making use, however, of reputable sources of evidence, such as the Scopus Preview7 and Scimago Journal and Country Rank8 can provide authors with interesting and potentially valuable information regarding the standing of journals in their field. There are also, as expected, a range of tools authors can use to track their personal progress—namely, author metrics.

Author Metrics

Two relevant considerations are discussed here: the h-index and research impact. Like journals, authors can gauge their influence in a particular discipline or sub-field by using available tools that measure citations of their own publications. These citations reflect the uptake of a scholar’s work by others in the field, indicating influence or impact.

The H-Index: Its Value and Its Weaknesses

The h-index was devised by Jorge Hirsch, a Californian physicist, who sought to provide a metric displaying a researcher’s consistent influence and productivity over a period of time (Clarivate Analytics 2019; Spicer 2015). The index indicates the number of an author’s articles (h) that have been cited by other authors at least the same number (h) of times. In other words, an h-index of 15 means an author has at least 15 articles that have each been cited at least 15 times. As a productivity measure, the h-index recognises a researcher’s influence across a range of published outputs, rather than just one or two highly cited works. An author’s Google Scholar profile will report a higher score than other databases, as its algorithm draws on a wide range of published material, including grey literature and university thesis repositories. Scopus and Web of Science (Publons) will reflect a lower score, as their algorithms draw on a smaller database. The index ranks an author’s cited publications in descending order of citations over the life of each publication, moving down the list to the last position at which the number of citations received by an item is equal to or greater than its numbered position on the list. That position is the author’s h-index. In Table 11.3, the author’s h-index is 5: that publication, and the ones above it, have all been cited at least five times.
Table 11.3 Illustration of the h-index

Publication #   Citation #
1               105
2                73
3                47
4                11
5                 9
6                 3
7                 1
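
The ranking procedure just described is easily expressed in code. The following is a minimal sketch in Python, using the citation counts from Table 11.3:

```python
def h_index(citation_counts: list[int]) -> int:
    """Largest h such that the author has h publications each cited
    at least h times."""
    ranked = sorted(citation_counts, reverse=True)  # descending, as in Table 11.3
    h = 0
    for position, citations in enumerate(ranked, start=1):
        if citations >= position:
            h = position  # this item still 'covers' its position
        else:
            break  # items below the cut-off cannot raise the index
    return h

# Citation counts from Table 11.3: the fifth publication has 9
# citations (>= 5) but the sixth has only 3 (< 6), so h = 5.
print(h_index([105, 73, 47, 11, 9, 3, 1]))  # 5
```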

The h-index has several disadvantages and limitations. Those working in the social sciences or humanities will tend to have lower h-scores, as these fields achieve lower citation rates than the sciences. The h-index does not differentiate for the length of a researcher’s career—a more experienced author will potentially have a higher h-index than a less experienced researcher. On the other hand, shifts in the h-index can be disproportionate. At the lower end of the scale, shifts can be more easily made than at the higher end. To illustrate: the next publication of an author with an index of 2 has to be cited three times, and two others cited at least three times, in order to shift to an h-index of 3. On the other hand, an author with an h-index of 20 has a much tougher hill to climb: the index will climb to 21 only once an article and 20 others are all cited at least 21 times (Elsevier 2020). Raising one’s index rating becomes increasingly challenging, as some publications below the index cut-off point may simply never be cited often enough to reach and exceed the cut-off. As Table 11.3 illustrates, the h-index also ignores the citation effects of influential publications that may report significant discoveries or display a career breakthrough (as illustrated by the top three publications in Table 11.3); nevertheless, these publications may have opened insights previously closed to other scholars in the same discipline or sub-field (Kreiner 2016).

Conversely, the h-index can encourage ‘gaming’, whereby citation rates are artificially boosted by excessive self-citation, and by the behaviour of ‘citation cliques’, groups of authors who mutually cite each other. Indeed, some years ago, a ‘cabal of editors’ (Bishop 2015) came under attack. The editors making up this ‘cabal’ were accused of using their position as editors of a group of journals to hasten the publication in the journals they edited of a large number of articles they had authored and co-authored, and to have used self-citation excessively. Notwithstanding some of the challenges this blogpost (Bishop 2015) received, it drew attention to dubious tactics that not only undermine the peer review system, but also suggest ways the h-index can be manipulated. Such manipulation arises, arguably, because of the manifest neoliberal obsession with performative measures. Further, the numbers can, and do, mask content (Spicer 2015) and under-performance (Kreiner 2016).

Despite these weaknesses, however, h-indexes are used widely as evidence in promotion or funding contests. Seen in isolation, and in the hands of human resource managers or committee members who may not fully understand these metrics (such as the differential across fields and even sub-fields), the h-index can fail to do justice to applicants, or artificially boost their bids. More recently, however, there have been moves to find alternatives to understanding the influence or ‘impact’ researchers can have on their fields and ‘industries’.

Research Impact

In February 2019, SAGE gathered a wide range of stakeholders at an ‘impact metrics workshop’ to discuss the problem of traditional metrics being used as an indicator of research and researcher quality in the social sciences. Underpinning this initiative was a concern that ‘impact’ should extend beyond having influence on the research of others, to having some kind of ‘real-world’ impact or influence.

For those conducting research, being able to measure the wider impact of their work would allow them to tell a more rounded story of their scholarship that goes beyond the number of articles published, or citation counts to those articles, or citation counts to the journals containing the articles. (SAGE Publishing 2019, p. 4)

The question of ‘real world’ influence or impact has been gaining momentum amongst funders of research since 2000 (McCann et al. 2015). These funders include governments; accordingly, ‘research impact’ has come to play an increasingly influential role in research audits such as the Research Excellence Framework (REF) in the United Kingdom, where it was first introduced in 2014 (Terämä et al. 2016). In that round, ‘reach and significance of impact’ made up 20% of the overall individual assessment of research (Terämä et al. 2016). In Australia, the ‘Engagement and Impact (EI)’ assessment was introduced to understand the translation of research into wider benefits for society and the economy (Australian Research Council 2019), and is completed in tandem with the Excellence in Research for Australia (ERA) audit. The Terms of Reference of the 2019 review of New Zealand’s Performance Based Research Fund (PBRF) indicate a desire to better capture the contribution research makes to society and the environment (Ministry of Education 2020). These Terms of Reference acknowledged such assessment to be challenging.

How are research and researcher impact to be judged other than by using quantitative measures alone? A response is evident in the emerging policies already referred to, and provides guidance to emerging scholars at the start of their careers. In the Australian example, both engagement and impact are considered, and these are evidenced through narratives, and some numeric measures. Although this EI exercise in Australia is aimed at the institutional level, the notion of individual researchers articulating the way their research is delivered to the wider community, beyond conferences and publications, bears thinking about. ‘Uptake and Impact’ was recently added to the New Zealand PBRF calculation of ‘Research Contribution’ (which constitutes 30% of the total individual score). This additional indicator enables researchers to provide evidence (with the only restriction being an embedded character count limit) on the benefits of their research to society, economy and environment (The Tertiary Education Commission 2016). In the case of the REF2014, ‘impact case studies’ were required (Terämä et al. 2016). Notably, in their study of the REF2014, Terämä et al. (2016) did not find any particular favouring of one kind of impact over another.

Thus, while metrics retain their influence as a measure of the quality of academics’ research work, the ‘more rounded story’ being called for at the SAGE event mentioned earlier is finding its way into the accountability policies of at least three national examples cited here. Nonetheless, precisely what ‘impact’ means, and how it can be applied evenly across disparate fields of research, especially when the social sciences, humanities and the arts are contrasted with engineering or technology for instance, remains a contested matter. The definition and application of research impact may depend on the interpretation of the institutions where researchers are located, further distorting the picture (Terämä et al. 2016). Clearly, academics have to find ways of progressing through these imperfections. This digression into a consideration of some counters to the mania of measurement highlights to new and emerging scholars, just commencing the journey of documenting their research, that multiple options exist not only for their research to be evaluated, but for them to be self-consciously aware of the influence of their work. In the final part of this chapter, I will turn to consider some of the digital steps new (and existing) academic authors can take to establish a digital footprint in the less-than-perfect world in which the twenty-first-century academy finds itself.

Establishing and Developing a Digital Footprint

This final stanza of the chapter presents and discusses a small selection of tools and networking sites that are helpful to scholars in establishing a digital footprint. They provide additional options for tracking the reach, impact and dissemination of scholarly work.

ORCID and Kudos

ORCID: Generating a unique persistent digital identifier to distinguish registered users from every other researcher (ORCID, n.d.), an ORCID ensures that the published work of registered users is recognised as uniquely theirs. This permanent identifier can be displayed on articles, but also on CVs and on various profile pages related to the digital footprints of authors. Indeed, many publishers now require authors to have an ORCID, and this digital tool enjoys widespread institutional support, as ORCID is supported by over 1100 member organisations (comprising, for instance, universities, laboratories, private research bodies and publishers). Registration as an ORCID user can facilitate networking and contact among researchers internationally. It currently has over 8 million users, and access to ORCID is free of charge.

Kudos: This is a cloud-based research engagement and impact tool supporting researchers by ensuring their publications “get found, read and cited in a world of information overload” (Kudos Innovations Ltd. 2020. “About Us”). Kudos has over 330,000 registered researchers, and “works with publishers, universities, corporations, funders, metrics-providers and other intermediaries to help aggregate efforts around researchers to build impact for their work” (2020. “About us”).

The ‘Kudos Hub’ locates discovered publications of registered users (the process of creating the list is supported by linking to one’s ORCID account). Users are able to embed links to media such as Twitter, Facebook, Academia and ResearchGate, as well as to personal webpages or blog sites. Kudos provides a daily update of traffic directed to the Kudos-linked articles through those weblinks. It also provides a link to the Altmetric website (https://www.altmetric.com/), providing more detailed information regarding sources such as Twitter mentions of the publication.

Social Networking Sites

Twitter: Users can promote microblog entries (‘tweets’, which are limited to 280 characters) amongst their followers, who, in turn, can ‘re-tweet’ to any of their followers (Wired Staff 2020). In its own words, “Twitter is what’s happening in the world and what people are talking about right now” (Twitter, Inc. 2020. “About”). Tweets can include images, videos and links to other websites, for example, to a user’s personal web or blog site. The advantage for authors is that they can tweet links to their articles, if these are freely available, or to the pre-publication (or ‘green’) copy on sites such as Academia and ResearchGate. At least one serious disadvantage (shared with Facebook, which also links users to other like-minded users) is that Twitter users can find themselves in an echo chamber, where a user’s followers simply confirm what the user already thinks.

Academia: This American social networking site claims over 117 million registered users (Academia 2020), and allows academics to share and track the reach of their research. Amongst its services, the site provides authors with a Green OA repository of pre-publication manuscripts, author accepted manuscripts (AAM) and open access articles,9 tools to track mentions, and networking opportunities created by following others with similar interests (or being followed by others). The reading interests of registered users are matched with manuscripts covering subjects of related interest. Increasingly, though, some of these services are located behind a paywall.

ResearchGate: A European-based competitor to Academia, this site has fewer registered users, but claims higher traffic (ResearchGate 2020). Aimed also at researchers and academics, it too provides a repository feature for manuscripts and published works, with opportunities to create networks and collaborate across projects and users. It provides the facility for researchers to ask questions and have these answered on the platform by other users. User-tailored details include an h-index and ‘RG Score’, though it is reported that this score is largely meaningless (ResearchGate 2020). Furthermore, ResearchGate has been found to contravene copyright by permitting the upload of articles published in subscription-based journals (ResearchGate 2020).

Some Critical Considerations

Tracking and providing evidence of researcher impact may be seen as an important reason why scholars might engage with forms of digital, online media. For example, impact evidence can be gleaned from altmetrics, which serve as an indicator of broader communities engaging with research (Green 2019; Sugimoto et al. 2017). Green, an OECD researcher, conducted a case study in which he created a vigorous campaign promoting two of his articles, primarily using Twitter. Green used Altmetric (https://www.altmetric.com/) as an evidential tool to track the uptake of the articles. He achieved enormous interest in a variety of ways, including having a published article ready for inserting into discussions at relevant conferences by, for example, tweeting key messages from his article alongside the conference hashtag (Green 2019). His case study also demonstrated that there is considerable labour associated with establishing, developing and maintaining such a digital profile—indeed, he premised his case study on the idea that as much time is spent promoting an article as researching and writing it.

Many academics will, however, be uncomfortable with what Green suggests. Recent malicious use of social media to interfere with democratic electoral processes will cause some to question its value (Hemsley et al. 2018). The way Green (2019) has proposed that academics use social media to promote their published research, may, arguably, suggest nothing less than bald self-promotion, and self-styling in the public realm, an idea reprehensible to many academics. On the other hand, in the performative context of their work, academics may see value in engaging with social media. Ultimately, despite some studies suggesting scholars are neither consistently in favour nor against these media (Sugimoto et al. 2017), there will be those who agree that social media can promote positive messaging, develop community, and engage users intellectually (Hemsley et al. 2018). Therefore, as suggested here, many scholars will find some use value in a suite of carefully selected digital tools and social networking sites.

Conclusion

To engage, as I have just suggested, with the business of creating and maintaining a digital footprint is labour that may be well justified and can manifest in positive author recognition. Seemingly dry bibliometric tools are useful not only to the scholarly author, but also to publishers, for whom these tools have acquired a marketing allure and status, signalling a journal’s success, reach or prestige. Concerningly, however, these measures are not only a marketing device enabling corporate publishers to maintain their hold on viable publication channels; they may also have become a proxy for quality. In some national contexts, for example, researchers are enjoined to publish only in ‘quartile 1’ or ‘quartile 2’ journals. Even if such strictures do not apply, authors are nonetheless well advised to understand at least some of the more notable metrics when deciding where to publish, although these metrics must be seen in context. Astute authors will come to recognise that multiple metrics tell a more complete story than just one will.

For the foreseeable future, most authors will continue to have their published work placed behind paywalls, but legitimate opportunities to engage with Open Access (OA) offer authors the potential of wider readership and uptake of their work. For authors unable to benefit from institutional support for Article Processing Charges (APCs), one option, as suggested in this chapter, is to engage in collaborative work where a team can share the APC cost. That strategy offers authors the benefits of OA and enhances their scholarship by virtue of collaborative work.

For new and emerging scholars particularly, and those unfamiliar with the world of digital publishing and OA, the proliferation of predatory journals and publishers can make the process of selecting appropriate dissemination avenues challenging. Despite the possibility of publishing houses being caught up in dubious practices as suggested earlier, and despite their monopolisation of scholarly work, the reputable standing of major publishing houses is nevertheless a new author’s best defence against blatant predation.

Scholarly authors thus find themselves in an uncomfortable relationship with publishers, but moreover, the first two decades of the twenty-first century have demonstrated that they also find themselves caught up in the steady shift from print to digital media. In this chapter, I have argued from the premise that, despite the potentially negative features of the online and digital world, academics ought to play at least some role in that world. What could now be at stake for scholars is no longer, ‘publish or perish’, but, ‘be visible or disappear’.