© The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2022
L. Supik et al. (eds.), Gender, Race and Inclusive Citizenship
https://doi.org/10.1007/978-3-658-36391-8_18

Cybercolonialism and Citizenship

Pinar Tuzcu (1), Malte Kleinschmidt (3) and Radhika Natarajan (2)
(1) Universität Kassel, Kassel, Germany
(2) Universität Bielefeld, Bielefeld, Germany
(3) Leibniz Universität Hannover, Hannover, Germany

Pinar Tuzcu (corresponding author)
Malte Kleinschmidt
Radhika Natarajan

Abstract

While digitalisation was seen as a great opportunity for the emancipation of subjects in the 1990s and early 2000s (Block and Dickel 2020), it has become increasingly apparent in recent years that even supposedly neutral algorithms perpetuate and reinforce social inequalities, e.g. with regard to race and gender. The continuity of colonial discourse has thus moved to the centre of the debate on these processes. In conversation with Radhika Natarajan and Malte Kleinschmidt, Pinar Tuzcu explains how colonialism is reproduced in the digital world and discusses political strategies to address this issue.

Keywords:
Cybercolonialism · Cybalternity · Algorithmic othering · Politics of prediction in digital times · Data science
Pinar Tuzcu

(PhD) is a post-doctoral researcher and one of the main applicants and the project coordinator of the Volkswagen Foundation-funded project “Re:Coding Algorithmic Culture” in the Department of Sociology of Diversity, University of Kassel. Her research and teaching interests include contemporary queer feminist theories, post-migration studies, algorithmic power, anti-colonial feminism and critical research methodologies. She recently published “Decoding the Cybaltern: Cybercolonialism and Postcolonial Intellectuals in the Digital Age” (2021) in the journal Postcolonial Studies.

 
Malte Kleinschmidt,

Dr., researches and teaches in the domain of Civic Education at Leibniz University Hannover. His research interests are decoloniality, globalisation, racism, post-migrant societies, radical democracy and citizenship studies, particularly in relation to civic education. His major publications are the books Dekoloniale politische Bildung. Eine empirische Untersuchung von Lernendenvorstellungen zum postkolonialen Erbe [Decolonial Civic Education: An Empirical Study of Learners’ Concepts of the Postcolonial Heritage] (2021) and Eurozentrismus in der Philosophie [Eurocentrism in Philosophy] (2013).

 
Radhika Natarajan,

Dr. phil., read German literature and linguistics at the University of Mumbai and taught for more than a decade at Max Mueller Bhavan Bombay before pursuing her research on refugee women and their gendered negotiation of everyday life and the German language at the University of Hannover. Currently, she is a post-doctoral fellow at the Faculty of Education Science, University of Bielefeld, and a visiting lecturer at the universities of Kiel and Wuppertal. Her teaching and research focus on German as an additional language, multilingualism in the contexts of migration, and diversity- and gender-sensitive educational approaches for children and adults. She published her doctoral thesis in German under the title Sprachliche Wirklichkeiten der Migration [Linguistic Realities of Migration] (2019), and has edited an interdisciplinary volume in German, Sprache, Flucht, Migration [Language, Exile, Migration] (2019), and a bilingual anthology, Sprache – Bildung – Geschlecht [Language – Education – Gender] (2021).

 

1 Coloniality and Digitalisation

Malte Kleinschmidt: You use the concept of coloniality in your article. An essential element of your analysis is your exploration of the biases inscribed into the algorithms of the digital world. You show, impressively, how private, monopolistic companies control large parts of the structuring of information and thereby marginalise subaltern knowledge and reproduce colonial and racist categorisations. To what extent is this, in your opinion, an aspect of othering and colonialism? Which levels of coloniality are crucial in your work on digitalisation?

Pinar Tuzcu: I use the concept of coloniality to analyse how the employment of digital technologies in the process of knowledge production generates new modes of colonial relations. My aim is to show these colonial relations as the continuation of neo-colonial and historical colonial settings today. I do not claim that digital technologies are opening a completely new era in this regard; rather, I focus on understanding the continuities of colonialism and the ways they work in the digital age. More specifically, I look at how data companies take part in this continuation, i.e. how digital data is used by those companies to shape our modes of knowing, which reproduces and also generates inequalities and new modes of exploitation, particularly with regard to asymmetrical geopolitical settings. So, how do they produce ‘knowledge’ that perpetuates, for instance, ‘othering’ practices from digitally collected pieces of information? And, eventually, how do these practices help companies and also governments to exploit others? In what ways do we know about ourselves right now, and how do those companies actually shape the way we know about others in the digital age? In my view, this is a new mode of knowing that emerges from our digitally induced consciousness/being, a field which I, for now, tentatively call technonto-epistemologies. So, in order to map out this field in the paper, I looked at the encoded links between the tech experts, the discursive and digital applications they produce and their targeted users. I examine and try to decode these links to understand what kind of subjectivities, actors and agents emerge from such a field.

One of these actors is, for instance, the ‘data scientist’. Now, data science is not something completely new, and there have always been data scientists, even before the Internet. With the Internet, however, they have gained another and much more vital role. Today, we hear their name everywhere! At one point, after hearing it over and over again, I just wanted to understand what role data scientists claim for themselves in the digital age when they call themselves scientists: how they present themselves in the scientific world, what sort of methodologies and tools they use for their scientific inquiries, and what their relationship is to their research subjects in a hyper-datafied society. I’m basically asking: Whose science is this?

So, when I found Brian Godsey’s book Think Like a Data Scientist: Tackle the Data Science Process Step-by-Step (2017), I thought, this is exactly what I want to understand: how data scientists think. I wanted to see how they analyse the data. Mine was just a methodological question! Because, well, you know, I am a sociologist, and I know and believe that methodology matters in the process of knowledge production when we do science, because methodologies shape knowledge. So, I wanted to see what sort of methodological approach they employ while doing their science. At one point, while reading the book, I gradually became alarmed by the language Godsey used, because his language and choice of words were strikingly reminiscent of colonialism. And reading, to put it bluntly, a white man calling for the exploration of these virtual territories, which he called the new world, gave me chills and a flashback. What is interesting, however, is that his use of the phrase “the new world” does not just refer to the Internet but also, and specifically, to the data-lands. He compares these ‘lands’ to the jungles of untrodden territories, calls data scientists the explorers of these lands and describes them as “explorers much like famous early European explorers of the Americas or Pacific islands” (Godsey 2017, p. 43).

His heroic comparison of his own mission with that of the early European colonisers was actually enough to see what kind of colonial dynamics are bound up in the idea of being a data scientist. I felt I was beginning to understand where this kind of jargon comes from. Besides, as a scholar, especially one engaged in the fields of post-colonial theory and decolonial thought, these kinds of wordings trigger anti-colonial questionings in my mind. So, I couldn’t help but think to myself, “Okay, hold on. Who is the native of these worlds?” Because we know that the supposedly uninhabited territories Godsey evokes—in a move very similar to colonial discourse—were actually not uninhabited at all. People, natives, were living in those territories; they were indigenous to those lands, and those lands were not abandoned. So, it was never a story of exploration, but a story of occupation and exploitation. This parallel mindset made me ask a perhaps very metaphorical, but essential, question as a point of entry into my research: “Whose data-lands are going to be stolen and grabbed, and who is going to be exploited and misused by these potential techmasters?” So, my questions are counter-imitations of Godsey’s colonial jargon. They are an anti-colonial response intended to expose the colonial mindset engrained in this kind of thinking.

Yet, these questions are not entirely metaphorical. There is a reality about them. Yes, a reality which emerges virtually, but, nevertheless, a reality whose virtual nature makes it highly complex to grasp, observe and witness. This is how I started to develop my research question and interest: How do we link such a reality, which arises from virtual conditions, with our offline life (if anything like that even exists, after all)? How can we make it more observable and graspable? That’s why I thought a term such as ‘installer colonialism’—a virtual version of settler colonialism—might help at least to start such a conversation. ‘Installer’ because you do not have to sail or move anywhere; you just have to install the software (as in the Cambridge Analytica case) in order to extract data from the data-lands, often without their owners’ consent. Because the Internet has its own geo-political orders and borders. Domains, for instance, both represent these territories and can be seen as markers of digital borders. So, the once very dominant idea that the Internet is a borderless space is now almost completely contested.

Malte Kleinschmidt: True, but how should we understand these new geographies of the Internet in this context?

Pinar Tuzcu: With new approaches. For instance, in my article, “‘Allow Access to Location?’: Digital Feminist Geographies” (2016), I suggested that borders still exist on the Internet. Perhaps they exist in different forms and have a different material reality, but they are still at work. In that article, I argued that we need a different kind of politics of location in order to understand how this new geo-political order of the Internet functions. This is what I was interested in and why I use the term cybercolonialism—because it articulates and places emphasis on this spatiality. There are also other concepts like digital colonialism or data colonialism. Although they are very useful concepts, for me they did not quite capture this specific spatial affair, the geo-political order that is so much at the heart of the colonial mindset and logic. What are, for instance, the digital border regimes? How do they function? So, I was definitely inspired by these terms, but my concept of cybercolonialism aims to expand them. For instance, with the concept of ‘digital colonialism’, Danielle Coleman offers a very illuminating analysis from a legal perspective (2019). But when I say cybercolonialism, I want to draw attention to the geo-political dimensions as much as to the unique material reality of digital data. Because for me, it is not digitality itself, the technology itself, nor the data, which causes the colonial impacts, but the way certain data and digitalisation policies are installed in particular geographies and how they target specific groups of people. And, in fact, our data has always been collected; this was happening before the Internet and digitalisation too. But with the Internet, this process accelerated, and the technocrats who own or work for these companies make incredible amounts of money by using our digital data.
So, it is not just about collecting data anymore, it is about extracting data! And, in return, far from becoming stakeholders in the income and profits they generate—because, in the end, they make money with our private ‘property’—we don’t even know who is using our data, for what purpose and to whom it is being sold. We have no control over something that belongs to us. So, with the term cybercolonialism, I want to draw attention to, yes, the fact that they collect our data, but also to what these companies are doing with our data, how they use it to shape the political and social climate globally, and whether there is a way to claim not only legal but also economic rights over the data that are stolen from our data-lands.

As Coleman also brilliantly explains in her article, one example that illustrates how this geo-political order works is the difference in regulations regarding data and online privacy rights and protection between the Global South and the Global North. The privacy protection laws in force in the Global South are currently in very poor condition and leave the citizens of those geographies highly vulnerable to exploitation. At the same time, in Europe, we are fiercely discussing new data protection rights, new laws are being passed, and we arguably get the sense that our privacy online is much better protected than it was a few years ago. But when you look at the Global South, this is not the case at all. This is, of course, an obvious example of how those power relations are still at work, encoded in both discursive and algorithmic applications. And, in fact, the issue is much more complicated than simply having a better data privacy policy; it is also about algorithmic transparency and democratisation.

2 Data Fictionalisation and Cybercolonialism

So, in my work, I am moving from the ‘what’ to the ‘how’. Hans von Storch and Carsten Gräbel’s article, The Dual Role of Climatology in German Colonialism (2018), is a fascinating piece. Very short, but very informative, because it brings me to exactly the question of the how, the question of methodology. For instance, how do these data companies and their data scientists use predictive analysis? With their article, von Storch and Gräbel opened up a new perspective for me. These authors look at how, at the beginning of the 20th century, some entrepreneurs used weather forecasting data from weather stations such as the Deutscher Wetterdienst, located, for instance, in Hamburg, to predict the anatomical but also cultural characteristics of people living in other parts of the world, even before they sailed their ships to colonise them. This is a very interesting phenomenon because it shows the parallel with the role predictive analytics plays in analysing digital data today (Editorial Note: see the contribution by Douglas Becker on predictive policing in this volume). So, when you read von Storch and Gräbel’s article, you see—because I also went back to the original studies they refer to in their paper—how this weather forecast data was used to make predictions about people these scientists had never met, people they knew practically nothing about. I saw that these studies, which used prediction as their methodological approach, were entirely suggestive works; yet the data is there, very well organised, categorised and presented.

This is why I call it data fictionalisation. Data fictionalisation works for me almost like a riddle. You want to reach a certain conclusion, so you acquire data that supports it; in this sense, reality doesn’t quite matter. And you justify it with numbers and statistics. By doing this, you also develop knowledge and a new way of knowing. That is why, in my article in this volume, I zoom in on a short expression used by Cambridge Analytica’s data scientist and former CEO, Alexander Nix: “We knew that…” (see Tuzcu in this volume). Nix uses this phrase in a promotional video when he describes to his clients how they interfered in several elections globally by using predictive analytics.1 The predictive approach apparently employed in this example unfolds the complexities and the continuation of this colonial logic. It now takes the form of algorithmic power, which is accumulated in the hands of a very, very tiny group of people who, with their tech companies, are usually located in Western territories—and let’s be sure not to forget China—but also operate in non-Western territories. I also show in my article in this volume that when Alexander Nix says, “We knew that…,” he is actually telling us that his way of analysing data brought him to a prediction that helped his client, a political party, to win the election in Trinidad and Tobago. But my analysis shows that his predictive strategy was in fact based on the racial colonial divide and eventually polarised the youth of the country by hijacking their unified political actions. So, in this case, I am asking a rhetorical question about the scientific methodology used to produce such predictions. Namely, what kind of coding could legitimise labelling the people of Trinidad and Tobago, in Nix’s words, as ‘docile’, ‘lazy’ and ‘apathetic’ subjects, as the company did? What kind of algorithmic categorisation lies behind it?
This makes me curious because, in this video, as Nix’s narrative itself reveals, we see that these young people demonstrated their political interest and motivation during the election campaign, yet he labels them as politically apathetic. My aim in focusing on his words “we knew that” was to undertake this methodological investigation and see where it leads within the current discussions on algorithmic bias, or, as I like to call it, algorithmic othering. That is to say, to show how data fictionalisation operates in this case, in order to figure out how their predictive analysis influenced the behaviour of citizens and potential voters.

Malte Kleinschmidt: You just started to talk about the case of Trinidad and Tobago, which is a telling example of how Cambridge Analytica influenced politics in the Global South, and similar companies may still be doing it. Can you tell us a bit more about what happened there, for example, what kind of actors were involved? How is this connected to the difference between virtual reality and non-virtual reality?

Pinar Tuzcu: This is exactly why I introduce cybercolonialism as a concept, hoping to open up a new kind of understanding in this regard. I am really interested in what reality is in virtual reality, in how we can bring the analogue and the digital power structures together. Because these things do not just happen on the Internet and stay there. There is an incredibly strong intra-action within these spaces, which we still lack the language to describe. I would suggest that this is where we find ourselves as citizens at the moment: in the currents of this intra-action. Cambridge Analytica is a good example with which to explain this intra-action, but it is a very complicated story. Whenever I read about it, I notice that each time they call themselves something different: sometimes a ‘behaviour changing company’, sometimes a ‘private intelligence company’ and sometimes just a data company. Sometimes, they use the term ‘global election management agency’ to describe themselves. But I think they were all of these. The company was founded in 2013, but before that, there were already smaller firms. After the public scandal was revealed in 2018, the company declared bankruptcy. So, it is no longer functioning as Cambridge Analytica, but there are still rumours that the smaller companies are continuing their work. I think it highly likely that this is true. It would be naïve to think that the way Cambridge Analytica influenced populations has simply disappeared. Even while investigations into what they did in the elections continue at different levels, their global impacts are still unfolding. We still don’t know the entire picture, because it was revealed, thanks to the whistle-blowers, that they were involved in the general elections of more than 86 countries. So, it is a global phenomenon! But, interestingly, no country, to my knowledge, has so far openly confirmed that it commissioned Cambridge Analytica for an election campaign.
In fact, in 2020, the British data protection watchdog, the Information Commissioner's Office, came to the conclusion that the company was not involved in Brexit, although, ironically, Alexander Nix, together with his high-ranking ex-employee Brittany Kaiser (who later became a whistle-blower), says in the video that they were involved in Brexit (BBC News 2020).

In the case of the election in Trinidad and Tobago, to my knowledge, there has so far been neither denial nor acknowledgement. What we have in our hands as evidence of the company’s involvement in that election is the video I have just mentioned and some documents that were leaked by Brittany Kaiser and another high-ranking ex-employee. In this video, Alexander Nix (the CEO) openly tells his clients how they actually managed these elections, taking the example of the 2010 general election in Trinidad and Tobago. Nix triumphantly claims to have launched a movement called “Do so! Don’t Vote” to prevent young people, the Afro-Trinidadian and Indo-Trinidadian youth, from casting their votes in the election. He explains that, in reality, however, their real target was the young Afro-Trinidadians. And while explaining how active these young people (both groups) became in the “Do So!” movement, he labels them as lazy, docile and a generation not interested in politics. So, the company’s election strategy was built on exploiting and manipulating the country’s racial diversity, as it reappropriated the colonial history of stereotypical depictions of the country’s cultural and religious differences. Nix later explains, in this leaked video, that despite their active participation in the movement, the Indo-Trinidadian youth “went out and voted”. In fact, the turnout among 18- to 35-year-old Indo-Trinidadians was around 40%. But actually, Nix says, the company’s client needed only around 6% of this turnout to win the election, since the election results between the two parties were usually very close. Indeed, the winning party of the 2010 general election was the UNC, the party that allegedly commissioned Cambridge Analytica and increased its popular vote from 29% to 49%. This is a huge increase. That is success, and that is why Alexander Nix says towards the end of the video, “We won”, which is very irritating and confusing.
It leaves you with an eerie question: “Who won the election? Cambridge Analytica?”.

What is stunning and bothers me the most is the kind of role data scientists today are prepared to play in such events, because it raises questions about the hegemonic power that is enabled by scientists. For instance, the app “This is your digital life”, which extracted millions of people’s private data on Facebook without their consent, was developed by a Cambridge University researcher. So, when you look at the dynamics of this kind of hegemony, and its relation to science, to the question of citizenship, political participation and public elections, you ask yourself: “Is Cambridge Analytica part of the government now? What do they mean when they say ‘we knew that’ or ‘we won’?” It is very hard to grasp their position in this entire scenario, because it goes beyond being a data company that manages a political party’s election campaign. How do we locate them within this dynamic of democracy, political participation and citizenship? Who is voting, who is not voting, who is managing the electoral system? What we can observe here suggests a different mode of citizenship, emerging from the predictions made by data companies. At the same time, their language of expertise is not at all transparent, not accessible, and it conceals the real impact of the entire operation. It is remarkable how they have managed to keep their language so opaque and inaccessible, yet can justify it by saying, “Well, we knew it”, without ever explaining HOW they knew it.

Malte Kleinschmidt: It is really difficult to know the impact when all our knowledge is based on documents from whistle-blowers and promotional videos like the one you mention. Cambridge Analytica wants to show that it can change behaviour and even an election, in order to win new clients, of course. So it is difficult to estimate the real impact of their work, but I agree that we can learn from it about the ways colonial logics are embedded in what they do. I think the logic involved, and also what they actually want to influence, is not so much about censorship or domination; it reminds me of a Gramscian notion of power, which is about being able to conduct power, e.g. by controlling the infrastructure. It is not about domination in the first place, but about producing consent: truth, cohesion, forms of subjectivation, control or influence.

Pinar Tuzcu: That is a very good point. But, today, whistle-blowers and leaked documents have become essential for us, for citizens, to understand what is really going on. We need to acknowledge that. Because otherwise, if they hadn’t leaked it, we wouldn’t have access to certain pieces of information that were kept secret from us. Of course, whistle-blowers have an ambiguous role in this narrative, since they were once actively involved in these things and, for some reason, mostly, as they claim, because of their bad conscience, decided to make these ‘secrets’ public. So, despite their ambiguous role, I still think that what they do is very courageous and also very important for the public, and I truly believe that they should be given special legal protection for it. Because, firstly, without them, we wouldn’t know what we know today, and secondly, being a whistle-blower usually comes at a high cost, often that of risking their lives. However, this should by no means set them completely free from what they did before they became whistle-blowers. That’s another matter. They are kind of ‘frenemies’, if you like. Anyway. But, when I talk about transparency and the inaccessible language used in this setting, I also refer to this kind of relation. It is also because we encounter a highly cryptic professional language. There are algorithms, there is coding and there are programmers, and we have no clue what they actually do. We don’t know how to read them, what sort of algorithmic ranking and filtering are used, etc. This highly technical language comes from mathematical procedures and makes people hesitate even to try to understand computing. The other aspect of these hegemonic relations lies in the creation of such an isolated and exclusive professional class, which makes the arguments they generate highly difficult to analyse.
So, we can really speak of a new group of people producing and disseminating controlled knowledge by using artificial intelligence, not only at the technological but also at the onto-epistemological level, because they produce knowledge according to our digital existence/being. In fact, they perform science: they like to look, act and sound sophisticated, and claim that their way of processing the knowledge extracted from the data is highly technical and sophisticated. So, coming back to your question regarding the Gramscian notion of power, that is why, in another article, I called this group of technocrats, like Alexander Nix, the artificial intelligentsia (see Tuzcu 2021).

3 Power and Language: Perspectives for a Democratisation of Knowledge

There are these technocrats who understand how to use and create technology. We use technology, but we do not understand how to create it or how it operates; in fact, technology now uses us. People assume that you need to have studied it to understand it, and everyone else, particularly disadvantaged people, is expected to trust and go along with whatever these technological experts, the programmers, generate, because we as citizens do not understand technology. And of course, this fear and these performative suggestions generate societal hierarchies which, in this case, emerge directly from access to knowledge. Because when we look at how much computers and algorithmic calculations influence our lives and realise how little we know about them, it does indeed become rather threatening. But let’s also not forget that there are quite a few people working against these forces. So, the point here is not to become technophobic but, on the contrary, to engage with and learn more about technology.

It is exactly this type of hegemonic power which emerges from this language of professionalism when Alexander Nix says, “We knew that”. Well, how? “Predictive analytics”. Okay, and how do they work? There is a very inaccessible language here, and when you go into it, you see that it creates new forms of hegemony, which brought me to my concept of the cybaltern. This is how we are excluded from that form of knowledge production.

Malte Kleinschmidt: So, would you say that we all need to study informatics right now, or do we need to achieve something like a critical literacy of this virtual reality to understand what happens behind the scenes? What kind of perspectives do you see for challenging these hegemonies? There are a lot of movements from below, also in virtual reality. A really popular example would be the Black Lives Matter movement, which was born with the hashtag #BlackLivesMatter. Of course, it was not the movement that was born, but this label. A lot of the struggles around this also take place on the Internet. What perspectives do you see for this kind of political mobilisation? Do we need a radical change, the democratisation of the whole infrastructure?

Pinar Tuzcu: Let me start answering your question by underlining that I began using the term ‘cybaltern’ very carefully in this context. I was not really satisfied with the term ‘digital subaltern’, because it doesn’t really explain what makes the subaltern digital. The subaltern has a subjectivity that denotes a gap in knowledge: you cannot even turn them into data, because they have neither digital literacy nor access to digital hardware and software. But when we talk about the digital world, as we see in the example of Cambridge Analytica, there is a group of people whose voices are muted and rendered unheard, paradoxically, despite and because of the digital tools available to them. So, it is not a matter of literally being able to use browsers, or hashtags, or Twitter, etc. These are people who have access to digital hardware and know how to use certain digital platforms and software, but who are nonetheless silenced and, perhaps worse, whose voices are manipulated. Trinidad and Tobago is a perfect example, because young people were involved, they were active, they showed political interest, they were motivated and they actually worked in unity. But despite this, the data companies managed to silence them, manipulate their political positions and divide them. This is what I want to highlight in my argument. The silencing does not necessarily happen because groups of people lack access to digital tools, but because they have it. Paradoxically, the tools that should, in theory, make their voices heard become the very means of suppressing them. For example, young people in Trinidad and Tobago produced YouTube videos etc., but their movement was framed by the company as a kind of ‘fake’ grassroots movement. To be honest, we can’t say how much of it was fake, but we can at least say that it was hijacked. So, we come again to the question of political activism and participation in the digital age.
That is, what do we even mean when we say grassroots movement, protest, uprising, etc. in times of social media, after all? Grassroots movements are increasingly becoming vulnerable to such manipulations on the Internet, and it is hard to understand who is speaking from where, which makes everything more complicated. So, with the term ‘cybaltern’ I am trying to draw attention to yet another new subjectivity emerging from this field. If we talk about democratisation today, it is important to understand these subjectivities.

If your question is about what the most efficient or productive policy for data privacy would be, one that would hinder or disrupt such colonial continuation, and if someone answered that question by recommending the deprivatisation of data companies, I would ask what deprivatisation means in this context. Should it be some kind of governmental task, and do we actually want to transfer the responsibility to the state? Because we are definitely talking about a different kind of socio-economic constellation—basically a different sort of product, value, labour and property relation here—that these companies propose and manage.

If you suggest, for example, that the state should gain control over the flow of people’s digital data, my answer would be a straight “No”. I do not think that giving the state control over any kind of data puts it in the right hands. But if we talk about imposing targeted and specific regulations on these companies, I would absolutely agree. There have to be new regulations specifically tailored for Facebook, because Facebook operates differently from Instagram, Instagram operates differently from WhatsApp, and so on. The terms and conditions of these regulations should be drawn up in cooperation with digital data rights activists and scholars, and the process should definitely involve the public—by this, I mean, for instance, the users of Facebook. Claiming the knowledge and dismantling their tools are definitely necessary here. First of all, people need to learn what kind of algorithmic coding processes these companies are using and what their purposes are. We need to understand what filter bubble algorithms are, where they come from and why they are employed on certain social media platforms. For instance, a specific filtering application is employed in Facebook but not in WhatsApp. So, we need to ask what kind of algorithms WhatsApp actually uses to access its users’ digital data. What is the purpose of the algorithms it uses? We need to know more about this, because this is exactly where they are generating this new kind of hegemonic power. That’s why I would actually say that we need algorithmic democratisation. It might sound banal, but it is a straight answer to what you asked.

Beside this, data rights classes on algorithmic coding should definitely be included in school curricula, as kids need to learn what they are dealing with when they use Facebook or other platforms. We need to go beyond interfaces and look at the digital infrastructures and their applications. At the same time, as a legal field, it should become part of law departments at universities. I also think that we as social scientists, who are particularly dedicated to social justice, need to overcome the fear of engaging with this kind of language of professionalism proposed by the technocrats. We need to claim our right to decode and recode what these companies are doing with our lives technologically, and we need to understand the techno-discursive debates. This is what I’m trying to do in this piece on Cambridge Analytica and in other articles that I have recently published. It means that I do not have to know how to program a computer in order to understand the political and social influence the data companies have on my life. The way this language of professionalism is proposed, suggested and created excludes us and tries to invalidate all the knowledge we have accumulated so far as scholars. I propose a methodology to get into it and to understand how it operates behind this techno-discursive power. But, in order to reveal how software applications generate new forms of power and who is benefitting from this power, we need to ask different kinds of questions. That’s why I borrowed Stuart Hall’s formulation and called it de:coding. This encapsulates decolonial coding, which means techno-discursively exposing the hegemonic characteristics of algorithmic power that constitute new premises of cybercolonialism.

To answer your question regarding online or hashtag activism, I would say that this is definitely a part of the digital world, because, nowadays, almost everything emerges online as a participatory action first, for example, when people are called to rallies online. This is not only the case with Black Lives Matter, but also happened in the Gezi Park demonstrations in Turkey, in Lebanon and in the Arab Spring, everywhere. And this is also what Cambridge Analytica claimed to have done in the election: they claimed that they launched a grassroots movement through a fake Facebook account and mobilised people to go to demonstrations. Can today’s grassroots movements be used to control election results or regime changes? This is something we need to be really careful about when talking about movements, riots, etc.

Having said that, I think such activism that emerges online and uses online platforms is very powerful and very important. It is a new, massive form of calling people out, as in the case of the #MeToo movement, for instance, and can be very effective if it goes viral. This is what happened with Black Lives Matter, what happened last summer after the killings of George Floyd and Breonna Taylor. At the same time, we have to be aware of the circumstances. You mentioned that Black Lives Matter did emerge online, but the action was cumulative, especially against the backdrop of COVID-19 measures when people were living under lockdown. It became much more vital and important to use hashtag activism, but people still went on the streets. And the movement happened on the street. So, hashtag activism is only one dimension of activism.

4 Perspectives for Post-Colonialism

Radhika Natarajan: Thank you, Pinar, for your very illuminating exposition of the different aspects you discuss in your paper. Instead of a question, I have a few remarks that may lead us to further aspects which we could discuss. I very much liked the way you have clarified how we cannot use vocabulary and insights which simply dichotomise the Global South and the Global North, or erroneously describe responses from the Global South as being responses from below. Indeed, we do at times feel that responses at the grassroots level are included, but your work shows how complex and intertwined global inequalities are. On the one hand, we have disenfranchised people not only in the Global South but also in the Global North. I do not like to use the term ‘white trash’, but it points to the kind of disenfranchisement to which people are subjected, who apparently are seen as white or see themselves as white, but socially and socio-economically do not belong and thus, again, do not have a voice concerning political strategies or social developments that directly affect them. And this might influence their access to technology or the ways in which they use it.

You mentioned Gezi Park and what happened in Turkey. In India, where I come from, we observe a markedly right-wing nationalist movement which has taken over mainstream politics. If you look at how it operates in India, you could even call it neo-fascist. And one of the most important tools they use is WhatsApp. The information technology revolution occurred much earlier in India than in other countries, and ushered in a new age and possibility of communication. Nearly everybody today has access to mobile phones and to WhatsApp. And there is a great amount of politicising that happens, for instance, before and during communal riots which flare up, or rather are incited, between members of two religions, the majority community, Hindus, and the minority Muslims. This is engineered and organised simply through rumour-mongering via social media. So, on the other hand, we have a relatively broad access to technology in some countries of the Global South, and it is not merely ‘the West’ or the Global North which uses it to certain political ends. In India, I would add, it is not necessarily so much about communal disharmony among real people on the ground, but the rift that is sown and widened through the employment and deployment of technological means. For me, it seems that a kind of social engineering happens, which fosters and furthers an already existing communal unease.

To mention another aspect, many people from India are at the forefront as leaders in the field of technology. Sundar Pichai, who is the CEO of Google, comes from India and was raised, educated and socialised there. He is not just a person who was born in America and has Indian origins. No, he is a person very much from India who moved to the USA and became the CEO of one of the biggest tech companies in the world. It shows that just because you have so-called people of colour in positions of power, this does not automatically mean that they can or will take up the cause and start implementing strategies to include everybody. This is in line with the point you make about the complexity of social inequalities perpetuated and stabilised by technology, which cannot be altered merely at the individual level but are structural in nature and rendered invisible by the algorithms at play.

Your paper has given me quite different ideas on how to develop post-colonial discourse. It highlights some aspects we could advance and need to critique from within, in order to comprehend ongoing debates and adjust to the current developments. For example, your use of the term ‘cybaltern’ and your thoughts on it are very interesting.

Pinar Tuzcu: Your comments make me want to highlight again how intertwined these things are. Definitely, having a person of colour in high-ranking positions at these companies doesn’t mean that we are going to automatically trust them. Because being a person of colour is one thing, but there are also social and political positionings, right? Yesterday, the Internet was something uniting and connecting; today we are talking about its polarising character. I also think that this has something to do with its infrastructure, with its design. I say this because, as we all know, its parameters are built on binary codes, i.e. on (1) and (0). And when we talk about the polarising impact of the Internet, this might sound a little bit speculative, but I do think that there is a relation that needs to be explored. Because political polarisation happens through the Internet, and different kinds of manipulative information are channelled into extreme right-wing positions. And we were able to observe the possible outcomes just a couple of days ago in the course of the invasion of the US Capitol.2 So, there is a group that is being radicalised. And parts of this group might be poor, unhappy and dissatisfied, but what is important is that we are experiencing a different kind of informational polarisation. That is why, for instance, algorithmic filter bubbles are interesting. They determine what they call your ‘digital life’, your digital personality. So, collected data points generate a digital personality for me. The programs collect my digital fingerprints, kind of put them together and construct a new digital personality. And then I am bombarded with certain pieces of information that are tailored according to my digital personality but, at the end of the day, actually target the analogue me. So, the term ‘behaviour-changing company’ comes from this logic, because they are experimenting with our psychographics by using big data.
In return, they want me to believe that my digital self, which they have created out of my clicks, comments, likes, etc., is me, my analogue self. This is exactly where everything starts to become manipulative and polarising. Because they actually guess what we would like to do next, according to our digital data, without having met us, without knowing us. Imagine someone always guessing what you need, what you would like to eat, travel to and read, and who also kind of provides you with these things before you even ask for them. This might sound like a very romantic and wonderful idea at first, but it is also very scary. Because it reduces our personality to our calculability, to our ‘guessability’. You can only exist to the degree that things can be predicted about you. So, it is your predictable values and parameters that generate your digital identity. This also creates an illusion, namely that almost everybody sees the same thing and receives the same pieces of information and news when we look at our news feeds, our social media threads. But it is not true. We are not seeing the same or even similar things all the time. This is why I suggest that if we want to have algorithmic democratisation, these companies have to tell people what sort of algorithms they are using on their platforms. What are their functions? These points are important, because we can then have a better understanding of why we are getting this particular piece of information but not that one. But this is a complex process.

Radhika Natarajan: Yes, of course it is quite complex. The events at the Capitol on the 6th of January are a good example, as one could argue that it is not the working poor or the white working class who came to the fore here but middle-class and even upper-class people. So, in this case, it is too easy to just assume that it is only poor people who are part of the Trump movement, because the aspect of social class is entirely different and continues to be overlooked. Many of them are not necessarily disenfranchised people.

Pinar Tuzcu: Yes, and there is one last thing I would like to add to your point. It is so interesting how the white working class is represented through people who are not disenfranchised at all. Whenever I hear about Trump’s election and the groups of people who voted for him, the prompt answer is that it must have been the white working class, or poor white people. I think: “How? What did Trump give to them?” He gave them nothing. So, this also has something to do with, in my opinion, how artificial intelligence and algorithmic power kind of take over control of the question of representation. That is, it becomes blurrier who represents whom. Yet, you see law enforcement people and veterans who were part of the mob that actually stormed the US Capitol and attempted to storm the German parliament. That’s why I’m saying that this kind of polarisation is also related to the language and categorisation that are produced through binary coding. If we talk about the white working class today, we need to ask how and by whom the white working class is actually represented online. By Trump? Or, going back to the question that gave me the idea of coining the term ‘cybaltern’, I would rather ask: What kind of coding produces these people as ‘white working class’ today?

Radhika Natarajan: Thank you very much again for sharing your thoughts.

Malte Kleinschmidt: I also want to say thank you for the great interview.