Rikke Frank Jørgensen
This chapter reflects on methodological lessons learned from doing empirical research on Google and Facebook as part of a research project conducted 2015–2017 on the commercialized public sphere (Jørgensen 2017, 2018; Jørgensen and Desai 2017). The project set out to explore how the two companies frame human rights such as freedom of expression and privacy in relation to their platforms; how this framing is reflected at governance level (e.g., content regulation), contractual level (e.g., terms of service), and technical level (e.g., default settings); and the human rights implications of these practices. My motivation for researching these questions was twofold. First, the companies are increasingly powerful and effectively influence the boundaries for how billions of users may exercise human rights such as the right to privacy and freedom of expression. The human rights impacts of their practices may arise in a number of situations and decisions related to how the companies respond to requests from the government to restrict content or access user information; how they adopt and enforce terms of service; the design and engineering choices that implicate security and privacy; and decisions to provide or terminate services in a particular market (Kaye 2016, para. 11). Second, researchers have little insight into the norms and narratives that shape these companies’ practices. As emphasized by platform scholarship (Ananny and Gillespie 2016), these systems, people, and values are too often out of reach of scholars whose job is “to make explicit, orderly, consistent—and open to critical analysis—these ‘orientations’ that are usually taken for granted by empirical researchers” (Calhoun 1995, 5).
The research draws on a science and technology studies approach, highlighting how the design and implementation of any particular technology are patterned by a range of social, political, and economic factors (Williams 1996)—for example, how a particular economic interest patterns the privacy features that a platform offers (see also Musiani’s science and technology studies approach to Internet governance in chapter 4). My analytical vantage point has been to think of discourses—or what Mansell (2013) and Flichy (2007) call imaginaries—as an integral part of understanding these specific platforms. Similarly to a paradigm, much of the knowledge generated by a discourse comes to form common sense (Edwards 1997, 40). It is a “background of assumptions and agreements about how reality is to be interpreted and expressed, supported by paradigmatic metaphors, techniques, and technologies and potentially embodied in social institutions” (34). A discourse continually expands its own scope, occupying and integrating conceptual space in a kind of discursive imperialism. Presuming that discourses shape social practices and expectations, unpacking people’s way of framing and ascribing meaning to specific issues may help us understand and confront their common sense, underlying assumptions, and metaphorical point of reference (Jørgensen 2013, 6). In short, discursive frames are used to situate events, fashion a shared understanding of the world, and guide problem-solving (Barnett and Finnemore 2004, 33). Engaging with representations such as connecting the world, building social infrastructure, organizing the world’s information, and making information universally accessible can help unpack the relationship between a technology and its embedded norms and values, as well as illuminate how a particular framing benefits certain interests and downplays others (see also Hofmann’s take on discourse analysis in relation to Internet governance and multistakeholderism in chapter 12). In explaining why particular frames are promoted over others, the project relies on an actor-oriented perspective (Long 2001), emphasizing the agency of company staff and their meaning making, motivations, and strategies. This implies attention to how individuals explain particular meanings and how these meanings are translated into practice.
In terms of data collection, I have relied on a context-oriented qualitative approach, using interviews and online material as key sources of data (Huberman and Miles 1994). I collected data via interviews with Google and Facebook staff members (primarily in Europe and the United States), as well as via publicly available material, including public presentations by company staff. To retain some level of privacy for interviewees, while qualifying the findings to the widest extent possible, I identify them by affiliation but not by name.
In the following, I use the research case to discuss challenges pertaining to this type of policy-engaged qualitative research, focusing on getting access to staff within the organizations, the interview situation itself, and the data analysis. While the case and its analysis relate specifically to Google and Facebook, the chapter raises broader issues on concepts, methods, and frameworks for conducting Internet governance research. In the concluding section, I draw out some of the lessons learned and relate these observations to Internet governance as a research field.
Gaining access to interlocutors within Facebook and Google proved to be a major challenge in relation to the data collection. Whereas both companies had initially—via staff based in Denmark—agreed to participate in the research project, the subsequent process of locating people willing to talk about human rights turned out to be very difficult. Practical obstacles ranged from lack of publicly available contact details to lack of response to emails. Neither company, for example, has contact details of staff listed on the company websites. In contrast, many of the employees can be found via LinkedIn Pro, which proved to be a useful tool to locate staff within a given area of the organizations. Once staff were identified and contacted, a subsequent challenge related to the difficulty of finding informants willing to share their observations, beyond the designated spokesperson within either organization. In practice, I would continually be referred to a selected point of contact when I tried to contact staff with different expertise across the organization. Although I explained that gathering different perspectives across the organization was a point in itself, staff would insist that the identified spokesperson would be the best person to answer my questions. Eventually, I almost gave up on interviewing people and considered basing the analysis solely on publicly available statements such as the Zuckerberg archive,1 combined with the relatively large number of publicly available presentations, interviews, and so on from Google and Facebook founders and policy staff. Instead, I decided to revise my data collection strategy and focus on spaces where the two companies were present, such as Internet policy meetings (e.g., the Internet Governance Forum, EU meetings related to Internet governance) and more specific industry or research events (e.g., Global Network Initiative Public Learning Forum, Google Developer Groups meetings). This proved to be a good strategy, both as a means of establishing follow-up contact and in terms of engaging in discussion outside the more formal interview setting. I realized that when I made personal contact with Google and Facebook staff—for example, at a public meeting—they would often be more willing to be interviewed subsequently. Another useful strategy was to talk to local organizations that had some level of cooperation with the companies and were willing to introduce me to specific staff members. In sum, gaining access to interview subjects within the companies required an extensive amount of work compared with previous data collection I have been involved in—for example, among activists, civil society organizations, and government officials.
Interviewing elites posed methodological challenges to the empirical data collection. In addition to general interview considerations such as assessing the appropriate length of an interview, gaining the trust of respondents, dealing with respondents who do not answer questions, and choosing between closed- and open-ended questions (Kvale 1997), specific issues arise when interviewing elites. The literature covering these methodological challenges has grown (Dexter 1970; Harvey 2011; Mikecz 2012). Elite interviewing raises questions of both validity (How appropriate is the interview format to the task at hand? Will the interviewer gain valid data?) and reliability (How consistent are the results?) (Berry 2002). Elites are often more likely than other interview subjects to attempt to control the interview and to be particular about the questions they are willing to answer (Harvey 2011, 439). This prompts researchers to consider how they present themselves and to show that they have done their homework, because elites may, consciously or unconsciously, challenge them on their subject and its relevance (Zuckerman 1972).
I conducted approximately 20 interviews with staff from Google and Facebook as part of my data collection. The interviews were conducted face-to-face or remotely via Blue Jeans (Facebook) or Google Hangouts (Google). I also visited the US and international (Dublin) headquarters of both companies, both for interviews and to attend meetings. Most of the respondents were staff members with responsibility for public policy, privacy, community operations (Facebook), and removal requests (Google). However, I also conducted interviews with technical staff (Google) and staff working on education and user experience (Google). Two aspects of the interviews proved especially challenging. The first was how to handle an interview in which the topic is sensitive, the interview situation is restricted, and the respondents are cautious about what type of information they convey. The second was how to tease out the implicit sensemaking and taken-for-granted context that effectively shape the framing and governance of human rights, such as the right to privacy and freedom of expression, within the two companies.
In relation to the first set of challenges—the constrained interview—I tried to leverage my own expertise in the field and to create an atmosphere of peer-based conversation rather than an interview per se. Because of my previous work on Internet rights and freedoms (Jørgensen 2013) and because access to respondents had proved so difficult to obtain, I was very aware that the topic of my research was politically sensitive within both organizations. As an illustration of this, none of the respondents would allow me to record the interview, which meant that I had to rely on extensive note-taking during the conversations. Writing notes during the interview had the disadvantage of drawing attention away from the conversation, limiting eye contact between the interviewer (me) and the interviewee, and at times making it difficult to ensure that the interview stayed on track. However, note-taking also has some advantages compared with recording. While typed notes provide a weaker description of the interview, they potentially provide more detailed off-the-record information (Byron 1993). In other words, while recording provides a detailed record of the interview, it may result in less information being conveyed because it limits the respondents’ willingness to talk. In contrast to previous research that has found note-taking to encourage more frank and detailed responses (Dexter 2006), I did not find the responses to be significantly richer than recorded answers on similar topics available in the public domain (e.g., recorded panel discussions or question and answer sessions). I did find, however, that note-taking during the interview forced me to focus on key messages and to document those messages instantly.
To facilitate a trustful and open atmosphere, I would start each interview by explicitly addressing an existing controversy—between, for example, privacy advocates and Facebook—and outline how my research hoped to engage with these existing debates. My intent was to demonstrate solid knowledge on the topic and a contextual and political understanding of the issues. In many cases, this would generate a more open and friendly tone; however, my general sense was that the interviewees were extremely guarded and synchronized in their responses, and it proved difficult to get respondents to elaborate more broadly on company practices and policy development. While research indicates that the interviewer can use silences to create a tension that may lead to more detailed answers (Berry 2002), I was generally trying to keep the conversation going. Given the sensitivity of the topics, I was afraid long silences would contribute to an awkward atmosphere and thus make the respondents less likely to elaborate on their answers.
Elites generally dislike closed-ended questions, which confine them to a restricted set of answers: “Elites especially—but other highly educated people as well—do not like being put in the straightjacket of close-ended questions. They prefer to articulate their views, explaining why they think what they think” (Aberbach and Rockman 2002, 64). While I took note of this in my interview style, I was also aware that I often had limited time to speak with the respondents, and a structured approach was therefore necessary to obtain a focused response in a short time frame. Consequently, I conducted most interviews in a semistructured form using interview guides that reflected the key themes of my research. The semistructured format allowed for improvisation and for following up on subjects that emerged during the interview, including issues that were not part of the interview guide. Similarly, some areas were relevant to some respondents but not to others; hence, the thematic structure I followed varied. Despite the cautious approach toward staff interviews that both companies had demonstrated, some respondents were surprisingly generous with their time and would allow up to 90 minutes for an interview. In general, the face-to-face interviews lasted considerably longer than the interviews conducted remotely.
Although I experimented with relatively open-ended questions, the answers were in most cases fairly short and seemed standardized. However, there were also respondents who did not follow this pattern and would go into longer elaborations on a topic. In some cases, the respondents would, deliberately or inadvertently, not answer a question. In these circumstances, I would rephrase and ask again. If they still did not answer the question, I would continue my line of questioning and then circle back to the original question later in the interview. While this circling-back approach proved useful in some interviews, it was the exception rather than the rule. Arguably, staff working with public policy within either organization are accustomed to dealing with policy makers, media, civil society, and so on and are very conscious of how they respond to politically sensitive questions.
When I compared the data collected via interviews with the publicly available presentations—for example, on content moderation practices—I found that the narratives and examples used were almost identical. In other words, after having listened to a few public talks on a given topic (e.g., transparency reporting), I could anticipate how a question related to that topic would probably be answered by one of my respondents. Irrespective of whether the topic concerned the core company values (e.g., the importance of free speech and privacy), the approach toward human rights (e.g., the description of corporate safeguards against government overreach), or governance challenges (e.g., content removal mechanisms), the responses were closely aligned across the people I talked to and even across the two companies for themes common to both organizations. In terms of corporate culture, it is interesting that the storytelling is so uniform about what the company is doing and why it is doing it, in particular the role that technology is seen to play in making the world a better place (see later discussion). In terms of critical self-reflection, my experience was that when I referred to a given company policy that had been criticized (e.g., Facebook’s Free Basics or Google’s merger of its privacy policies), it was often presented as a factual misunderstanding, and thus as something that could be remedied by more information on what the company was actually doing. Few of the respondents were willing to reflect substantively on criticism—for example, from researchers or civil society organizations—of a specific practice.
The second set of challenges relates to identifying the implicit sensemaking and taken-for-granted context that inform the human rights narratives within the two companies. Respondents seemed cautious and guarded when talking about corporate practices and human rights, yet it became clear to me after a couple of interviews that they shared an interpretative frame that governed their understanding of the relationship between the company and the framework of international human rights law. To tease out this frame, and to make it more explicit, I would first take note of the vocabulary and metaphors staff used to talk about specific human rights issues. I noted, for example, that the respondents repeatedly highlighted the liberating and transformative role of technology and thus of technology companies in making the world a better place. “We believe that, if more people are connected, the world will be more successful economically and socially” (#4, Facebook 2015). “What we do will help make the world a better place” (#3, Google 2015). Both the Facebook narrative (giving all individuals the ability to share and connect) and the Google narrative (making all the world’s information accessible) seem deeply rooted in a belief system of doing good for society. Moreover, there is a strong narrative around human rights threats posed by either repressive or controlling (regulating) governments. “Our main focus is to minimize harms from governments” (#7, Google 2015). In terms of repressive governments, the respondents would refer to countries such as China, Russia, Iran, Turkey, and Egypt that are known for Internet censorship (Clark et al. 2017), whereas controlling governments were understood as those European countries seen as paternalistic with regard to data protection regulation.
To validate the interpretative frame I had drawn from the interviews, I would refer to it in follow-up questions or in the last part of the interview. I would ask, for example, if it was correct to note that governments are perceived as the primary threat to the protection of human rights online, and I would continue by asking about the institutional mechanisms established to counter governmental overreach. I fine-tuned my interpretation on the basis of the responses and added nuance to the description of the corporate storytelling. For example, my respondents all emphasized privacy as an important norm; however, I realized that the respondents referred to a very specific understanding of user privacy. I therefore went through several questions asking staff more explicitly what privacy protection at Google and Facebook entailed, and I found that it refers solely to user empowerment and user control while using the products and not to data minimization, which is core to data protection regulation and thus to online privacy protection in Europe. Whereas reference to data minimization would challenge the underlying logic of the online business model based on data maximization, the framing of privacy as user control may be accommodated through various privacy settings, as well as improved user information. In short, the extensive data collection that the data-driven business model relies on is an inherent element in the interpretative frame in both organizations. The business model functions as a taken-for-granted context and is seen as an integral and essential component in providing a “free” service (Jørgensen 2017).
The third and final theme relates to the data analysis, specifically how to identify and describe the core human rights narratives and their reflection at the technical, legal, and governance levels. For theme analysis, I used a combination of meaning condensation and narrative structuring (Thagaard 2004, 158–163). I searched for theme-related points in the interview notes and tried to identify persistent explanations across the data collected.
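For readers who keep their interview notes in digital form, the mechanics of this kind of meaning condensation can be sketched in a few lines of code. The example below is purely illustrative and not the procedure used in the project: it assumes notes are stored as short text excerpts tagged with a respondent identifier, and it uses a hypothetical keyword list for each theme to group excerpts and count how many respondents touch on a given theme.

```python
# Illustrative sketch only: a minimal way to condense interview notes by theme
# and check which explanations recur across respondents. The themes, keywords,
# and excerpts below are hypothetical examples, not project data.

from collections import defaultdict

# Hypothetical interview notes: (respondent id, excerpt from typed notes)
notes = [
    ("#3 Google 2015", "What we do will help make the world a better place."),
    ("#4 Facebook 2015", "If more people are connected, the world will be more successful."),
    ("#7 Google 2015", "Our main focus is to minimize harms from governments."),
]

# Hypothetical keyword lists standing in for the researcher's thematic codes.
themes = {
    "technology as social good": ["better place", "connected", "successful"],
    "governments as primary threat": ["government", "censorship", "harms"],
}

def condense_by_theme(notes, themes):
    """Group note excerpts under each theme whose keywords they mention."""
    grouped = defaultdict(list)
    for respondent, excerpt in notes:
        text = excerpt.lower()
        for theme, keywords in themes.items():
            if any(keyword in text for keyword in keywords):
                grouped[theme].append((respondent, excerpt))
    return grouped

if __name__ == "__main__":
    grouped = condense_by_theme(notes, themes)
    for theme, excerpts in grouped.items():
        respondents = {respondent for respondent, _ in excerpts}
        print(f"{theme}: mentioned by {len(respondents)} respondent(s)")
        for respondent, excerpt in excerpts:
            print(f"  {respondent}: {excerpt}")
```

In practice, the thematic codes in the project were developed and applied interpretively rather than by keyword matching; the sketch only illustrates how grouping excerpts by theme and checking their spread across respondents supports the search for persistent explanations.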
The analysis revealed that public policy staff at Facebook and Google are aware of, and refer to, the international framework of human rights law and practice. When staff were elaborating on the role of the company vis-à-vis human rights, there was a very persistent narrative around state-caused threats to the free flow of information, invasive behavior from repressive states, skepticism toward state regulation of company practices (e.g., detailed data protection regulation), and a strong belief in self-regulatory measures. Although there are certainly cultural differences between the two companies, they both seem guided by a strong belief in the power of technology and in finding technical solutions to complex societal problems such as power inequality and uneven access to information. Generally, the interviewees shared a belief that their products contribute positively to social and economic development and thus to the enjoyment of human rights for everyone.
When I tried to detect in more detail the nature of the human rights commitment expressed by staff at both companies, several subthemes emerged. First, there was a strong emphasis on the state-centric nature of human rights violations, focusing almost entirely on user harms caused by states. While it is true that human rights law holds no direct obligations for private actors, this does not imply that private actors do not cause human rights harm. On the contrary, there is an increasing recognition of the potential negative effect on human rights from private actors and a call on companies to mitigate it (Knox 2012). In fact, several soft law standards provide guidance to companies on their responsibility to respect human rights.2 When questioned about their human rights responsibility, some of the respondents referred to the UN Guiding Principles on Business and Human Rights, which represent the most recent soft law standard in this domain (UN Human Rights Council 2011). The guiding principles reaffirm states’ obligation to ensure that businesses under their jurisdiction respect human rights but also outline the human rights responsibility of business, independent of how states implement their human rights obligations. Anchored in the UN “Protect, Respect, and Remedy” framework, the principles establish a widely accepted vocabulary for understanding the respective human rights roles and responsibilities of states and business. None of the respondents, however, mentioned that the UN Guiding Principles assert a global responsibility for businesses to avoid causing or contributing to adverse human rights impacts through their own activities and to address such impacts whenever they occur.
Second, the state-centric narrative was informed by several cases in which states had tried to enlist technology companies in censorship, shutdowns, or surveillance. This is not surprising, as the ability of states to compel action by technology companies has been the source of some of the most widespread and severe human rights violations facilitated by the technology sector (Sullivan 2016). In response, several of the major Internet companies cofounded the Global Network Initiative (GNI) in 2008. In the interviews, the GNI was mentioned time and again as a platform for multistakeholder engagement and as a sector-specific example of developing standards and guidance on how to respond to—and mitigate—overreach by states. The benchmarks and guidance produced by the GNI confirm the state-centric framing and thus reflect a narrative in which governments make more or less legitimate requests and companies respond and push back against such requests. With the increasing power that technology giants such as Google and Facebook hold in the online ecosystem and the standard setting provided by the UN Guiding Principles, this take on the human rights responsibility of companies is far too narrow. Arguably, there is a rising expectation from scholars and civil society organizations that the companies not only safeguard their users from overreach by states but also assess and mitigate negative human rights impacts caused by their own business practices.
Third, safeguarding user content from state overreach was spoken of as a freedom of expression issue; however, the state-centric narrative effectively meant that the companies’ enforcement of self-defined boundaries for allowed content ranging from clearly illegal to distasteful—and anything in between—was not identified as a human rights issue (Jørgensen 2018). Whereas state requests for takedowns rested on some level of transparency, terms of service enforcement appeared largely invisible and guarded from any kind of public scrutiny. Likewise, for privacy, employees at both Facebook and Google emphasized that they pay great attention to privacy and apply due diligence standards when pushing back against government requests for user data. As an example of the commitment to privacy, both companies mentioned that an extensive system is in place to detect and react to privacy problems whenever a new product or product revision is introduced. In contrast, the underlying collection and use of personal data is not framed as a privacy issue. Privacy is repeatedly addressed as user control within the platform and not as limits to data collection per se. The idea of personal data as the economic asset of the Internet economy is presented as a given, and there is no recognition that the business model may, at a more fundamental level, preclude individual control. Thus, both companies praise the Internet’s emancipatory potential and their role in facilitating this potential worldwide while not acknowledging that the infrastructure and services they control are key to people’s ability (or inability) to exercise their rights. The strong belief in the liberating power of technology echoes the US online freedom narrative (Morozov 2011), which largely focuses on threats to the free and open Internet from repressive governments. It pays limited attention to the fact that rights such as freedom of expression, freedom of information, freedom of association, and so forth are being exercised on private platforms—and thus that the boundaries and modalities for these rights and freedoms are set by companies and fall outside the direct reach of human rights law. Moreover, the fact that users’ social interactions on the platforms are directly linked to revenue reinforces the imbalance between these companies and their users. Since I finalized the research, these issues have only become more prominent on the public agenda, and data scandals such as that of Cambridge Analytica have prompted policy makers in Europe and the United States to discuss options for regulating the business practices of technology giants such as Google and Facebook. In 2018, the UN special rapporteur on freedom of expression dedicated an entire report to examining how states’ and social media companies’ regulation of user-generated content may affect freedom of expression (Kaye 2018). As part of the report, the rapporteur proposes a framework for content regulation based on human rights law.
In conclusion, I will examine how the human rights framework relates to and informs the broader field of Internet governance research.
First, the human rights framework is increasingly used by the companies that govern Internet infrastructure and services, not only major Internet platforms but also companies and corporations in the domain name system—as pointed to by Braman in chapter 2 and Musiani in chapter 4. Gatherings of the Internet Corporation for Assigned Names and Numbers (ICANN), for example, have had several sessions related to how ICANN understands and operationalizes its human rights commitment—at ICANN, registry, and registrar levels.3 The increasing attention to the human rights framework in the technology sector is also demonstrated at the annual RightsCon summit,4 which brings together companies, policy makers, and civil society groups from around the globe to discuss the intersection between technology use and regulation and the human rights implications of these practices.
Second, the increasing uptake of the human rights discourse in the technology sector contributes to defining public expectations and thus has policy implications. The narratives at Facebook and Google situate human rights issues firmly within a free market discourse in which companies provide products to users and are allied in protecting their users against repressive governments. This framing reiterates the power of the free market and users’ ability to choose between different products while not recognizing the negative impact that these platforms may have on users’ rights. Essentially, it makes a difference whether Facebook is framed as an important part of the public sphere and public debate, with attached obligations, or whether it is seen as merely one product among many competing products in a free market. Likewise for Google: does it provide a social infrastructure whose governance has a direct impact on its users’ ability to exercise rights and freedoms, or is it merely one among many search engines that users may or may not choose to use? The competing framings are important in understanding the conflicting expectations and their implications for the protection of rights within these spaces. While it is important that the technology sector address its human rights responsibility, its discourse and practice need to acknowledge that human rights impact assessment, mitigation, and due diligence must include all business processes and not only those involving state intervention.5 The attention to the information and communication technologies sector by the UN special rapporteur on freedom of expression, David Kaye, is an important step in this direction (Kaye 2016). In short, we should be attentive to the risk of private companies capturing the human rights discourse without truly acknowledging that they themselves are powerful and potentially rights-violating actors in the ecosystem of human rights protection.
Third, the corporate uptake of the human rights discourse may divert attention away from the legal obligations of states to ensure that the rights of users are protected in the realm of private actors. In other words, states breach their human rights obligations both when abuse can be attributed to them and when they fail to take appropriate steps to prevent, investigate, punish, and redress private actors’ abuse (UN Human Rights Council 2011). It is important to keep in mind that human rights law sets out to protect the individual from abuses of power, and thus to address a power imbalance; there will therefore always be conflicting interests involved in human rights protection. A discourse that fails to acknowledge the conflict between, for example, users’ right to privacy and a business model based on harvesting user data has not seriously dealt with the human rights impact of company practices. The same holds for a discourse that fails to recognize the potential conflict between standards for freedom of expression and terms of service enforcement, which effectively sets the boundaries for allowed speech for billions of users.
Finally, while there is a need to critically engage with private actors on their human rights impact and responsibility, one must be realistic about the enormous economic interests vested in a specific framing and thus about what can be achieved in terms of dialogue and voluntary commitment. In addition, it is important to recognize that while there are essentially different discourses on these topics inside and outside the companies, there is increasing cooperation with, and funding from, technology companies in the areas of advocacy, policy, and research. In this increasingly blurred landscape, it is important to distinguish and critically address both the legal obligations of states and the soft law responsibility of companies to assess and mitigate all business practices that may have a negative impact on human rights. As stressed by DeNardis in chapter 1, Internet governance covers “complex and multivariable points of control” that can be used to advance fundamental rights and freedoms or harm them. With this in mind, the blurred landscape of multistakeholderism and public-private cooperation that surrounds the Internet governance domain poses specific challenges for conducting Internet governance research. It is thus crucially important that the researcher can interrogate, navigate, and study her field while retaining full scholarly integrity. While this is a well-known challenge in many fields of research, it calls for heightened critical awareness in a field where the stakes are so high in terms of both economic and societal interests.
1. The Zuckerberg archive contains all public talks by Mark Zuckerberg in the period 2004 to 2019. Available at https://www.zuckerbergfiles.org/.
2. See OECD Guidelines for Multinational Enterprises, available at http://mneguidelines.oecd.org/text/, and ILO Declaration on Fundamental Principles and Rights at Work, available at http://www.ilo.org/declaration/lang--en/index.htm.
3. See, for example, WS2—Enhancing ICANN Accountability, available at https://community.icann.org/display/WEIA/Human+Rights, and NCSG-ICANN and Human Rights, available at https://meetings.icann.org/en/marrakech55/schedule/mon-ncsg-human-rights.
4. See the RightsCon Costa Rica 2020 website, https://www.rightscon.org/.
5. See Götzmann 2019 for a comprehensive overview of the state-of-the-art of human rights impact assessment in the global business context, including sector-specific case studies.