3

SURVEYING JOURNALISTS AROUND THE WORLD

A Methodological Framework

Corinna Lauerer and Thomas Hanitzsch

The sheer scale of the Worlds of Journalism Study required a distinctive research approach. Planning, coordinating, and executing a study of this magnitude has proved to be demanding in many ways. While the project was not the first cross-national endeavor of its kind, its scale and complexity have called for innovative and sometimes unconventional answers to the challenges of research coordination and methodology. As a result, in addition to shedding new light on the global diversity of journalistic cultures, this work has broken new ground on methodological issues in comparative journalism research.

Since its beginnings in 2006, the WJS has grown from a relatively modest pilot project spanning seven countries into a massive multinational endeavor, covering more than 27,500 journalists in sixty-seven countries around the globe. In the early years the study’s leaders had not anticipated such growth; there was no master plan for the project’s expansion and no central source of funding to cover the expenses of researchers in all countries. Fundamentally, the study builds on the credo Robert L. Stevenson (1996) so famously referred to as “Give a little, get a lot.” We interpreted this motto simply: every participating researcher who contributes a sufficiently reliable set of data from interviews with journalists to the WJS gets access to the combined comparative data set that includes all sixty-seven countries.

Researchers from all world regions were welcome to participate on the condition that they adhered to the study’s key premises and methodological framework. The framework covered definitions, field procedures, and measures, and was developed collaboratively through extensive deliberation, including four workshop gatherings. The study’s leaders converted the essence of these discussions into detailed field instructions, research protocols, and memos answering frequently asked questions. Despite these efforts, the project leaders had to deal with several unanticipated issues that arose during field research and data analysis. We managed to fix most, but not all, of the issues we encountered.

The main purpose of this chapter is to introduce the methodological framework of the study. We focus on project coordination, the sampling process, data collection, data management, questionnaire development, and index construction. While we worked hard to meet the highest standards of comparative research, the WJS, like most other studies, is not without imperfections. Considering the complexity of this large-scale multinational endeavor, methodological problems were bound to arise—more in some areas, less in others. As a service to future comparative researchers, and for reasons of scientific transparency, in this chapter we also discuss the major challenges we experienced along the way and document the limitations of the study.

Coordination

The first phase of the WJS (2006–2011) included twenty-one countries and resulted in several peer-reviewed international journal publications. Its success led to the idea of a second phase. After the initial planning for this wave in early 2010, the number of participating countries, and thus the extent of managerial complexity, quickly reached a level that necessitated greater sophistication in the project’s coordination and management. One of the first actions of the WJS network was to implement an additional layer of regional coordination to support the central project management. Countries were pragmatically grouped into seven regions, each to be overseen by a regional coordinator: (sub-Saharan) Africa, Asia, Oceania, Central and Eastern Europe, Latin America and the Caribbean, the Middle East and North Africa, and Western Europe and North America. Together the seven regional coordinators would form the WJS Executive Committee, which closely collaborates with the WJS Center based in Munich. As a matter of democratic principle, regional coordinators are elected in a general assembly of national principal investigators (PIs). In the elections, each country has one vote.

As the study’s second wave unfolded, communication and deliberation processes were professionalized. For instance, a mailing list, a project website, and social media accounts were created, and detailed field instructions and methodological documentation were formulated. These measures, along with centralized data management and regular meetings, helped to keep methodological standards high. Several problems remained, however. Researchers collaborating in the WJS typically come from various theoretical, methodological, and cultural backgrounds and tend to have different understandings of quantitative research and of the division of labor. Furthermore, communication in such a multinational group of scholars is not always easy. New communication technologies effortlessly bridge vast geographic distances, but communication habits still differ substantially from country to country. Regular meetings—held in Thessaloniki (2014), Munich (2015, 2016), Cardiff (2017), and Prague (2018)—were instrumental in overcoming many of these challenges. They provided opportunities for discussing conceptual, methodological, and organizational matters and created a space for personal encounters, building a scholarly community and nurturing a commitment to the study.

Global Representation

Comparative researchers typically distinguish between two types of strategies for the selection of countries. The Most Similar Systems Design aims at comparing very similar societies that primarily differ in the outcome variable, while the Most Different Systems Design seeks to maximize cultural variability across the selected countries (Przeworski and Teune 1970). During the first wave of the WJS, the project compared relatively few societies that were selected systematically, deliberately using the Most Different Systems Design (Hanitzsch et al. 2011). In its second phase, the project replaced systematic selection with the principle of inclusivity and thus opened the door to researchers worldwide. While this new strategy allowed for comparisons across different geographic, political, economic, and linguistic regions, it also led to a certain degree of regional bias because inclusion of countries depended on the interest and commitment demonstrated by the national teams of researchers in the various geographic locations, as well as research infrastructure.

Owing to the global political economy of academic research, scholars from developed democracies in the West were more likely to join the project than were their colleagues from the Global South. For the most part, Western countries have a well-developed academic infrastructure with funding available to cover field expenses. Typically this privilege contributes to a strong overrepresentation of Western countries in comparative studies, which can have substantial implications for the interpretation of findings. In political science, for instance, Linda Hantrais (1999) noted that similarities or differences revealed by international comparison could result, to a substantial extent, from the choice of countries. Barbara Geddes (2003) convincingly demonstrates how case selection can affect, or even render unreliable, outcomes of a comparative study. Hence the selection of countries is a crucial step in any cross-national inquiry into journalism, as noted elsewhere (Chang et al. 2001; Hanitzsch 2009).

We dealt with the tradeoff between inclusivity and systematic country selection by actively seeking and supporting collaboration with researchers in parts of the world underrepresented in comparative studies. Much effort, for instance, went into recruiting researchers in sub-Saharan Africa, with the result that we were able to include eight countries from that region. In many cases, funding provided by Western European partners allowed us to secure collaboration with scholars from underresearched countries. Universities in Norway, for instance, covered expenses of field research in Ethiopia and Kosovo. LMU Munich, the WJS Center’s host institution, provided partial funding to more than twenty countries through a formalized procedure, in which the WJS Executive Committee made decisions on financial support based on a review of funding applications submitted by national teams. In this process, we particularly prioritized regions that have to date received scarce attention from journalism researchers as a way to counterbalance the disproportional representation of European and North American societies in our sample.

When the most recent WJS wave was completed in early 2017, a total of sixty-seven countries were represented (see table 3.1). Together these societies cover almost three-quarters of the world’s population. Still, European and North American countries have considerably more weight in the project than do other regions. Despite all our efforts to reach out to researchers worldwide, North, Central, and West Africa are poorly represented in the study, as are Central Asia and parts of Latin America.

Selection of News Media and Journalists

The selection of news media and journalists as specified in our Field Manual consisted of three steps: (1) estimating the population of journalists, (2) selecting news media organizations, and (3) choosing journalists within those organizations.1 As the amount of relevant information for making these choices and the access to this information varied among the sixty-seven countries, researchers needed to find creative solutions to the problem of identifying news media organizations and journalists for the sample. Whichever strategy they used, the key principle was that national samples of journalists provide a reasonable representation of the respective national populations.

The WJS focused on studying professional journalists, broadly defined as those who have at least some editorial responsibility for the content they produce (Weaver and Wilhoit 1986). To qualify as a “journalist,” individuals had to earn at least 50 percent of their income from paid labor for news media. In addition, they had to either be involved in producing or editing journalistic content or be in an editorial supervision and coordination position. Press photographers, for instance, generally counted as journalists, while TV camera operators counted only if they had the necessary freedom to make editorial decisions (i.e., when they did not merely follow orders from the producer). Freelancers who met the criteria were included. News media organizations that had their own news programs or news sections for which they produced original content were included as sites for the selection of journalists, as were news agencies, but radio stations that aired only music programs, for example, were not included. While this selection strategy was inclusive enough to provide broad representation of journalists, it excluded a range of individuals because they did not qualify as “professional” journalists according to our definition, even though many of them engage in activities that reproduce the discursive techniques of journalists. This group included, among others, citizen journalists and bloggers, individuals who worked in journalism as a side job, and those who were recently laid off by their news organizations.
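
Stated as a screening rule, this definition can be approximated in a few lines. The sketch below is purely illustrative: the variable names (income share, editorial roles) are hypothetical and do not reproduce the WJS instrument.

```python
def qualifies_as_journalist(income_share_from_news: float,
                            produces_or_edits_content: bool,
                            supervises_editorial_work: bool) -> bool:
    """Approximate screening rule implied by the WJS definition (hypothetical inputs):
    at least half of one's income from paid news work, plus either content
    production/editing or an editorial supervision and coordination role."""
    earns_majority_from_news = income_share_from_news >= 0.5
    has_editorial_role = produces_or_edits_content or supervises_editorial_work
    return earns_majority_from_news and has_editorial_role

# A freelancer earning 60 percent of their income from news work who edits copy qualifies;
# a camera operator with no editorial discretion and a 30 percent income share does not.
print(qualifies_as_journalist(0.6, True, False))   # True
print(qualifies_as_journalist(0.3, False, False))  # False
```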

Usually the first step in the sampling process consisted of gathering data about national media systems and populations of journalists. In some countries this information was easy to compile, as there were national lists and directories of news media readily available to the public. Often, however, this information was outdated or incomplete, requiring researchers to combine several sources of data, including media directories, official or government information, data from industry associations and journalists’ unions, as well as results from previous studies. In countries where the information was difficult to obtain, principal investigators provided an educated estimate of the population. These estimates were based on the national researchers’ experience in the field and on leveraging the limited data available, for example, by extrapolating the population from those news media for which numbers of journalists were known. In some cases even this turned out to be challenging because the dynamism of the media sector created considerable uncertainty, with media outlets closing, new ones opening, and news organizations relocating.
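
Where only partial figures were available, the kind of extrapolation described above can be illustrated with a simple calculation. The numbers and channel categories below are invented; the sketch merely shows the logic of scaling known newsroom sizes up to the national level.

```python
# Invented example: estimate the national population of journalists by extrapolating
# from the outlets whose newsroom sizes are known, channel by channel.
known_newsroom_sizes = {
    "newspapers": [45, 60, 30],        # journalists in outlets with known figures
    "radio": [8, 12],
    "online": [15, 10, 20, 5],
}
total_outlets_per_channel = {"newspapers": 40, "radio": 120, "online": 60}

estimated_population = 0
for channel, sizes in known_newsroom_sizes.items():
    average_size = sum(sizes) / len(sizes)                    # mean newsroom size in this channel
    estimated_population += average_size * total_outlets_per_channel[channel]

print(round(estimated_population))  # a rough national estimate (3750 in this invented case)
```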

The second step was to draw a systematic sample of news organizations. All research teams were encouraged to construct national samples of news media that reflect the structure of their country’s media system. A common way of achieving this was by using a quota scheme that specified the composition of media outlets and news organizations in the country with respect to indicators such as media channel (newspapers, magazines, TV, radio, online, and agencies), content orientation (e.g., “quality”/broadsheet vs. popular/tabloid), audience reach (national, regional, and local), and primary ownership (public, private, or state-owned). Within these categories, investigators ideally chose news media organizations randomly or systematically.
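
In its simplest form, such a quota scheme allocates a target number of organizations to each category in proportion to the structure of the media system. The shares and target below are invented and show the allocation logic for the channel dimension only; in practice the quota combined several indicators.

```python
# Invented shares of media channels in a national media system and an invented target.
channel_shares = {"newspapers": 0.35, "TV": 0.20, "radio": 0.25, "online": 0.15, "agencies": 0.05}
target_organizations = 60

# Allocate organizations to each channel in proportion to its share of the media system.
quota = {channel: round(share * target_organizations)
         for channel, share in channel_shares.items()}
print(quota)  # {'newspapers': 21, 'TV': 12, 'radio': 15, 'online': 9, 'agencies': 3}
```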

The third and final step consisted of selecting journalists from within the sample of news media organizations and content providers. Wherever possible, researchers chose journalists randomly or systematically from newsrooms, so that every journalist in a chosen news organization had an equal or at least nonzero chance of being selected for the survey. Furthermore, according to the instructions detailed in the Field Manual, PIs were required to consider journalists from all editorial ranks and all kinds of news beats (including politics, culture, sports, lifestyle, and other news topics) for the sample.

As part of this strategy, we also applied a differential quota in the sampling process so that news organizations with smaller or larger numbers of journalists would not be disproportionately represented. Researchers were instructed to select a greater number of journalists from larger organizations and proportionally fewer from smaller news media. What constituted a “large” or “small” organization ultimately depended on the national context. As a guiding principle, we advised researchers to select three or fewer journalists from smaller news media and five from larger organizations. This approach was employed in more than two-thirds of the participating countries.
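
Within a selected organization, a systematic draw with this differential cap might look like the following sketch. The staff list, the cutoff separating “small” from “large” newsrooms, and the caps of three and five are illustrative parameters; the WJS left the definition of newsroom size to the national context.

```python
import random

def select_journalists(staff_list, large_threshold=20, cap_small=3, cap_large=5):
    """Systematic within-newsroom selection with a differential cap:
    up to three journalists from smaller outlets, up to five from larger ones.
    The threshold of 20 staff is an invented example."""
    cap = cap_large if len(staff_list) >= large_threshold else cap_small
    cap = min(cap, len(staff_list))
    step = len(staff_list) / cap
    start = random.uniform(0, step)                    # random start, then every k-th name
    return [staff_list[int(start + i * step)] for i in range(cap)]

newsroom = [f"journalist_{i}" for i in range(1, 31)]   # a hypothetical newsroom of 30 staff
print(select_journalists(newsroom))                    # five systematically spaced selections
```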

Problems and difficulties, however, were the norm rather than the exception. Limited access to both data and journalists, as well as fast-changing conditions on the ground, led to less than ideal research settings in many countries. Several research teams had to rely on snowball sampling as a last resort (at least for a certain subgroup of respondents). In Cyprus, for instance, access to journalists was impossible in a number of news media organizations; hence respondents were approached through personal recommendations from other journalists. Freelancers were hard to identify in most of the countries. The Finnish team, for instance, used snowball sampling for freelance journalists to achieve a sufficient number of interviews in this group. In other countries, such as Sudan, an omnipresent atmosphere of physical violence, harassment, and surveillance complicated researchers’ efforts to reach out to journalists and win their cooperation for the survey. As a quality control measure, we thus asked all researchers to document problems in the sampling process. In addition, they were required to compare the composition of their samples (using some basic sociodemographic measures as a reference point) with the respective national populations of journalists wherever such comparable data were available.

Given the substantial variation in national conditions, we needed to exercise a certain degree of flexibility in the application of the methodological framework. Several key aspects of the sampling process, however, were not open to negotiation. The minimal sample size required for each country was calculated based on a maximum margin of error of 5 percent (at a 95 percent confidence level). Samples that met this threshold were treated as sufficiently reliable representations of the respective country’s population. The vast majority of national teams of researchers were able to comply with this condition (see table 3.1). Only five countries failed to meet this criterion; we nevertheless included them in our analyses (with necessary caution) but tagged them as “nonrepresentative” to flag the deficiency. These countries were Bulgaria, France, Singapore, South Korea, and Turkey. It needs to be noted, however, that the margin-of-error criterion privileges countries with larger populations of journalists. In Bhutan, a country with only 114 journalists at the time of the survey, researchers had to interview almost every reporter to meet the required sample size.
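
The sample-size criterion follows from the standard formula for estimating a proportion at maximum variance; for small populations of journalists a finite population correction applies, which is what drives the Bhutan figure. The sketch below reproduces this textbook calculation (the WJS field documentation itself is not quoted here).

```python
import math

def required_sample_size(population: int, margin_of_error: float = 0.05,
                         z: float = 1.96, p: float = 0.5) -> int:
    """Minimum sample size for a proportion at a given margin of error (95% confidence),
    with a finite population correction (standard textbook formula)."""
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2   # requirement for a very large population (~384)
    return math.ceil(n0 / (1 + (n0 - 1) / population))   # corrected for a finite population

print(required_sample_size(50_000))  # large population of journalists: about 382 interviews
print(required_sample_size(114))     # Bhutan: 89 of 114 journalists, i.e., almost everyone
```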

The overall sampling strategy detailed here, and particularly the international variability in the application of the common methodological framework, inevitably created variance in sample sizes across countries. We accounted for these differences by giving each country exactly the same weight in all cross-national comparisons reported in this book.
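
In practice, equal country weighting amounts to rescaling case weights so that every national sample contributes the same total weight to pooled statistics. The sketch below uses invented data and hypothetical column names to show the effect.

```python
import pandas as pd

# Invented pooled data set with one row per interviewed journalist.
df = pd.DataFrame({
    "country": ["A"] * 400 + ["B"] * 1200,
    "autonomy": [3] * 400 + [4] * 1200,
})

# Rescale weights so that each country's sample carries the same total weight.
own_sample_size = df["country"].map(df["country"].value_counts())
df["weight"] = df["country"].value_counts().mean() / own_sample_size

weighted_mean = (df["autonomy"] * df["weight"]).sum() / df["weight"].sum()
print(weighted_mean)  # 3.5: both countries count equally despite unequal sample sizes
```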

Data Collection and Fieldwork

National PIs were responsible for fieldwork in their respective countries. Data collection took place between 2012 and 2015, though in four countries (Brazil, Canada, China, and Thailand) field research extended into 2016, and in Bulgaria even into January 2017. Albania was the first country to deliver data, in July 2012, closely followed by Iceland and Egypt. Owing to its late entry into the study, Bulgaria was the final country to submit data. Across the whole sample, most interviews were conducted in 2014 and 2015 (see table 3.1).

The time span between the start and end date of data collection varied among countries too, broadly depending on the interview method and local circumstances. In almost half the countries, researchers completed data collection within roughly half a year or less; it took longer than a year in about a third of the countries. Data collection was fastest in the United Kingdom, where researchers finished the whole survey in less than one month. In Mexico, by way of contrast, the national team needed twenty-seven months to complete the study.

Since the required number of interviews was relatively large, data collection in many countries turned out to be demanding and time-consuming. Journalists were often reluctant to participate in surveys, as the low response rates in various countries indicate. Several media organizations in various parts of the world had a general policy of not cooperating with journalism researchers. Moreover, several teams had to go back into the field after the WJS Center found their data sets to be of insufficient quantity or quality. In one country, for instance, the number of interviewed journalists fell short of the required sample size after the WJS Center discovered that several respondents did not meet the study’s definition of a journalist. In other countries, researchers had to deal with unforeseeable events. In Sierra Leone, for instance, data collection was interrupted due to the Ebola outbreak in 2014. Field research resumed about one year later after Sierra Leone was declared Ebola-free.

Table 3.1  Overview of samples and data collection

Source: Based on information provided by the national research teams.

Notes: a Face-to-face. b Belgium: Flanders/Wallonia. c Margin of error (MOE) = 5.9%. d Data collection for print media in 2012–2013; for broadcast media, in 2015–2016. e MOE = 6.5%. f MOE = 9.6%. g MOE = 5.2%. h MOE = 10.0%.

The contextual realities in participating countries required that the general methodological framework allow for several methods of data collection. Effects of different survey modes are well documented in the literature (e.g., Hantrais and Mangen 2007; Stoop et al. 2010), and methodological equivalence in this respect is generally regarded as highly desirable in comparative journalism research (Hanitzsch 2008). However, several practical limitations prevented us from achieving this goal. On the one hand, conducting face-to-face interviews in all countries was simply infeasible because of limited funding. Online interviews, on the other hand, were not suited to countries with restricted internet access or where journalists did not routinely use e-mail communication. Personal interviews seemed to be a better choice under specific local circumstances, such as the omnipresent surveillance of journalists by government authorities or cultural norms that privilege oral communication over other forms of communication. In Egypt and Ethiopia, for instance, journalists would generally feel uncomfortable responding to online or telephone surveys as face-to-face encounters are the preferred form of interaction. In Scandinavian countries (and in several other nations, too), by way of contrast, online surveys are widely accepted by and routinely conducted among journalists.

As a response to the political, cultural, economic, and technological peculiarities in the countries participating in the study, national teams were free to choose the interview method, or a combination of interview modes, that best suited local conditions. The interview mode was recorded as a separate variable in the data set. Overall, investigators in twenty-six countries relied on one interview mode exclusively (table 3.1). In the remaining countries, teams combined different interview modes for a variety of reasons. The German team, for instance, started with an online survey but quickly switched to telephone interviews owing to the discouraging response rate from the online survey.

Response rates also varied considerably among countries (table 3.1). More than half the research teams managed to achieve response rates of higher than 50 percent. In about one-fifth of the countries, response rates were lower than 30 percent. In Italy, the Netherlands, and the U.K., where researchers collected data through large online surveys, response rates were extremely low, at 10 percent or less.

Data Inspection and Management

One of the most challenging aspects of the study was the inspection, filing, and management of data sets from sixty-seven countries. The WJS Center coordinated this process as a service to the entire research network.

Data screening and cleaning, which also included some preliminary data analyses, were implemented with rigor. Our data integrity check included several steps. Typically we first checked data sets for invalid or missing entries, typographical errors, incorrect use of scales, errors in filter variables, and missing information for mandatory measures. In a second step, we screened data sets for inconsistent values across related variables and sometimes even across similar countries. We also searched for systematic response sets (e.g., acquiescence bias), duplicate cases, and even manipulated data. The following examples illustrate the variety and scope of challenges we dealt with as part of this process.
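
Routine parts of these checks lend themselves to simple scripted tests. The sketch below illustrates three of them with hypothetical variable names: values outside a five-point scale, missing answers on mandatory items, and duplicated interviews.

```python
import pandas as pd

def basic_integrity_report(df: pd.DataFrame, scale_items: list, mandatory: list) -> dict:
    """Illustrative integrity checks of the kind described above (hypothetical columns)."""
    report = {}
    for col in scale_items:
        # Count values that are present but fall outside the 1-5 range of the scale.
        report[f"out_of_range_{col}"] = int((~df[col].between(1, 5) & df[col].notna()).sum())
    # Respondents with any missing answer on mandatory items.
    report["missing_mandatory"] = int(df[mandatory].isna().any(axis=1).sum())
    # Exact duplicates: repeated IDs and identical answer patterns across scale items.
    report["duplicate_ids"] = int(df["respondent_id"].duplicated().sum())
    report["duplicate_answer_patterns"] = int(df[scale_items].duplicated().sum())
    return report

# Tiny invented example: one out-of-range value, one missing mandatory answer,
# and one pair of identical answer patterns.
data = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4],
    "infl_politicians": [5, 7, 3, 3],   # 7 lies outside the five-point scale
    "infl_government": [2, 4, 1, 1],
    "gender": [1, 2, None, 2],
})
print(basic_integrity_report(data, ["infl_politicians", "infl_government"], ["gender"]))
```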

In some instances, problems were easy to detect. In six countries, for example, work experience was measured through categories rather than in exact years. In two countries, a suspiciously high percentage of female journalists led to an examination that discovered incorrect numeric codes for gender. In many other cases, more sophisticated methods of data inspection were required to identify the cause of a problem. A preliminary comparative analysis of mean scores and tests of bivariate relationships, for example, revealed that journalists’ responses in one country showed a pattern that was diametrically opposite to results in other societies in the region. Through careful inspection of the data we established that the respective national team had reversed numeric scales for some questions, though not for all of them.

In a few other cases, we decided to reject whole data sets for a number of reasons. In one country, for instance, the data set was returned because the national research team had used four-point scales for a large number of questions instead of five-point scales as stipulated in the core questionnaire. That team finally decided to conduct the entire survey again one year after the initial study, this time using the proper scales.

We also rejected data sets in which we found duplicate cases. In three countries, we provided national research teams with the opportunity to investigate the duplicate interviews and replace them with new respondents to meet the required sample size. In other cases, PIs did not provide any plausible explanation as to how the duplications occurred, or they did not undertake any significant effort to resolve the problem. These data sets were eventually not included in the international data file.

The extent of investigative work that went into detecting and classifying duplicate data varied from country to country. In one country, duplicate respondents were easily identified because researchers apparently duplicated not only the journalists’ answers but also their supposedly unique ID numbers. In other instances, research teams appeared to have invested considerable effort and sophistication in covering their tracks, mostly by tweaking answers here and there. Severe data manipulation by local teams was considered a serious breach of the WJS Statutes and ultimately led to immediate termination of the membership of two national research teams.

Data integrity checks were time- and labor-intensive not least because the WJS Center had to communicate back and forth with sixty-seven national research teams whenever questions and issues arose. Many data sets were revised several times until they were deemed sufficiently reliable and ready for inclusion in the multinational data set. In a final step, the WJS Executive Committee and Statistical Advisory Board reviewed the consolidated version of the multinational data set before it was shared with all participating researchers. As the WJS community began to work with the comparative data, a few more issues were discovered and resolved, necessitating several updates of the international data set during the first months of data analysis.

For reasons of transparency, and to provide clarity to the whole research network, we codified matters related to data ownership, data sharing, and data usage in a Data Sharing Protocol, which the whole group adopted in July 2015.2 As per this agreement, and as a service to the academic community, key variables from the cross-national data set are being made available for secondary analysis on the project’s website in June 2019.3 The website also contains a summary of methodological documentation provided by all country teams.

Data and Measures

At the heart of the WJS methodological framework was a common core questionnaire that contained a range of both mandatory and optional questions.4 While research teams could add some of their own questions, they were not allowed to change the meaning of any question in the core questionnaire.

We developed the questionnaire collaboratively between 2010 and 2012 in a series of meetings before the group formally adopted it. The major challenge was to make enough room for relevant and timely questions while keeping the questionnaire short enough not to deter potential respondents from participating in the survey.

In the questionnaire, most mandatory questions pertained to key elements of the study that were in line with the framework presented in chapter 2, such as journalists’ perceived influences on news work, editorial autonomy, journalistic roles, professional ethics, and perceived changes in journalism. Most of the questions on journalists’ personal and professional backgrounds, conditions of journalistic labor, and relevant characteristics of the newsrooms and news organizations were mandatory, too. Several optional questions complemented the questionnaire, covering journalists’ trust in public institutions and further aspects of their personal backgrounds (such as income, ethnicity, and political orientation). Research teams were free to decide whether to include these questions in their respective countries.

We distinguished between mandatory and optional questions for both pragmatic and substantive reasons. On a pragmatic level, we wanted to keep the list of key questions as short as possible based on concerns that a very long questionnaire could result in a significant number of nonresponses. Furthermore, several questions about journalists’ personal backgrounds were considered potentially sensitive—in some countries more than others. Hence we did not force PIs to ask the question about political leanings in countries where journalists had limited political freedoms. Moreover, the way this question is commonly asked in international surveys—by inviting respondents to place themselves on a left-right continuum—does not necessarily provide meaningful results in every society. In many countries of the Islamic world, for example, political ideologies are not typically classified into “left” and “right.” A similar example is the question about journalists’ ethnic identification, which would cause confusion in societies where ethnicity is irrelevant or even a sensitive category. Finally, questions about journalists’ incomes and religious affiliations were also marked optional owing to their very private nature.

The master questionnaire was drafted in English first and subsequently translated into the relevant local languages. As a rule, functionally equivalent translation was given priority over literal translation. National teams were instructed to use creative means to retain the original meaning of the concept in the translated version. Strict literal translations can sometimes change the meaning of a question or answer categories. To reduce the possibility of translation errors, we kept the wording of the English master questionnaire simple and avoided using culture-specific idioms (e.g., “watchdog”) and normatively loaded language. National research teams were instructed to ensure sufficiently accurate translations by utilizing established techniques, such as translation-back-translation routines, or by working with multilingual experts (Van de Vijver and Leung 1997; Wirth and Kolb 2004).

Despite our best efforts to produce measures and question wordings applicable in a wide range of countries, we were still confronted with a number of issues that came up in the process of field research and during data checking. Several of these issues pertained to answer options given to journalists in the interview. Job denominations such as “reporter” or “desk editor” and the associated level of editorial responsibility, for instance, varied substantially across national contexts. In some journalistic cultures, editors have considerable editorial responsibility in the newsroom, while in other contexts, they have relatively little say on editorial matters. Another example is the categorization of media organizations according to the main type of ownership into “private,” “public,” and “state-owned.” The Norwegian Broadcasting Corporation (NRK), for instance, is usually classified as public service broadcasting. In a strictly formal sense, however, one would have to categorize it as state-owned because the media organization belongs to the state. However, putting the NRK into the same category as Chinese state television CCTV would obviously be an inaccurate description. Furthermore, for researchers in several other countries, public ownership could also mean that a media corporation was publicly traded on the stock market. This ambiguity eventually required us to recode the ownership variable for a variety of media organizations. We also added a new category for community media that was particularly relevant for the African and South American contexts.

In addition to compiling an integrated international database from interviews with journalists, the WJS Center gathered relevant data on the national context in the broader areas of politics and governance, socioeconomic development, and cultural value systems (see chapter 2). For politics and governance, we compiled information on press freedom, democratic development, transparency, and the rule of law from the websites of Freedom House, Reporters Without Borders, the Economist Intelligence Unit (EIU), Transparency International, and the World Bank. For socioeconomic development, we gathered information about gross national income, economic freedom, and human development from the World Bank, Heritage Foundation, and United Nations Development Programme (UNDP), respectively. In the area of cultural value systems, we compiled data on the prevalence of emancipative values and acceptance of power inequality from the World Values Survey and European Values Study, as well as from an online repository established by comparative social psychologist Geert Hofstede.5 All this information was gathered for the year during which each country’s survey was mainly conducted. This additional layer of societal-level data allowed us to properly contextualize our findings and identify the main factors behind cross-national differences across our sample of countries.
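
Technically, this contextualization amounts to attaching one row of societal-level indicators per country and survey year to the journalist-level records. The sketch below uses invented values and hypothetical column names to show the merge.

```python
import pandas as pd

# Journalist-level records (hypothetical), tagged with the main year of fieldwork.
journalists = pd.DataFrame({
    "country": ["Albania", "Albania", "Iceland"],
    "survey_year": [2012, 2012, 2012],
    "perceived_autonomy": [3.5, 4.0, 4.5],
})

# Societal-level indicators, one row per country and year (values invented for illustration).
context = pd.DataFrame({
    "country": ["Albania", "Iceland"],
    "survey_year": [2012, 2012],
    "press_freedom_score": [49.0, 14.0],
    "gni_per_capita": [4200, 42000],
})

merged = journalists.merge(context, on=["country", "survey_year"], how="left")
print(merged)
```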

Composite Indexes

Several chapters in this book draw conclusions based on a relatively wide range of indicators. Journalists’ perceptions of their roles, for instance, were measured through a complex array of eighteen statements. Discussing every single aspect of journalistic roles would clearly be a monumental task beyond the scope of this book. For reasons of parsimony, we constructed composite indexes for selected areas of analysis, including journalists’ perceived influences, editorial autonomy, journalistic roles, and journalists’ political trust. In constructing these indexes, which are outlined in more detail in the respective chapters, we deliberately followed a formative rather than a reflective logic, thus diverging from the standard approach used in most communication and media research.

In the standard approach, indexes are constructed in such a way that indicators are manifestations of the index, which represents a “latent” construct presumed to exist independently from research. Causality in this context means that variation in the latent construct “creates” variation in all indicators that belong to the same index. A reflective index assumes that indicators are strictly interchangeable and expects them to covary with each other; indicators should thus have the same or similar content, or they should share a common theme (Coltman et al. 2008; Jarvis, MacKenzie, and Podsakoff 2003). Furthermore, composite measures of this kind assume that relationships between the index and its corresponding indicators on the individual level of measurement can be replicated at the aggregate level (Welzel and Inglehart 2016). Thus the validity of indexes hinges solely on a construct’s internal consistency, and on configurations within cultures, which are given priority over configurations between cultures. However, as comparative researchers in psychology, the political sciences, and other disciplines increasingly argue, this logic does not hold true even for many psychological value constructs (Alexander, Inglehart, and Welzel 2012; Boyle 1991; Welzel 2013).

The formative approach to index construction, in contrast, does not assume that the indicators are all “caused” by a single underlying construct. Rather, it assumes that indicators are defining characteristics of the construct measured by the index. Causality in this context means that the construct is formed—or “caused”—by the indicators and does not exist independent of the researcher’s conceptualization. Hence the direction of causality flows from the indicators to the latent construct, and the indicators, as a group, jointly determine the conceptual and empirical meaning of the construct measured by the index. Indicators need not be interchangeable; they may be correlated, but the model does not assume or require this to be the case (Coltman et al. 2008; Jarvis, MacKenzie, and Podsakoff 2003).

Dropping the assumption that indicators covary consistently across cultures does not automatically diminish the validity of an index. As Welzel and Inglehart (2016) convincingly argue, the underlying combinatory logic, assuming compositional substitutability among indicators, may contribute to a better representation of differential realities in distinct cultures (see also Boyle 1991). This may be particularly true for the concepts analyzed in this book. We have indeed little reason to assume that theoretical and empirical relationships among indicators for the indexes we use are exactly the same in cultures that differ on a variety of variables.

Political influence is a case in point. This index includes four groups of actors: politicians, government officials, pressure groups, and business representatives (see chapter 5). In some countries, the parliament may be weak or may simply not exist and may therefore exert little or no influence on journalists. However, the government, which tends to have more power in political systems other than parliamentary democracies, may well compensate for this lack of influence. A logical consequence would be to expect the relationship between perceived government influence and influence of parliament to vary, perhaps considerably, across political systems. In such a situation, formative indexes may better represent the different realities in various cultures, particularly because they do not prioritize a certain—often Western—theoretical framework. A clear disadvantage of formative indexes, however, is that their reliability cannot be established through technical or statistical measures in a way similar to that used for establishing the reliability of reflective indexes, which are routinely checked for internal consistency. One way to overcome this deficiency is to examine how well an index relates to external or contextual measures (Bagozzi 1994; Welzel and Inglehart 2016). As our analyses in the following chapters demonstrate, our indexes perform reasonably well on this count.
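
To make the difference concrete, a formative index can be computed as a simple combination (here, the unweighted mean) of the indicator ratings without requiring them to intercorrelate, and then related to an external country-level measure. The data, the aggregation rule, and the external indicator in the sketch below are all invented for illustration.

```python
import pandas as pd

# Invented journalist-level ratings of perceived influence (1 = no influence, 5 = extremely influential).
df = pd.DataFrame({
    "country": ["A", "A", "B", "B", "C", "C"],
    "infl_politicians": [4, 5, 2, 1, 3, 3],
    "infl_government": [5, 4, 1, 2, 4, 3],
    "infl_pressure_groups": [3, 4, 2, 2, 3, 2],
    "infl_business": [2, 3, 3, 2, 4, 4],
})
indicators = ["infl_politicians", "infl_government", "infl_pressure_groups", "infl_business"]

# Formative logic: the indicators jointly define the construct; no internal-consistency test is assumed.
df["political_influence"] = df[indicators].mean(axis=1)

# External validation: relate the aggregated index to a contextual measure (invented scores)
# instead of relying on Cronbach's alpha.
country_index = df.groupby("country")["political_influence"].mean()
press_restrictions = pd.Series({"A": 70, "B": 15, "C": 45})   # higher = more restricted press (invented)
print(country_index.corr(press_restrictions))
```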

Into the Future

Steering the Worlds of Journalism Study is both an exciting and a challenging venture because of its unique conceptual, methodological, and organizational nature. It was, and remains, an instructive endeavor in which all of us continue to learn how to practice comparative research. Despite its sometimes daunting managerial complexity, the project generated a wealth of information about journalists from sixty-seven countries, putting the study at the forefront of comparative communication and media research (Blumler 2017). The experience gained and lessons learned in this most recent wave of the WJS continue to serve as a substantial source of motivation to further professionalize the project as we prepare for the third wave. Ultimately we hope to turn this project into a sustainable academic endeavor to study the state of journalism around the world. The idea is to keep selected key measures of journalistic cultures in the questionnaire, while leaving sufficient space for additional measures designed to follow up on current developments in journalism worldwide. This would allow for a long-term assessment of journalistic cultures on a global scale while shedding light on the political, cultural, and socioeconomic directions in which the institution of journalism is traveling.

Notes

1.  Methodological documentation, 2012–2016, Worlds of Journalism Study, http://www.worldsofjournalism.org/research/2012-2016-study/methodological-documentation/.

2.  Data Sharing Protocol, Worlds of Journalism Study, July 1, 2015, http://www.worldsofjournalism.org/fileadmin/Data_tables_documentation/Documentation/WJS_Data_Sharing_Protocol.pdf.

3.  Aggregated tables for key variables, 2012–2016, Worlds of Journalism Study, http://www.worldsofjournalism.org/research/2012-2016-study/data-and-key-tables/.

4.  Master questionnaire, 2012–2016, version 2.5.1, Worlds of Journalism Study, http://www.worldsofjournalism.org/fileadmin/Data_tables_documentation/Documentation/WJS_core_questionnaire_2.5.1_consolidated.pdf.

5.  Geert Hofstede, https://geerthofstede.com/research-and-vsm/dimension-data-matrix/.