ON RESEARCH THAT “MATTERS”
ERICA CHENOWETH
▶ FIELDWORK LOCATION: GLOBAL
Many political scientists think the questions we ask are important, and we want our research to matter.1 But my formal training as a political scientist gave me very little preparation to deal with two unexpected preoccupations over the past few years: (1) the professional and ethical responsibilities of engaging with people who were trying to apply my research findings to their current contexts; and (2) the politicization of my research findings. This essay is a cautionary tale about the risks of engaging extensively with public audiences about the practical implications of our research when these audiences may not understand the limitations of our work. It is also a story about how accepting U.S. government funding—particularly from defense and intelligence agencies—can make people skeptical about the researcher’s intentions and the reliability of the research findings. This is not a story about fieldwork specifically, but these lessons can apply to those conducting research using any method. I conclude by discussing my experience developing an informal ethical practice regarding engaged scholarship, and I offer some considerations for others who may wish to do the same.
First, a bit of background. While completing my doctoral research on terrorism and political violence, I developed a parallel interest in the strategic effects of nonviolent resistance. My colleague Maria Stephan and I published an article titled “Why Civil Resistance Works” in International Security in 2008, in which we argued that nonviolent campaigns with maximalist goals were twice as likely to succeed as violent ones between 1900 and 2006. In our book of the same title, we put forward a series of arguments explaining why this was the case, and we detailed the ways nonviolent resistance achieved what armed insurrection couldn’t in four cases in the Middle East and Southeast Asia.
When the book was published in the spring of 2011, our findings joined a much larger set of discussions about the value of people power movements and nonviolent action. These debates had garnered much renewed interest within the United States and internationally because of current events: the Arab Spring was in full swing, and Occupy Wall Street was about to take off in the United States. In the years that followed, I developed new research projects on the dynamics of nonviolent resistance, and I also gave talks, workshops, and lectures on the topic in over a dozen countries. I began to publish more regularly in newspapers, blogs, and public outlets to discuss extensions and applications of my research to current events. My visibility on these topics led people to reach out to me directly—often with heartrending accounts of their struggles—requesting public endorsements, advice, or support for organizing their campaigns. Although connecting my research with real-world problems has always been important to me, I had to learn on the fly how to navigate some unforeseen challenges that come along with engaged scholarship.
ENGAGED SCHOLARSHIP AND THE MORAL HAZARD PROBLEM
Political scientists—especially those of us working in policy-oriented schools—are regularly exhorted to conduct research with an impact. Institutional review boards (IRBs) instruct us in how to protect our subjects during the research process, but there is little to no institutional, administrative, or ethical oversight when it comes to sharing the implications of our scholarship with those in the real world. Thus we are unprepared when our findings are put to unanticipated, and sometimes harmful, uses.
For example, one of the findings I derived from the Nonviolent and Violent Campaigns and Outcomes (NAVCO) data set is that no dictatorship had defeated a mass uprising whose participation rate surpassed 3.5 percent of the population. I called this the “3.5 percent rule” in a TEDx talk that circulated widely on YouTube. Later, organizers and activists in many countries seized on the 3.5 percent rule alongside many other symbols and slogans to galvanize support.
Surprising and inspiring descriptive statistics can give audiences a false sense of cause and effect as well as predictive power. Often what gains traction—whether in a twelve-minute TEDx talk or in an op-ed—does so specifically because it is not weighed down with caveats and nuance. Putting the full transcript of my TEDx talk online, with links to references, did nothing to counteract this fact. In addition, I did not fully appreciate how public deference to perceived “experts” and “authorities” can lead people to embrace and apply research findings with unbridled confidence.
Unlike members of the healing professions, social scientists do not take a Hippocratic oath. Yet academics’ perceived authority on their subjects compels them to take special care not to overstate their confidence in their findings or to motivate people to take undue or misinformed risks. When engaging with such audiences, I came to appreciate the crucial practice of remaining modest about the claims we can make based on the empirical evidence, carefully distinguishing correlation from causation, identifying caveats and limitations of available data, and providing what information we can without emboldening people to take risks beyond those they would otherwise take.
This dovetails with the classic moral hazard problem that economists often face—the temptation to recommend policy interventions or programs that carry risks of failure while bearing none of those risks oneself.2 For instance, I try to follow a principle of not offering specific tactical or strategic advice to activists or movements in which I am not personally involved. This is because I don’t see myself as qualified to comment on foreign contexts and also because I would assume none of the risks and bear few of the burdens of any strategic errors.
POLITICAL SCIENCE IS POLITICAL
As research gains traction, it becomes increasingly likely—and potentially increasingly costly—for it to become politicized. Because my research on nonviolent resistance was seen to “matter,” other previously irrelevant aspects of my work were swept up into new narratives aimed at discrediting both the findings and my motives in disseminating them.
Since I was a doctoral student, I have been part of several teams funded by the U.S. Department of Homeland Security and the Department of Defense’s (DoD) Minerva Initiative to study counterterrorism, terrorism, and political violence. When I began my career, conversations about the potential risks or conflicts of interest with government research funding were only just beginning in my academic circles. In the mid-2000s, many security studies scholars viewed ethics conversations as largely applying to human subjects research rather than to secondary data collection. When discussions around ethics did occur, they typically focused on accuracy: the importance of high data quality, transparency, and replication. Some conversations about the potential political implications of federal funding were indeed happening,3 and I knew some scholars who would accept funds from the National Science Foundation but not from the DoD. However, these colleagues typically framed their choices as arising from their personal political leanings.
I, too, approached much of my work as a critic of U.S. foreign policy. But my thinking was that it was reasonable to spend taxpayer dollars conducting research on topics that mattered to the public, as long as I subjected my work to peer review and made the data and related findings publicly available. Researchers accepting federal funds exercised considerable independence and autonomy when it came to the research questions, designs, and interpretation of results, even when they arrived at conclusions that critiqued U.S. government foreign policy and security policy. As one of relatively few young female security scholars at the time, it felt important to have a seat at the table when opportunities arose to contribute to collaborative research projects and initiatives.
Unlike my terrorism research, the research behind the book Why Civil Resistance Works was funded exclusively by private foundations and university funds, and subsequent iterations of the NAVCO data set have never received any U.S. government funding.4 But once organizers and activists began citing the book’s findings in their struggles, the compartmentalization of funding sources for my different research projects became politically irrelevant. Regardless of my intentions and expectations, my associations with various U.S. government agencies—although wholly unrelated to the research supporting the book—became the subject of conspiracy theories. Outside the United States some saw proponents of nonviolent action, including me, as peddling a theory that reinforced American hegemonic ambitions by supporting “regime change” abroad. Such theories were particularly potent among those who sympathized with a narrative accusing the United States of supporting “soft coups” in authoritarian regimes by backing color revolutions. The theory was empirically tenuous but highly popular in Russia, Iran, Venezuela, and Turkey, as well as among some critics in the United States. As a consequence, my limited and indirect associations with the U.S. government were parlayed into conspiratorial narratives.
Even within the United States, skeptics on the left and right have regularly attacked proponents of nonviolent action. Critics on the left argue that advocates of nonviolent resistance benefit the U.S. government by promoting a “passive” population over a militant revolutionary one. Others suggest that the government has co-opted proponents of nonviolent action, who either help the U.S. government better suppress these movements or actively discourage oppressed populations from properly defending themselves against their oppressors, thereby “pacifying” them in the face of incredible injustice. On the right, critics suggested that my coauthor and I had manufactured our findings because of opposition to the Second Amendment of the U.S. Constitution. We argued that unarmed civilians had historically confronted and overthrown oppressors more successfully than armed insurrectionists, so our study threatened the narrative that Americans needed to remain well armed to deter and confront tyranny.
Regardless of what one makes of these different narratives, many people at home and abroad justifiably perceive U.S. defense, foreign policy, and security agencies—and some private foundations—as directly at odds with their own aspirations for justice, dignity, and rights. The fact that I sought and accepted funds from such groups to support various research projects in the past cannot be undone. But transparency necessitates full disclosure,5 and the political reality of guilt by association presented a major risk for activists approaching me with questions about how to use nonviolent resistance in active conflicts.
For example, on one occasion when giving a lecture in a semiauthoritarian country, a minder approached me during the reception and asked whether I was there because I intended to bring a color revolution to her country. This interaction did not make me fearful for my own safety, but it did make me anxious for the welfare of the many activists who attended the lecture. (One person in attendance later met me at a separate event in the United States, where she was pursuing a graduate degree; she told me that she and her family had experienced a forceful interrogation and routine police harassment after attending the lecture.) I’ve had similarly worrying encounters in academic fora such as public lectures live-streamed on various social media platforms, and over unencrypted email. As a scholar visibly active in public engagement regarding the findings of my work, I have little ability to convey risks to people who voluntarily reach out to me with sensitive information on public or surveilled channels that could put them in jeopardy in their countries of origin. This issue was particularly troubling in the case of Syria, where in the early days of the uprising, those seen to reach out to American or European academics for support were singled out for brutality by the Assad regime as traitors and conspirators. Similar issues have arisen for activists who have reached out to me from Venezuela, Russia, and elsewhere.
I was unprepared for the ways in which my public interactions with various research users and audiences would become political acts—with or without my intention or forethought. I wanted to conduct research that “matters,” but I have had to jettison the illusion that research findings can ever be apolitical. The choices that we make about what questions to study, how, and with which supporters are all relevant to someone’s political agenda at some point in time.6 This reality can establish a perceived or actual conflict of interest between the source of our research funding and consumers of our research more broadly.
ON RESPONSIBLE ENGAGED SCHOLARSHIP
Needless to say, the question of how to conduct responsible engaged research has remained paramount in my mind. I am convinced that it is unrealistic and naïve to think that disengaging from diverse audiences relieves us of our ethical responsibilities. Refusing to engage with the major questions of our time, or obscuring them in paywalled academic journals, is not a responsible or viable solution, nor will it prevent people from using published work for their own purposes. We cannot control the way people use our work once it is published, whether or not we engage with various constituencies. As such, the question is not whether to engage but how to engage, with whom, and to what ends.
My response to these ethical and moral dilemmas has been to put together a personal code of conduct for my own research practices. I first did this in 2013 as I became increasingly aware of how wide-ranging the dilemmas and considerations of engaged scholarship could be. I was not trying to create a formula for perfection or absolution. Instead, my aim was to formally reflect on the values and principles that animate my work, establish a baseline of minimum standards for my own conduct, and develop a written commitment to consult widely if I were unsure of any case. I also wanted to build a path to accountability to others, which I did by obtaining feedback on the code from colleagues over the course of 2013 to 2015, and by establishing a small group of trusted advisors who were willing to talk through any questions or issues upon request. My advisory group includes people whose moral and ethical instincts I trust, who have no personal or professional reason to withhold critical feedback, and who can keep a confidence. I’ve relied on their support and my evolving ethics code to make many decisions about professional opportunities in the years since.
Core questions inform my personal code of ethics and provide a process for staying true to it day by day. I share some of these questions here in hopes that they help others prepare for the moral and ethical conundrums inevitably present in engaged and political research.
ON CORE VALUES
•  What values are most important to me?
•  What is my primary purpose for my life? In my profession?
•  What problems do I want to help to solve, and what are my guiding assumptions about how such problems originate? Have I contributed to such problems, wittingly or unwittingly?
•  What sacrifices am I willing to make—professionally, financially, and otherwise—to stay true to these principles?
•  Who are the intended audiences for my research? Why?
ON POTENTIAL INTENDED AND UNINTENDED CONSEQUENCES OF ENGAGED SCHOLARSHIP
•  How can I best communicate the limitations of the claims I can make based on empirical evidence, the relevant caveats, and the uncertainty involved in applying results to current or future contexts?
•  What are some potential unintended audiences for my research?
•  Are any of the potential audiences—intended or unintended—in conflict with one another?
•  Could my research findings be harmful to anyone? Could they motivate action that could harm others?
•  Could my research embolden people to take risks beyond those they would otherwise take?
•  Can I take steps to ensure that appeals for advice or public endorsements are treated with the utmost confidentiality and care?
ON FUNDING, RESEARCH PARTNERSHIPS, AND ASSOCIATIONS
•  What steps will I take to ensure transparency about my associations, past and present?
•  From whom am I willing to accept funding to support my research? Are any of my intended audiences in conflict with these funders?
•  Am I able to engage with any people within the U.S. government in limited ways that can be helpful? Are there people in foreign governments with whom I am willing to engage? Might my association with such people put anyone at risk, now or in the future?
•  Am I willing to conduct research if the results are proprietary?
•  If and when I choose to engage with various audiences, am I willing to refuse to cooperate in and openly challenge activity I view as immoral or at odds with my primary purpose, even if doing so comes at a professional cost?
•  Am I comfortable collaborating with scholars who do not share my own core commitments? At what point during a potential collaboration should I approach this topic with them?
•  How do I involve students and research informants in my research? Do I provide my students with as many opportunities as possible to participate in this research? Do I credit my students, collaborators, and research informants accurately for their contributions?
ON ACCOUNTABILITY TO MY ETHICS CODE
•  What personal processes do I follow for reflection, self-examination, and building moral courage? What will I do when I have doubts about whether I am following my ethics code?
•  Have I invested time in building a community of like-minded colleagues and confidantes who hold me accountable to my principles?
•  Do I have an open mind regarding potential further refinements or revisions to my core principles and a willingness to constantly improve my effectiveness in fulfilling my primary purpose?
•  How can I set an example for others regarding how to engage with various audiences on difficult or sensitive topics?
•  Am I willing to talk and write about my experiences, both positive and negative, so that they can benefit others?
•  Am I willing to constructively challenge and encourage my colleagues, professional networks, and students to think deeply about their own moral and ethical commitments—and to develop guiding principles and codes of conduct for themselves?
This is surely not an authoritative or exhaustive list of questions one ought to ask, but these questions flowed directly from experiences that I’ve found morally troubling. I am still an amateur ethicist. But I can say that following a systematic process, putting my principles on paper, and building a small group of professional and personal confidantes who are willing to hold me accountable to my ethics code have provided more opportunities for personal and intellectual growth than I could have anticipated. I am grateful to the editors of this book for providing all researchers with an opportunity to share best practices, including those regarding an issue that we rarely discuss—the ethical dilemmas of producing research that “matters” in unanticipated ways.
______
Erica Chenoweth is the Berthold Beitz Professor in Human Rights and International Affairs at Harvard Kennedy School.
PUBLICATIONS TO WHICH THIS FIELDWORK CONTRIBUTED:
•  Chenoweth, Erica, and Maria J. Stephan. Why Civil Resistance Works: The Strategic Logic of Nonviolent Conflict. New York: Columbia University Press, 2012.
•  Chenoweth, Erica. “Nonviolent and Violent Campaigns and Outcomes Data Project.” Harvard University, 2019. https://www.navcodata.org/.
•  Stephan, Maria J., and Erica Chenoweth. “Why Civil Resistance Works: The Strategic Logic of Nonviolent Conflict.” International Security 33, no. 1 (Summer 2008): 7–44.
NOTES
1. I thank Zoe Marks, Jessica Stern, and the editors for their invaluable comments on this chapter.
2. Thanks to George DeMartino for this insight from an economist’s perspective.
3. See, for instance, the Social Science Research Council’s exchange on the ethical dimensions of the Minerva Initiative, http://essays.ssrc.org/minerva/.
4. After the Arab Spring, I did receive various federal grants to develop different data on mobilization—a development that, in hindsight, likely reinforced these issues.
5. The fact that transparency can often lead to unintended risks for informants is an issue taken up in the Qualitative Transparency Deliberations that have taken place over the past decade. See the various working papers available at https://www.qualtd.net/. However, here the discussions relate mostly to risks to informants and less to the risk to people who take up published research findings to support their own political struggles.
6. On the issue of finding oneself in the midst of power relations that are both unexpected and difficult to fully understand, see Cathrine Brun, “ ‘I Love My Soldier’: Developing Responsible and Ethically Sound Research in a Militarized Society,” in Research Methods in Conflict Settings: A View from Below, ed. Dyan Mazurana, Karen Jacobsen, and Lacey Andrews Gale (Cambridge: Cambridge University Press, 2013), 129–48; Elisabeth Jean Wood, “Reflections on the Challenges, Dilemmas, and Rewards of Research in Conflict Zones,” in Research Methods in Conflict Settings: A View from Below, ed. Dyan Mazurana, Karen Jacobsen, and Lacey Andrews Gale (Cambridge: Cambridge University Press, 2013), 295–308.