The work in this volume provides profound evidence of the value of randomized experimentation in political science research.
Laboratory and field experiments can open up new fields for exploration, shed light on old debates, and answer questions previously
believed to be intractable. Although acknowledgment of the value of experimentation in political science is becoming more
commonplace, significant criticisms remain. The oft-repeated shortcomings of experimental research tend to center on the practical
and ethical limitations of randomized interventions. I begin this chapter by detailing some of these criticisms and then explore
one means of extending the value of randomized interventions beyond their original intent to ameliorate some of these same
perceived limitations.
One of the most prominent critiques of this genre is that randomized experiments tend to be overly narrow in scope, in terms of both time frame and subject matter, as well as high in cost. Although short-term experiments may incur costs similar to those of observational research, they often focus on a single variation, or just a few variations, in an independent variable, seemingly limiting their applicability to the breadth of topics that a survey could cover. The high cost of long-term data collection and the necessity of maintaining contact with the subjects involved impede the likelihood of gathering information on long-term outcomes. There
are also few incentives to conduct interventions in which the impacts may only be determined years down the road. Such studies
are not amenable to dissertation research unless graduate students extend their tours of duty even longer, nor do they suit
junior faculty trying to build their publication records. The isolation of long-term effects necessitates long-term planning,
maintenance, and funding, all of which tend to be in short supply for many researchers.
Ethically dubious, moreover, is the notion that some interventions could have the long-term consequence of influencing social outcomes. Artificially enhancing randomly selected candidates’ campaign coffers to test the effects of money on electoral success could sway electoral outcomes. Randomly assigning differential levels of lobbying on a particular bill could affect its drafting and passage. Testing the effectiveness of governmental structures
through random assignment of different types of constitutions to newly formed states could result in internal strife and international
instability.
The list of interventions that, although useful for research and pedagogical purposes, would be simply impractical or unethical
is seemingly endless, whereas the universe of interventions that are both feasible and useful appears somewhat limited in
scope. Does this mean that experiments will only be useful in answering questions where it is practical and ethical to manipulate
variables of interest? The answer is no for myriad reasons discussed in this volume and elsewhere. Here I focus on just one:
we can expand the utility of initial interventions beyond their original intent through examination of the long-term, and
sometimes unforeseen, consequences of randomized interventions. Using downstream analysis, political scientists can leverage
the power of one randomized intervention to examine a host of causal relationships that they might otherwise have never been
able to study through means other than observational analysis. Although political scientists may not randomly assign some
politicians to receive more money than others, some other intervention, natural or intended, may produce such an outcome.
Researchers can exploit this variation, achieved through random or near-random assignment, to examine the effects of the resulting discrepancy in campaign finances on other outcomes such as electoral success.
In the next section, I further define and detail the key assumptions associated with downstream analysis of randomized experiments.
I then highlight research that uses downstream analysis and outline some potential research areas that may benefit from this
outgrowth of randomized interventions. I conclude with a discussion of the methodological, practical, and ethical challenges
posed by downstream analysis and offer some suggestions for overcoming these difficulties.
“Downstream experimentation” is a term originally coined by Green and Gerber (2002). The concept of using existing randomized and natural experiments to examine second-order effects of interventions was slow
to build due, in large part, to the relative dearth of suitable experiments in political science. Now that experiments are
becoming more widespread and prominent in political science literature, scholars are beginning to cull the growing number
of interventions to test theories seemingly untestable through traditional randomization. In this section, I touch on just
a few such examples and offer avenues for further exploration using similar first-order experiments. This discussion is meant
to encourage those interested to seek out these and other works to explore the possibilities of downstream analysis.
As I discussed previously, an interesting stockpile of experiments well suited to downstream analysis consists of interventions designed to test public policy innovations, particularly programs in education. While a key variable of interest to many is the influence
of education on a wide array of political and social outcomes, one's level of educational attainment is itself a product of
numerous factors, potentially impeding our ability to isolate schooling's causal effect through standard observational analysis
(Rosenzweig and Wolpin 2000). At the same time, it is nearly impossible and potentially unethical to randomize the educational attainment of individuals
or groups to gauge its effect.
Sondheimer and Green (2010) examine two randomized educational interventions, the High/Scope Perry Preschool project, which examined the value of preschool in the 1960s, and the Student-Teacher Achievement Ratio program, which tested the value of small classes in Tennessee in the 1980s; in both, the treatment groups witnessed an increase in years of schooling in comparison to control groups. They use these differential levels of schooling produced by the randomized
interventions to isolate the effects of educational attainment on likelihood of voting, confirming the strong effect often
produced in conventional observational analysis. In addition to examinations of voter turnout, downstream analysis of educational
interventions can isolate the effects of years of schooling on a range of outcomes, including
views on government, party affiliation, civic engagement, and social
networking.
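The logic underlying such downstream estimates is that of instrumental variables: the randomized intervention serves as an instrument for the endogenous variable, here years of schooling (see Angrist, Imbens, and Rubin 1996; Sovey and Green 2011). A minimal sketch with simulated data can illustrate the point; the variable names and effect sizes below are invented for illustration and are not drawn from the studies above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Simulated data: u is an unobserved confounder (e.g., family background)
# that drives both schooling and turnout; z is the randomized intervention.
u = rng.normal(size=n)                       # unobserved confounder
z = rng.integers(0, 2, size=n)               # random assignment (instrument)
schooling = 10 + 2.0 * z + 1.5 * u + rng.normal(size=n)
turnout = 0.5 * schooling + 2.0 * u + rng.normal(size=n)  # true effect: 0.5

# Naive regression slope of the outcome on schooling is biased upward,
# because the confounder raises both schooling and turnout.
ols = np.cov(schooling, turnout)[0, 1] / np.var(schooling)

# Wald (instrumental-variables) estimator: the intervention's effect on
# the outcome divided by its effect on schooling. Only variation induced
# by random assignment enters this ratio.
wald = ((turnout[z == 1].mean() - turnout[z == 0].mean())
        / (schooling[z == 1].mean() - schooling[z == 0].mean()))

print(f"true effect: 0.50  naive OLS: {ols:.2f}  Wald/IV: {wald:.2f}")
```

Because the confounder drives both schooling and turnout, the naive slope overstates the true effect, while the Wald ratio, which exploits only the randomized variation, recovers it.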
As discussed throughout this volume (see, in particular, Michelson and Nickerson's chapter in this volume), experimentation
is proliferating in the field of
voter
mobilization. Scores of researchers conduct randomized trials to estimate the effectiveness of different techniques aimed
at getting people to the polls. In doing so, these studies create the opportunity for subsequent research on the second-order
effects of an individual casting a ballot when he or she would not have done so absent some form of intervention.
Gerber et al. (2003) use an experiment testing the effects of face-to-face canvassing and direct mail on turnout in a local election in 1998 to examine whether voting in one election increases the likelihood of voting in another election. Previous observational research on the persistence of voting over time is unable to distinguish the unobserved causes of an individual voting in the first place from the potential for habit formation. Gerber et al. find that the exogenous shock to voting produced within the
treatment group by the initial mobilization intervention in 1998 endured somewhat in the 1999 election, indicating a persistence
pattern independent of other unobserved causes of voting. Further extension of mobilization experiments could test the second-order
effects of casting a ballot on attitudes (e.g., internal and external efficacy), political knowledge, and the likelihood of
spillover into other forms of participation.
Laboratory settings and survey manipulations offer fruitful ground for downstream analysis of randomized experiments. Holbrook's chapter
in this volume discusses experiments, predominantly performed in laboratories or lablike settings or in survey research, that
seek to measure either attitude formation or change as the dependent variable. As she notes, understanding the processes of
attitude formation and change is central to research in political science because such attitudes inform democratic decision
making at all levels of government and politics. Experiments seeking to understand the causes of attitude formation and change
can use downstream analysis to examine the second-order effects of these exogenously induced variations on subsequent beliefs,
opinions, and behaviors. For example,
Peffley and Hurwitz (2007) use a survey experiment to test the effect of different types of argument framing on support for capital punishment. Individual
variation in support for capital punishment brought about by this random assignment could be used to test how views on specific
issues influence attitudes on other political and social issues, the purpose and role of government generally, and evaluations
of electoral candidates.
Natural experiments provide further opportunity for downstream examination of seemingly intractable questions. In this vein, scholars examine the second- and third-order effects caused by naturally occurring random or near-random assignment into treatment and control groups. Looking to legislative research, Kellermann and Shepsle (2009) use the lottery assignment of seniority to multiple new members of congressional committees to explore the effects of future seniority on career outcomes such as the passage of sponsored bills, both within and outside the jurisdiction of the initially assigned committee, and reelection.
Bhavnani (2009) uses the random assignment of seats reserved for women in local legislative bodies in India to examine whether the existence
of a reserved seat, once removed, increases the likelihood of women being elected to this same seat in the future. The original
intent of the reservation system is to increase the proportion of women elected to local office. Bhavnani exploits the random
rotation of these reserved seats to examine the “next election” effects of this program once the reserved status of a given
seat is removed and the local election is again open to male candidates. He focuses his analysis on the subsequent elections
in these treatment and control wards, but one could imagine using this type of natural randomization process to examine a
host of second-order effects of the forced election of female candidates ranging from changes in attitudes toward women to
shifts in the distribution of public goods in these wards.
As the education experiments indicate, randomized policy interventions used to test new and innovative ideas provide fascinating opportunities to test the long-term ramifications of changes in individual- and community-level factors on a variety of outcomes. Quasi-experiments and natural experiments that create as-if random placement into treatment and control groups provide additional prospects. Many programs use lotteries to determine which individuals or groups will receive certain new benefits or opportunities. Comparing these recipients to nonrecipients, or to those placed on a waiting list, invokes assumptions and opportunities similar to those of randomized experiments
(Green and Gerber 2002). Numerous scholars in a range of disciplines (e.g., Katz, Kling, and Liebman 2001; Ludwig, Duncan, and Hirschfield 2001; Leventhal and Brooks-Gunn 2003; Sanbonmatsu et al. 2006) have examined the Moving to Opportunity (MTO) program, which relocates participants from high-poverty public housing to private housing in either near-poor or nonpoor neighborhoods.
Political scientists can and have benefited from this as-if random experiment as well. Observational research on the effects
of neighborhoods on political and social outcomes suffers from self-selection of subjects into neighborhoods. The factors
that determine where one lives are also likely to influence one's proclivities toward politics, social networking tendencies,
and other facets of political and social behavior. Downstream research into a randomly assigned residential voucher program
allows political scientists the opportunity to parse out the effects of neighborhood context from individual-level factors
that help determine one's choice of residential locale. Political scientists are just beginning to leverage this large social
experiment to address such questions.
Gay's (2010) work on the MTO allows her to examine how an exogenous shock to one's residential environment affects political engagement
in the form of voter registration and turnout. She finds that subjects who received vouchers to move to new neighborhoods
voted at lower rates than those who did not receive vouchers, possibly due to the disruption of social networks that may result
from relocation. Future research in this vein could leverage this and similar interventions to examine how exogenously induced
variations in the residency patterns of individuals and families affect social networks, communality, civic engagement, and
other variables of interest to political scientists.
Other opportunities for downstream analysis of interventions exist well beyond those discussed here. As this short review
shows, however, finding these downstream possibilities often entails looking beyond literature in political science to other
fields of study.
American Psychological Association. 2003. Ethical Principles of Psychologists and Code of Conduct. Washington, DC: American Psychological Association.
Angrist, Joshua D., Guido W. Imbens, and Donald B. Rubin. 1996. “Identification of Causal Effects Using Instrumental Variables.” Journal of the American Statistical Association 91: 444–55.
Angrist, Joshua D., and Alan B. Krueger. 2001. “Instrumental Variables and the Search for Identification: From Supply and Demand to Natural Experiments.” Journal of Economic Perspectives 15: 69–85.
Berelson, Bernard, Paul F. Lazarsfeld, and William N. McPhee. 1954. Voting: A Study of Opinion Formation in a Presidential Campaign. Chicago: The University of Chicago Press.
Bhavnani, Rikhil R. 2009. “Do Electoral Quotas Work after They Are Withdrawn? Evidence from a Natural Experiment in India.” American Political Science Review 103: 23–35.
Campbell, Angus, Philip E. Converse, Warren E. Miller, and Donald E. Stokes. 1960. The American Voter. New York: Wiley.
Converse, Philip E. 1972. “Change in the American Electorate.” In The Human Meaning of Social Change, eds. Angus Campbell and Philip E. Converse. New York: Russell Sage Foundation, 263–337.
Davenport, Tiffany C., Alan S. Gerber, Donald P. Green, Christopher W. Larimer, Christopher B. Mann, and Costas Panagopoulos. 2010. “The Enduring Effects of Social Pressure: Tracking Campaign Experiments over a Series of Elections.” Political Behavior 32: 423–30.
Delli Carpini, Michael X., and Scott Keeter. 1996. What Americans Know about Politics and Why It Matters. New Haven, CT: Yale University Press.
Gay, Claudine. 2010. “Moving to Opportunity: The Political Effects of a Housing Mobility Experiment.” Working paper, Harvard University.
Gerber, Alan S., Donald P. Green, and Ron Shachar. 2003. “Voting May Be Habit-Forming: Evidence from a Randomized Field Experiment.” American Journal of Political Science 47: 540–50.
Green, Donald P., and Alan S. Gerber. 2002. “The Downstream Benefits of Experimentation.” Political Analysis 10: 394–402.
Imbens, Guido W., and Joshua D. Angrist. 1994. “Identification and Estimation of Local Average Treatment Effects.” Econometrica 62: 467–75.
Imbens, Guido W., and Donald B. Rubin. 1997. “Bayesian Inference for Causal Effects in Randomized Experiments with Noncompliance.” Annals of Statistics 25: 305–27.
Kam, Cindy, and Carl L. Palmer. 2008. “Reconsidering the Effects of Education on Civic Participation.” Journal of Politics 70: 612–31.
Katz, Lawrence F., Jeffery R. Kling, and Jeffery B. Liebman. 2001. “Moving to Opportunity in Boston: Early Results of a Randomized Mobility Experiment.” Quarterly Journal of Economics 116: 607–54.
Kellermann, Michael, and Kenneth A. Shepsle. 2009. “Congressional Careers, Committee Assignments, and Seniority Randomization in the US House of Representatives.” Quarterly Journal of Political Science 4: 87–101.
Leventhal, Tama, and Jeanne Brooks-Gunn. 2003. “Moving to Opportunity: An Experimental Study of Neighborhood Effects on Mental Health.” American Journal of Public Health 93: 1576–82.
Ludwig, Jens, Greg J. Duncan, and Paul Hirschfield. 2001. “Urban Poverty and Juvenile Crime: Evidence from a Randomized Housing-Mobility Experiment.” Quarterly Journal of Economics 116: 655–79.
Milgram, Stanley. 1974. Obedience to Authority: An Experimental View. New York: Harper & Row.
Miller, Warren E., and J. Merrill Shanks. 1996. The New American Voter. Cambridge, MA: Harvard University Press.
Nie, Norman H., Jane Junn, and Kenneth Stehlik-Barry. 1996. Education and Democratic Citizenship in America. Chicago: The University of Chicago Press.
Peffley, Mark, and Jon Hurwitz. 2007. “Persuasion and Resistance: Race and the Death Penalty in America.” American Journal of Political Science 51: 996–1012.
Rosenstone, Steven J., and John Mark Hansen. 1993. Mobilization, Participation, and Democracy in America. New York: Macmillan.
Rosenzweig, Mark R., and Kenneth I. Wolpin. 2000. “Natural ‘Natural Experiments’ in Economics.” Journal of Economic Literature 38: 827–74.
Sanbonmatsu, Lisa, Jeffery R. Kling, Greg J. Duncan, and Jeanne Brooks-Gunn. 2006. “Neighborhoods and Academic Achievement: Results from the Moving to Opportunity Experiment.” Journal of Human Resources 41: 649–91.
Schweinhart, L. J., Helen V. Barnes, and David P. Weikart. 1993. Significant Benefits: The High/Scope Perry Preschool Study through Age 27. Monographs of the High/Scope Educational Research Foundation, vol. 10. Ypsilanti, MI: High/Scope Press.
Sondheimer, Rachel Milstein, and Donald P. Green. 2010. “Using Experiments to Estimate the Effects of Education on Voter Turnout.” American Journal of Political Science 54: 174–89.
Sovey, Allison J., and Donald P. Green. 2011. “Instrumental Variables Estimation in Political Science: A Reader's Guide.” American Journal of Political Science 55: 188–200.
Stock, James H., and Mark W. Watson. 2007. Introduction to Econometrics. 2nd ed. Boston: Pearson Addison Wesley.
Tenn, Steven. 2007. “The Effect of Education on Voter Turnout.” Political Analysis 15: 446–64.
Verba, Sidney, and Norman H. Nie. 1972. Participation in America: Political Democracy and Social Equality. New York: Harper & Row.
Wolfinger, Raymond E., and Steven J. Rosenstone. 1980. Who Votes? New Haven, CT: Yale University Press.
Wooldridge, Jeffrey M. 2009. Introductory Econometrics: A Modern Approach. 4th ed. Mason, OH: South-Western Cengage Learning.