NOTES
1. INTEGRATING THE POLITICAL AND THE PSYCHOLOGICAL
1.  Flaming is an uninhibited (and frequently aggressive) reaction to a real or perceived aggressive comment; trolling, another frequently cited uncivil online behavior, is defined as the use of deception, baiting, or sometimes aggressive behavior to provoke others (Hmielowski, Hutchens, and Cicchirillo 2014).
2.  This is not to say that media-effects and political-communication research ignores interactive effects; many scholars have investigated the complicated relationship between humans and their media environment. See, for example, Zhu and Boroson (1997); Cacciatore, Scheufele, and Iyengar (2016).
3.  This does not have to be the case, however. The Diane Rehm Show on NPR, for example, frequently offered many different viewpoints but did so in a civil manner. This example would fall closer to “high school debate” in figure 1.1 than to the top right quadrant.
4.  A caveat is necessary here. In their articulation of politeness, Brown and Levinson note that “certain kinds of acts intrinsically threaten face” and are therefore impolite. These acts include the “raising of dangerously emotional or divisive topics, e.g. politics, race, religion, women’s liberation” (1987, 65, 67). Under this definition, all political conversation is considered impolite; if the two terms were truly equivalent, all political discussion would be uncivil and there would be no way to manipulate political incivility.
5.  A total of 61 percent of participants attributed the decline in civility to both radio and television news. Other media-related causes included blogs (42 percent), Glenn Beck (40 percent), late-night talk shows such as those hosted by Stewart and Leno (38 percent), and Rachel Maddow (25 percent). It is worth noting that six of the twelve options were media-related, even though they were selected by the researchers rather than provided by the respondents.
2. THE POLITICAL PSYCHOLOGY OF CONFLICT COMMUNICATION
1.  Descriptions of each study are available in appendix A.
2.  The full scale shows high reliability, discriminant and convergent validity, and minimal influence of social desirability (Goldstein 1999). To assess each of these scale characteristics, Goldstein asked 350 student participants to complete a 150-item version of the CCS, from which the 75-item scale was ultimately developed. Participants were also randomly assigned to complete either the Marlowe-Crowne Social Desirability Scale, the Conflict Resolution Inventory, the Self-Disclosure Scale, or the Personality Research Form (Crowne and Marlowe 1960; Jackson 1974; Jourard 1979; McFall and Lillesand 1971). The Conflict Resolution Inventory and the Self-Disclosure Scale serve as independent measures of a similar trait for purposes of convergent validity. The Personality Research Form, an assessment of willingness to persevere on difficult tasks, was used as a measure of discriminant validity. Both the CCS subscales and individual items correlated minimally with the Marlowe-Crowne scale, demonstrating minimal social desirability bias. The subscales of the CCS correlated with the Conflict Resolution Inventory, Self-Disclosure Scale, and Personality Research Form in the expected directions, demonstrating convergent and discriminant validity. Thirty of the students in the initial sample were also asked to take the 150-item CCS again three and a half weeks after the first administration. Looking just at the 75 items ultimately used in the scale, the five subscales all demonstrated test-retest correlations greater than 0.80, and each correlation was significant at p<0.001. Each of the 75 items also showed strong scale reliability, with item variance greater than 1.5 and Cronbach’s alphas greater than 0.80 for each subscale.
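The subscale alphas reported above follow the standard Cronbach formula: the number of items, the sum of the item variances, and the variance of the total scores. A minimal sketch in Python (the function name and toy data are illustrative, not Goldstein's):

```python
from statistics import variance  # sample variance (n - 1 denominator)

def cronbach_alpha(items):
    """Cronbach's alpha for a scale.

    `items` is a list of item-response lists, one list per item,
    aligned by participant (an illustrative data layout).
    """
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]  # each participant's scale score
    item_var_sum = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Two perfectly parallel items yield alpha = 1.0
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]]))
```

Alpha rises as items covary more strongly relative to their individual variances, which is why it is reported per subscale rather than for the full 75-item instrument.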
3.  Appendix A offers a full discussion of the scale, including distributions of conflict orientation across all seven studies.
3. TO LAUGH OR CRY? EMOTIONAL RESPONSES TO INCIVILITY
1.  I will not outline these specific theories here, but see McDermott (2004) for a good explanation of five theories of emotion as they relate specifically to decision-making and their implications for political science.
2.  In a pretest, three hundred Mechanical Turk participants were randomly assigned to watch one of six videos—a civil or uncivil clip from Morning Joe, The Dylan Ratigan Show, or Hannity. They were then asked, “To what extent was the clip you just watched uncivil?” They could respond on a scale from 1 to 5, with 1 indicating “not at all uncivil” and 5 representing “extremely uncivil.” Morning Joe and The Dylan Ratigan Show were found to be statistically indistinguishable in both the civil and uncivil conditions. The uncivil clips used to build the treatments were evaluated as follows: M = 2.89 for Morning Joe and M = 2.98 for The Dylan Ratigan Show, p = 0.69. Both the civil and uncivil clips from Hannity were seen as more uncivil than their MSNBC counterparts and were therefore excluded from the treatment set.
3.  Morning Joe has been on MSNBC since 2007. It currently airs from 6 to 9 a.m. EST. The Dylan Ratigan Show aired weekdays on MSNBC from 4 to 5 p.m. EST from January 2010 to June 2012. The show focused on debate and discussion related to politics, the economy, and business. I selected Dylan Ratigan over better-known MSNBC shows because of his focus on the economy and out of a desire to minimize partisan bias in responses to the news clip.
4.  Statistical significance calculated from a two-sample, two-tailed t-test.
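As a minimal sketch of the two-sample, two-tailed test invoked here (shown in its Welch, unequal-variance form with toy data; in practice the two-tailed p-value is then read from a t distribution, e.g. via scipy.stats.ttest_ind):

```python
from math import sqrt
from statistics import mean, variance  # sample variance (n - 1 denominator)

def welch_t(a, b):
    """t statistic and Welch-Satterthwaite df for two independent samples."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)
    se2 = va / na + vb / nb                     # squared standard error of the difference
    t = (mean(a) - mean(b)) / sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

t, df = welch_t([1, 2, 3, 4, 5], [2, 3, 4, 5, 6])
print(t, df)  # t = -1.0, df = 8.0
```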
5.  The scale contained four of the five statements used in the first experiment. The fifth, “Arguments don’t bother me,” was dropped to keep the TESS-funded survey on the shorter side, thereby increasing the number of participants who could be recruited. This statement was selected for removal because it showed the weakest correlations with the other items in a series of pairwise correlation evaluations.
6.  Full regression results are available in appendix B, table B.2.
4. CHOOSING OUTRAGE: SELECTIVE EXPOSURE AND INFORMATION SEARCH
1.  The transcripts used in this content analysis were pulled from a LexisNexis search of coverage of the Arizona immigration law (SB 1070) and the congressional debate over health-care reform (specifically, passage of the Affordable Care Act) from March 1 to April 30, 2010. The initial search of television coverage of these two issues resulted in more than two thousand articles, and we randomly sampled from this population to produce a set of 666 program transcripts, 267 on health care and 399 on immigration. Within each of these transcripts, we coded any segment—a section of the program typically beginning with a return from commercial break and ending with the host shifting to a new topic or cutting to commercial again—that dealt directly with immigration or health care. The full coding scheme is available in appendix A. Many thanks to Edward Smith for his assistance with the coding and data collection for this project.
2.  image, image, image. In a one-way ANOVA, there was no statistically significant difference between the number of incidents found on MSNBC and Fox or the number of incidents found on NBC and CNN, but there was a difference between MSNBC, Fox, and the other three outlets.
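The one-way ANOVA described here partitions the variation in incident counts into between-outlet and within-outlet components. A minimal sketch of the F statistic, with illustrative data standing in for the outlet counts:

```python
from statistics import mean

def one_way_f(groups):
    """F statistic and degrees of freedom for a one-way ANOVA."""
    n = sum(len(g) for g in groups)
    k = len(groups)
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    f = (ss_between / (k - 1)) / (ss_within / (n - k))
    return f, (k - 1, n - k)

f, (df1, df2) = one_way_f([[1, 2, 3], [2, 3, 4], [3, 4, 5]])
print(f, df1, df2)  # F = 3.0 with (2, 6) degrees of freedom
```

The pairwise contrasts reported in the note (MSNBC vs. Fox; NBC vs. CNN) would follow the omnibus F with post hoc comparisons.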
3.  Participants were then asked a follow-up question asking them to list the top three specific programs they used to gather political information. Unfortunately, the range of programs offered makes data analysis difficult; only a dozen programs were reported by enough participants to draw reliable statistical conclusions.
4.  This is unsurprising, given that this is a nonprobability sample; these participants have also self-selected into online surveys.
5.  Full tables of regression results for both the bivariate and multivariate models are presented in appendix B, tables B.3–B.6. I do not report the results of the multivariate models here beyond the interaction between media consumption and political interest, but there are few statistically significant results to be discussed from those models.
6.  Returning to the set of concerns about media-exposure measures, some scholars argue that these frequency measures of media are really measuring political interest instead of capturing any effects the media might have on an individual’s political ideas.
7.  For simplicity, this regression does not contain controls for the other demographic characteristics. I did not conduct the same analysis of media preferences in the Mechanical Turk 1 study because sample-size limitations had already rendered many of the relationships statistically nonsignificant.
8.  There are also some counterintuitive findings concerning political interest. Participants who are not at all interested in politics and are extremely conflict-approaching use newspapers much more frequently than their conflict-avoidant peers. More investigation needs to be done into why this group is likely to turn to newspapers and whether they are reading the online versions or investing in print subscriptions.
9.  Ideally, the treatments would vary in their level of civility but not in how informative they were seen to be. However, participants found the civil clip to be both more civil and more informative (image, sd = 0.98) than the uncivil clip (image, sd = 0.87, p < 0.01; image, sd = 0.97, p < 0.01). Based on these assessments, there is a possibility that participants were responding to how informative they believed the treatment to be, rather than to its level of incivility.
10.  All mediation models are available in appendix B.
5. MIMICRY AND TEMPER TANTRUMS: POLITICAL DISCUSSION AND ENGAGEMENT
1.  Data for the Project Implicit study were collected in March 2012, and people were asked if they had voted “in the last presidential election.” Mechanical Turk Study 1 combines data collected in December 2012 and June 2013; those participants were asked if they had voted “in the 2012 presidential election.”
2.  Because both sets of studies measure participation in the same way, I present only one set of results here unless there are major differences between studies in the outcomes of interest. The analyses for the Mechanical Turk Study 1 are in appendix B.
3.  In addition to this interaction between conflict orientation and political interest, I also explored the effects of conflict orientation across race and gender. The results of these interactions were almost all statistically nonsignificant and, for those that did demonstrate statistical significance, were not robust across samples. The results of these analyses are available in appendix B.
4.  Upon reading the content of the comments, it becomes clear that some people typed “comments” that essentially said “no comment” or “I don’t have anything to say.” These comments are captured in this 63 percent.
5.  In July 2015, Planned Parenthood came under fire from abortion opponents after a video was posted online that depicted a doctor from Planned Parenthood having a conversation with two off-camera individuals who were posing as buyers of tissue from aborted fetuses. Activists claimed the video was evidence that the organization was selling fetal tissue, while Planned Parenthood maintained that fetal remains were only donated to scientific research, and only after the consent of the patients (Calmes 2015).
6. A MORE DISRESPECTFUL DEMOCRACY?
1.  I ran interactions of conflict orientation and race (described here) as well as gender. None of the interactions of conflict orientation and gender is statistically significant, but the graphical results are displayed in appendix B.
2.  This finding does not replicate in the MTurk Study 1 sample. The samples most likely do not contain enough African American participants to test the interactive relationship with adequate statistical power.
APPENDIX A. ADDITIONAL STUDY INFORMATION
1.  TIPI scale scoring (“R” denotes reverse-scored items): Extraversion—1, 6R; Agreeableness—2R, 7; Conscientiousness—3, 8R; Emotional Stability—4R, 9; Openness to Experience—5, 10R.