Morton Deutsch
Jennifer Goldman-Wetzler
Christine T. Chung
In this chapter, we propose a framework for conceptualizing research on conflict resolution initiatives (CRIs). We first describe different types of research and the kinds of issues for which each is best suited. Second, we briefly discuss types of audiences or users of research and what they want. Third, we explore some substantive issues or questions for research that practitioners consider important. Next, we consider some of the difficulties in doing research in this area, as well as the kinds of research strategies that may help overcome these difficulties. Finally, we offer a brief overview of the research in this area.
There are many kinds of research, all with merit. They have differing purposes and often require varying types of skill. There is a tendency among both researchers and practitioners to derogate research that does not satisfy their specific needs or does not require their particular kind of expertise. Thus, “action research” is frequently considered second-class research by basic researchers, and “basic research” is often thought of as impractical and wasteful by practitioners. Such conflict, however, is based on misunderstanding rather than on a valid conflict of value, fact, or interest; it is what Deutsch (1973) has termed a “false conflict.”
We turn to a discussion of several types of research that are relevant to conflict resolution: basic research, developmental research, field research, consumer research, and action research. Some researchers work primarily in one type; others move back and forth among them. We start our list with a discussion of basic research, but we do not assume the natural flow is unidirectional from basic to developmental research, and so forth. The flow is (and should usually be) bidirectional: basic does not mean initial.
There are many unanswered questions basic to knowledge and practice in the field of conflict resolution. To illustrate just a few:
These are only a few of the important questions that must be addressed if we are to have the kind of knowledge that is useful for those interested in making conflict constructive—whether in families, schools, industry, community, or across ethnic and international lines. Many other questions are implicit in the chapters of this book.
Much developmental research is concerned with helping to shape effective educational and training programs in this area. Such research is concerned with identifying the best ways of aiding people to acquire the knowledge, attitudes, and skills necessary for constructive conflict resolution by answering such questions as these: How should something be taught (e.g., using what type of teaching methods or pedagogy)? What should be taught (using what curriculum)? For how long? Who should do the teaching? In what circumstances? With what teaching aids? These best ways are likely to vary as a function of the age, educational level, cultural group, and personality of the children and adults involved.
There is a bidirectional link between developmental and basic research. To assess and compare the changes resulting from various educational and training programs, it is necessary to know what changes these programs were seeking to induce and also to develop valid and reliable instruments and procedures for measuring these changes. We are now creating and testing such instruments and procedures. One example is the conceptual framework for comparative case analysis of interactive conflict resolution by d’Estree, Fast, Weiss, and Jakobsen (2001). This framework was devised as a tool that can be used to evaluate and compare the results of a diverse set of conflict resolution initiatives. The framework is described in the final section of this chapter. Another example is the action evaluation research initiative (Rothman, 1997, 2005; Rothman and Friedman, 2005; Rothman and Land, 2004; Rothman and Dosik, 2011; Ross, 2001), a process that has been developed (though it is still being tested and refined) to help CRIs identify the changes they seek to create and evaluate whether and how those changes have occurred. This project is also described at the end of this chapter. While these examples demonstrate the work currently being done to increase our ability to evaluate CRIs, there is still much work to do before we understand empirically which types of initiatives are most effective and most efficient.
Much developmental research can be done in experimental classrooms or workshops. However, field research is needed to identify the features of political systems, cultures, and organizations that facilitate or hinder effective CRIs. What types of effects do CRIs have with populations living under conditions of intractable ethnic conflict? What kinds of cultures are most favorable to such initiatives, and what kinds make them infeasible or ineffective? Which levels in an organizational hierarchy must be knowledgeable and supportive of a CRI for it to be effective? In schools, what types of CRI models should be employed: extracurricular activities, specific courses in CR, an infusion model in all school courses, use of constructive controversy, or all of these? Is cooperative learning a necessary precondition or a complement to a CRI? What criteria should be employed in selecting CR practitioners? And so forth.
Most of these questions have to be asked and answered in terms of the specific characteristics of an individual setting, taking into account the resources, organization, personnel, population, and social environment. While this type of research can be difficult and costly (in both time and money) to conduct, examples can be found in the literature. Such research has been conducted on CRIs that have taken place over the past few decades between parties in ethnic conflict, including those between Israelis and Palestinians (see Abu-Nimer, 1999, 2004, 2012; Kelman, 1995, 1998, 2011; Maoz, 2004, 2005, 2011) and Greek and Turkish Cypriots (Angelica, 2005; Rothman, 1999), among others, both internationally and domestically.
It would be valuable to have periodic surveys of where CRIs are taking place, who is participating, what kinds of qualifications the practitioners have, and so on. Also, it would be good to know how the CRIs are evaluated by recipients both immediately after the initiatives and one year later. In addition to studying those who have participated in CRIs, it would be useful to assess what the market is for CRIs among those who have not yet engaged in these programs.
Most of the research on CRIs in organizations has essentially consisted of studies of “consumer satisfaction.” The research usually involves studying the effects of CRIs in a particular classroom, workshop, or institution. Results are quite consistent in indicating a considerable degree of approval among those exposed to CRIs, whether in the role of practitioner or participant. This is indeed encouraging, but awareness of the Hawthorne effect suggests both caution in our conclusions and the need to go considerably beyond consumer satisfaction research. (The Hawthorne effect refers to the phenomenon of people changing their behavior, often for the better, when participating in a program simply because of the increased attention they receive in the context of the program, not because of any benefits of the program itself.)
Action research is a term originally employed by Kurt Lewin (1946) to refer to research linked to social action. To be successful, it requires active collaboration between the action personnel (the practitioners and participants) and the research personnel. What the action personnel do can be guided by feedback from the research concerning the effectiveness of their actions. To study the processes involved in successfully producing a change (or failing to do so) in a well-controlled and systematic manner, researchers depend on the cooperation of action personnel. Most studies on CRIs conducted in the field are a form of action research.
There are two main ways in which successful collaboration with practitioners increases the likelihood that research findings are used. First, participation usually raises the practitioners’ interest in the research and its possible usefulness. Second, collaboration with practitioners helps to ensure that the research is relevant to problems as they appear in the actual work of the practitioners and the functioning of the organization in which their practice is embedded.
However, there are many potential sources of difficulty in this collaboration. It is time-consuming and hence often burdensome and expensive to both the practitioners and researchers. Also, friction may occur because of the disparate goals and standards of the two partners: one is concerned with improving existing services, the other with advancing knowledge of a given phenomenon. The practitioner may well become impatient with the researcher’s attempt to have well-controlled independent variables and the intrusiveness involved in extensive measuring of dependent variables. The researcher may become exasperated with the practitioner’s improvisation and reluctance to sacrifice time from other activities to achieve the research objectives. In addition, there is often much evaluation apprehension on both sides: the practitioners are concerned that, wittingly or unwittingly, they will be evaluated by the research findings; the researchers fear that their peers will view their research as not being sufficiently well controlled to have any merit.
There are several audiences for research: foundations and government agencies, executives and administrators who decide whether a CRI will take place in their organization, CR practitioners, and researchers who do one or more of the types of research described above. The audiences rarely have identical interests.
Our sense is that most private foundations are less interested in supporting research than they are in supporting pilot programs, particularly if such programs focus on preventing violence. Their interest in research is mainly oriented to evaluation and answering the question: Does it work? Many government agencies have interests that are similar to those of private foundations. However, some domestic agencies, such as the National Science Foundation and the National Institute of Mental Health, are willing to support basic and developmental research if the research is clearly relevant to their mission.
Internationally, as humanitarian organizations integrate CRIs into their work, the need to evaluate CRIs for the purposes of reporting the results to funders of humanitarian organizations has become a significant and challenging aspect of CR work (Church and Shouldice, 2002; Culbertson, 2010; Hunt and Hughes, 2010). For example, while funders may be accustomed to evaluations of humanitarian programs that use immediate, concrete measures such as the number of people who participated in an initiative, a more accurate indicator of success for CRIs may be the long-term impact on the larger community. Working with funding agencies to reconcile the methods used to evaluate the short-term outcomes and long-term impacts of humanitarian-related CRIs can prove a challenging but worthy task.
With respect to the type of evaluation research needed, we suggest that there is enough credible evidence to indicate that CRIs can have positive effects. The appropriate question now is under what conditions such effects are most likely to occur—for example, who benefits, how, as a result of participating in what type of initiative, with what type of practitioner, under what kind of circumstance? That is, the field of conflict resolution has advanced beyond the need to answer the oversimplified question, “Does it work?” It must address the more complicated questions discussed in the section on types of research—particularly the questions related to developmental research.
The executive and administrative audience is also concerned with the question, “Does it work?” Depending on their organizational setting, they may have different criteria in mind in assessing the value of CRIs. A school administrator may be interested in such criteria as the incidence of violence, disciplinary problems, academic achievement, social and psychological functioning of students, teacher burnout, and cooperative relations between teachers and administrators. A corporate executive may be concerned with manager effectiveness, ease and effectiveness of introducing technological change, employee turnover and absenteeism, organizational climate, productivity, and the like.
It is fair to say that with rare exceptions, CRI researchers and practitioners have not developed the detailed causal models that would enable them to specify and measure the mediating organizational and psychological processes linking CRIs to specific organizational or individual changes. Most executives and administrators are not much interested in causal models. However, it is important for practitioners and researchers to be aware that the criteria of CRI effectiveness often used by administrators—incidence of violence, academic achievement, employee productivity—are affected by many factors other than CRIs. CRIs may, for example, be successful in increasing the social skills of students, but a sharp increase in unemployment, a significant decrease in the standard of living, or greater use of drugs in the students’ neighborhood may lead to deterioration of the students’ social environment rather than the improvement one would expect from increased social skills. The negative impact of such deterioration may counteract the positive impact of CRIs.
One would expect executives and administrators to be interested in knowing not only whether CRIs produce the outcomes they seek but also whether they are more cost-effective in doing so than alternative interventions. Some research has evaluated the effectiveness of alternative dispute resolution procedures, such as mediation (see chapter 34) compared to adjudication, but otherwise little research has examined the cost-effectiveness of CRIs.
Conflict resolution practitioners often have questions about the degree to which their work successfully effects both individual and institutional change. With regard to each focus, practitioners have articulated a need for measuring instruments that they can use to assess the effectiveness of their work. Such instruments could be of particular value to them in relation to funding agencies and policymakers. Practitioners often feel that the methods they use during their training and consulting, to check on the effects of their work, are more detailed and sensitive than the typical questionnaires used in evaluations. Their own methods may be more useful to them, even if these are less persuasive to funding agencies. Much general value could be gained from a study of the implicit theoretical models underlying the work of practitioners, as well as a study of how practitioners go about assessing the impact of what they are doing.
Practitioners’ focus on individual change tends to be concerned with such issues as these:
The focus on institutional change is concerned with other questions:
It is evident that the issues raised by the practitioners are important but complex and not readily answerable by a traditional research approach. In addition, the complexity suggests that each question contains a nest of others that have to be specified in greater detail before they are accessible to research.
Psychologically, other researchers are usually the most important audience for one’s research. If your research does not meet the standards established for your field of research, you can expect it to be rejected as unfit for publication in a respected research journal. This may harm your reputation as a researcher—and may make tenure less likely if you are a young professor seeking it. This may be true even if funding agencies, administrators, and practitioners find the research to be very useful to them.
The research standard for psychology and many other social sciences is derived from the model of the experiment. If one designs and conducts an experiment ideally, one has created the best conditions for obtaining valid and reliable results. In research, as in life, the ideal is rarely attainable. Researchers have developed various procedures to compensate for deviation from the ideal in their attempt to approximate it. However, there is a bias in the field toward assuming that research that looks like an experiment (e.g., it has control groups and before- and after-intervention measurements), but is not one because it lacks randomization and has too few cases (more on this later), is inherently superior to other modes of approximation. We disagree. In our view, each mode has its merits and limitations and may be useful in investigating one type of research question but less so in another.
We suggest three key standards for research: (1) the mode of research should be appropriate to the problem being investigated, (2) it should be conducted as well as humanly possible given the available resources and circumstances, and (3) it should be knowledgeable and explicit about its limitations.
Many factors make it very difficult to do research on the questions outlined in the previous sections, particularly the kind of idealized research that most researchers prefer to do (see chapter 42). For example, it is rarely possible to randomly assign students (or teachers, or administrators) to be trained (or not trained) by randomly assigned expert trainers employing randomly assigned training procedures. Even if this were possible in a particular school district, one would face the possibility that the uniqueness of the district has a significant impact on the effectiveness of training; no single district can be considered an adequate sample of some or all other school districts. To employ an adequate sample (which is necessary for appropriate statistical analysis) is very costly and probably neither financially nor administratively feasible.
Given this reality, what kind of research can be done that is worth doing? Here we outline several mutually supportive research strategies of potential value.
Experimental research involves small-scale studies that can be conducted in research laboratories, experimental classrooms, or experimental workshops. It is most suitable for questions related to basic or developmental research—questions that specify precisely what is to be investigated. Thus, such approaches would be appropriate if one sought to test the hypothesis that role reversal does not facilitate constructive conflict resolution when the conflict is about values (such as euthanasia) but does when it centers on interests. Similarly, it would be appropriate if one wished to examine the relative effectiveness of two different methods of training in improving such conflict resolution skills as perspective taking and reframing.
This kind of research is most productive if the hypothesis or question being investigated is well grounded in theory or in a systematic set of ideas rather than when it is ad hoc. If well grounded, such research has implications for the set of ideas within which it is grounded and thus has more general implications than testing an ad hoc hypothesis does. One must, however, be aware that in all types of hypothesis-driven research, the results from the study may not support the hypothesis—even when the hypothesis is valid—because implementation of the causal variables (such as the training methods), measurement of their effects, or the research design may be faulty. Generally it is more common to obtain nonsignificant results than to find support for a hypothesis. Thus, practitioners have good reason to be concerned about the possibility that such research may make their efforts appear insignificant even though their work is having important positive effects.
In good conscience, one other point must be made: it is very difficult and perhaps impossible to create a true or pure experiment involving human beings. The logic involved in true experiments assumes that complete randomization has occurred for all other variables except the causal variables being studied. However, human beings have life histories, personalities, values, and attitudes prior to their participation in a conflict workshop or experiment. What they bring to the experiment from their prior experience may not only influence the effectiveness of the causal variables being studied but also be reflected directly in the measurement of the effects of these variables. Thus, an authoritarian, antidemocratic, alienated member of the Aryan Nation Militia Group may not only be unresponsive to participation in a CRI but also, independent of this, score poorly on such measures of the effectiveness of the CRI as ethnocentrism, alienation, authoritarianism, and control of violence, because of his or her initial attitudes. Such people are also less likely to participate in CRIs than democratic, nonviolent, and nonalienated people. The latter are likely to be responsive to CRIs and, independent of this, to have good scores on egalitarianism, nonviolence, lack of ethnocentrism, and the like, which also reflect their initial attitudes.
With appropriate “before” measures and correlational statistics, it is possible to control for much (but far from all) of the influence of initial differences in attitudes on the “after” measures. In other words, a quasi-experiment that has some resemblance to a true experiment can be created despite the prior histories of the people who are being studied.
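The logic of such an adjustment can be sketched with simulated data (all numbers and variables here are hypothetical, not drawn from any actual study). A naive comparison of “after” scores confounds self-selection with the training effect; regressing the “after” score on participation together with the “before” measure recovers an estimate much closer to the true effect:

```python
# A minimal sketch, with simulated data, of controlling for a "before"
# measure when CRI participants and non-participants differ at the outset.
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Pre-existing attitude (e.g., authoritarianism) measured "before".
pre = rng.normal(0.0, 1.0, n)

# Selection into the CRI correlates with initial attitudes:
# less authoritarian people are more likely to participate.
participated = (rng.normal(0.0, 1.0, n) - pre > 0.0).astype(float)

# "After" score depends on the initial attitude AND a true training
# effect of +0.5 for participants, plus noise.
true_effect = 0.5
post = -0.8 * pre + true_effect * participated + rng.normal(0.0, 0.5, n)

# Naive comparison: the raw difference in means confounds
# self-selection with the effect of training.
naive = post[participated == 1].mean() - post[participated == 0].mean()

# Quasi-experimental adjustment: regress the "after" score on
# participation while holding the "before" measure constant.
X = np.column_stack([np.ones(n), participated, pre])
coef, *_ = np.linalg.lstsq(X, post, rcond=None)
adjusted = coef[1]  # estimated training effect after adjustment

print(f"naive estimate:    {naive:.2f}")
print(f"adjusted estimate: {adjusted:.2f}  (true effect: {true_effect})")
```

The adjusted estimate is not a full substitute for randomization—unmeasured initial differences remain uncontrolled—which is why the text says “much (but far from all)” of the influence can be removed.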
Correlations by themselves do not readily permit causal inference. If you find a negative correlation between amount of exposure to CRIs and authoritarianism, as we have suggested, it may be that those who are authoritarian are less likely to expose themselves to CRIs or that those who have been exposed to CRIs become less authoritarian or that the causal arrow may point in both directions. It is impossible to tell from a simple correlation. However, methods of statistical analysis developed during the past several decades (and still being refined) enable one to appraise with considerable precision how well a pattern of correlations within a set of data fits an a priori causal model. Although causal modeling and experimental research are a mutually supportive combination, causal modeling can be employed even if an approximation to an experimental design cannot be achieved. This is likely to be the case in most field studies.
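The simplest instance of appraising how well a pattern of correlations fits an a priori causal model is the path-tracing rule: under a hypothesized chain A → B → C with no direct A → C path, the correlation between A and C should approximately equal the product of the two adjacent correlations. The sketch below illustrates this check with simulated (entirely hypothetical) data; full structural equation modeling software would be used in practice, but the underlying idea is the same:

```python
# A toy sketch of checking whether observed correlations fit an
# a priori causal chain: training -> social skills -> self-esteem.
# Under full mediation, path tracing implies
#   r(training, esteem) ~= r(training, skills) * r(skills, esteem);
# a large discrepancy would count against the model.
import numpy as np

rng = np.random.default_rng(1)
n = 2000

training = rng.normal(size=n)
skills = 0.6 * training + rng.normal(size=n)   # training -> skills
esteem = 0.5 * skills + rng.normal(size=n)     # skills -> esteem
# (no direct training -> esteem path: full mediation)

r = np.corrcoef([training, skills, esteem])
r_ts, r_se, r_te = r[0, 1], r[1, 2], r[0, 2]

implied = r_ts * r_se          # correlation implied by the causal chain
discrepancy = abs(r_te - implied)
print(f"observed r(training, esteem) = {r_te:.3f}")
print(f"implied by model             = {implied:.3f}")
print(f"discrepancy                  = {discrepancy:.3f}")
```

Note that a good fit does not prove the model; it only fails to disconfirm it, since other causal orderings can imply the same correlation pattern.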
Consider, for example, a study we conducted on the effects of training in cooperative learning and conflict resolution on students in an alternative high school (Deutsch, 1993; Zhang, 1994). Prior theoretical analysis (Deutsch, 1949, 1973; Johnson and Johnson, 1989), as well as much experimental and quasi-experimental research (see Johnson and Johnson, 1989, for a comprehensive review), suggested what effects such training could have and also suggested the causal process that might lead to these effects. Limitation of resources made it impossible to do the sort of extensive study of many schools required for an experimental or quasi-experimental study or to employ the statistical analysis appropriate to an experiment. Therefore, we constructed a causal model that in essence assumed training in cooperative learning or conflict resolution would improve the social skills of a student. This in turn would produce an improved social environment for the student (as reflected in greater social support and less victimization from others), which would lead to higher self-esteem and greater sense of personal control over one’s fate. The increased sense of control would enhance academic achievement. It was also assumed that improvement in the student’s social environment and self-esteem would lead to an increased positive sense of well-being, as well as decreased anxiety and depression. The causal model indicated what we had to measure. Prudence suggested that we also measure many other things that potentially might affect the variables on which the causal model focused.
The results of the study were consistent with our causal model. Although the study was quite limited in scope—having been conducted in only one alternative high school—the results have some general significance. They are consistent with existing theory and also with prior research conducted in very different and much more favorable social contexts. The set of ideas underlying the research appears to be applicable to students in the difficult, harsh environment of an inner-city school as well as to students in well-supported, upper-middle-class elementary and high schools.
Nonexperimental field research may be exploratory research, testing of a causal model, or some combination of both. Exploratory research is directed at describing the relations and developing the set of ideas that underlie a causal model. Typically it is inappropriate to test a causal model with the data collected to stimulate its development. Researchers are notoriously ingenious in developing ex post facto explanations of data they have obtained, no matter how their studies have turned out. A priori explanations are much more credible. This is why nonexploratory research has to be well grounded in prior theory and research if it is to be designed to clearly bear on the general ideas embedded in the causal model. However, even if a study is mainly nonexploratory, exploratory data may be collected so as to refine one’s model for future studies.
Survey research is widely used in market research, pre-election polling, opinion research, research on the occurrence of crime, and the collection of economic data on unemployment, inflation, housing sales, and so on. A well-developed methodology exists concerning sampling, questionnaire construction, interviewing, and statistical analysis. Unfortunately, little survey research has taken place in the field of conflict resolution. Some of the questions that could be answered by survey research have been discussed earlier, under the heading of consumer research. It is, of course, important to know about the potential (as well as existing) consumers of CRIs. Similarly, it is important to know about current CR practitioners: their demographics, their qualifications to practice, the models and frameworks they employ, how long they have practiced, the nature of their clientele, the goals of their work, and their estimation of the degree of success.
Experience surveys are a special kind of survey, involving intensive in-depth interviews with a sample of people, individually or in small focus groups, who are considered to be experts in their field. The purpose of such surveys may be to obtain insight into the important questions needing research through the experts’ identification of gaps in knowledge or through the opposing views among the experts on a particular topic. In addition, interviewing experts prior to embarking on a research study generally improves the researcher’s practical knowledge of the context within which her research is conducted and applied, and thus helps her avoid the minefields and blunders into which naiveté may lead her.
More important, experts have a fund of knowledge, based on their deep immersion in the field, that may suggest useful, practical answers to questions that would be difficult or infeasible to answer through other forms of research. Many of the questions mentioned earlier under the heading of field research are of this nature. Of course, one’s confidence in the answers of the experts is eventually affected by how much they agree or disagree.
There are several steps in an experience survey. The first is to identify the type of expert to survey. For example, with respect to CRIs in schools, one might want to survey practitioners (the trainers of trainees), teachers who have been trained, students, or administrators of schools in which CRIs have occurred. The second step is to contact several experts of the type you wish to interview and have them nominate other experts, who in turn nominate other experts. After several rounds of such nominations, a group of nominees usually emerges as being widely viewed as experts. The third step is to develop an interview schedule. This typically entails formulating a preliminary one that is tried out and modified as a result of interviews with a half-dozen or so of the experts individually and also as a group. The revised schedule is formulated so as to ask all of the questions one wants to have answered by the experts, while leaving the expert the opportunity to raise issues and answer the questions in a way that was not anticipated by the researcher.
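The nomination step is essentially a snowball procedure: seed experts nominate others, newly contacted experts nominate in turn, and the process stops when no new names appear, leaving those nominated by several peers as the consensus group. The sketch below is a toy illustration of this logic; the names and nomination lists are entirely invented, and in practice the bookkeeping would simply be done by hand or in a spreadsheet:

```python
# A toy sketch of the snowball nomination step in an experience survey.
# All names and nominations below are hypothetical.
from collections import Counter

# Hypothetical nomination lists: who each contacted person nominates.
nominations = {
    "seed_a": ["lee", "ortiz", "chen"],
    "seed_b": ["ortiz", "chen", "park"],
    "lee":    ["ortiz", "park"],
    "ortiz":  ["chen", "lee"],
    "chen":   ["ortiz", "lee"],
    "park":   ["ortiz", "chen"],
}

def snowball(seeds, nominations, threshold=2, max_rounds=5):
    """Return the set of people nominated by at least `threshold` others."""
    contacted, counts = set(seeds), Counter()
    frontier = list(seeds)
    for _ in range(max_rounds):
        next_frontier = []
        for person in frontier:
            for nominee in nominations.get(person, []):
                counts[nominee] += 1
                if nominee not in contacted:
                    contacted.add(nominee)
                    next_frontier.append(nominee)
        if not next_frontier:   # no new names: the group has converged
            break
        frontier = next_frontier
    return {name for name, c in counts.items() if c >= threshold}

experts = snowball(["seed_a", "seed_b"], nominations)
print(sorted(experts))
```

In this invented example the nominations converge after two rounds on a stable group of four widely nominated experts, mirroring the emergence of consensus described above.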
Many years ago, Deutsch and Collins (1951) conducted an experience survey of public housing officials prior to conducting a study of interracial housing. The objective was to identify the important issues that could be the focus of a future study. It led to a study of the effects of the occupancy patterns: whether the white and black tenants were housed in racially integrated or racially segregated buildings in a given housing project. In addition, the survey created a valuable handbook of the various other factors that, in the officials’ experiences, affected race relations in public housing. It was a useful guide to anyone seeking to improve race relations in public housing projects.
Although it is possible for the experts to be wrong—to have commonly held, mistaken, implicit assumptions—their articulated views are an important starting point as either constructive criticism or a guide to informed practice.
Not only can the conflict resolution field learn from its experienced practitioners, it can also learn from the work done in other closely related areas. Many of the issues involved in CRIs have been addressed in other areas: transfer of knowledge and skills is of considerable concern to learning theorists and the field of education generally; communication skills have been the focus of much research in the fields of language and communication, as well as social psychology; anger, aggression, and violence have been studied extensively by various specialties in psychology and psychiatry; and there is an extensive literature related to cooperation and competition. Similarly, creative problem solving and decision making have been the focus of much theoretical and applied activity. Terms such as attitude change, social change, culture change, psychodynamics, group dynamics, ethnocentrism, resistance, perspective taking, and the like are common to CRIs and older areas. Although the field of conflict resolution is relatively young, it has roots in many well-established areas and can learn much from the prior work in these areas. The purpose of this Handbook is, of course, to provide knowledge of many of these relevant areas to those interested in conflict resolution.
As an educational and social innovation, CRIs in the form of training, workshops, and intergroup encounters are also relatively young. There is, however, a vast literature on innovation in education and the factors affecting success or failure in institutionalizing an innovation in schools. In particular, cooperative learning, which is conceptually closely related to CR training, has accumulated a considerable body of experience that might help CR practitioners understand what leads to success or failure in institutionalizing a school program of CR training.
In 1995, Deutsch wrote, “There is an appalling lack of research on the various aspects of training in the field of conflict resolution” (p. 128). The situation has been improving since then. For example, there is now much evidence from school systems of the positive effects of conflict resolution training on the students who were trained. Most of the evidence is based on evaluations by the students, teachers, parents, and administrators. In Lim and Deutsch’s international study (1997), almost all institutions surveyed reported positive evaluations by each of the populations filling out questionnaires. Similar results are reported in evaluations made for school programs in Minnesota, Ohio, Nevada, Chicago, New York City, New Mexico, Florida, Arizona, Texas, and California (see Bodine and Crawford, 1998; Johnson and Johnson, 1995, 1996; Lam, 1989; Flannery et al., 2003; Stevahn, Johnson, Johnson, and Schultz, 2002).
While evaluation research on CRIs began primarily with studies of conflict resolution training, in the last fifteen years it has expanded to include the development of new tools and methodologies, as well as research on a range of initiatives, including interactive conflict resolution workshops involving politically influential parties from both sides in international conflicts (see d’Estree et al., 2001; Fisher, 1997; Kelman, 1995, 1998), interethnic encounter groups (see Abu-Nimer, 1999, 2004; Maoz, 2004, 2005; Bekerman and McGlynn, 2007), and peace-building activities (see Lederach, 1997; Zartman, 2007), to name a few. In this section, we offer a brief overview of some of the methodologies and instruments developed and research conducted over the past few years. We begin with an example of an instrument created by d’Estree et al. (2001) to assess the short-, medium-, and long-term impacts of interactive conflict resolution and other similar initiatives.
D’Estree et al. (2001) created a framework, grounded in theory and practice, designed to be used as a tool for evaluating CRIs. While the framework was developed to address interactive problem-solving workshops (see Kelman, 1995, 1998; Fisher, 1997), it can be modified to address the particular goals of other types of CRIs as well.
The framework has four categories, and each category contains a set of criteria for assessing CRIs. The first category, changes in thinking, includes criteria regarding various types of new knowledge that participants may gain from an involvement in CRIs, such as the degree to which participants are able to attain deeper understanding of conflicts, expand their perspective of others, frame problems and issues productively, problem-solve, and communicate effectively. The second category, changes in relations, includes various indicators that the relationship between the parties in conflict has changed, such as the extent to which parties are better able to engage in empathetic behavior, validate and reconceptualize their identities, and build and maintain trust with the other side. The third category, foundations for transfer, includes criteria for assessing how well a CRI establishes a platform for transferring the learning to participants’ home communities once the CRI has ended. The criteria in this category include the extent to which participants have created artifacts (e.g., documents describing agreements, plans for future negotiations, joint statements) and put in place structures for implementing new ideas, and the extent to which the CRI has helped create new leadership. The fourth category, foundations for outcome or implementation, includes criteria that assess the extent to which the CRI has contributed to medium- and long-term achievements between the parties. Such criteria include the degree to which relationship networks have been created, reforms in political structures have occurred, new political input and processes have been created, and increased capacity for jointly facing future challenges can be demonstrated. It is important to note that the categories and accompanying criteria are interrelated, not mutually exclusive, and are not meant to be used in a linear fashion.
The framework also includes a matrix that differentiates between temporal phases of impact and societal levels of intervention. The temporal phases of impact are the promotion phase, in which a CRI attempts to promote or catalyze certain effects (assessed during the CRI); the application phase, in which attempts are made to apply or implement the effects of the CRI in the parties’ home environments (assessed in the short term after the CRI takes place); and the sustainability phase, in which the medium- and long-term effects of the CRI are assessed. The societal levels of intervention enable evaluators to distinguish between effects that occur at the individual (micro) level, the community (meso) level, in which the transfer of effects from the individual to the societal level often takes place, and the societal (macro) level. D’Estree et al. (2001) suggest using a variety of methods to collect data along the dimensions of their proposed framework, including interviews, surveys, observations, content analysis, and discourse analysis.
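By way of illustration, the phase-by-level matrix might be represented as a simple data structure for organizing coded observations. This is our own minimal sketch; the function and variable names below are assumptions for the example, not part of d’Estree et al.’s framework:

```python
# Sketch: each cell of the evaluation matrix pairs a temporal phase of
# impact with a societal level of intervention, as described above.

PHASES = ["promotion", "application", "sustainability"]  # during, short term, long term
LEVELS = ["micro", "meso", "macro"]  # individual, community, societal

# Each cell holds the observations coded for that phase/level combination.
matrix = {(phase, level): [] for phase in PHASES for level in LEVELS}

def record_observation(phase: str, level: str, note: str) -> None:
    """File an interview excerpt or observation in the appropriate cell."""
    if phase not in PHASES or level not in LEVELS:
        raise ValueError(f"unknown phase/level: {phase}, {level}")
    matrix[(phase, level)].append(note)

# Invented example entries:
record_observation("application", "meso", "joint statement shared with community leaders")
record_observation("promotion", "micro", "participant reframes the conflict in an interview")

# Counting observations per cell shows where evidence is thin.
coverage = {cell: len(notes) for cell, notes in matrix.items()}
```

A tabulation like `coverage` makes visible, for instance, that long-term macro-level effects are the hardest cells to fill with evidence.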
Another methodology that has been developed to evaluate a wide range of CRIs is called action evaluation research (Ross, 2001; Rothman, 1997, 2005; Rothman and Friedman, 2005; Rothman and Land, 2004; Rothman and Dosik, 2011). Action evaluation research refers to a process of creating alignment and clarification about the goals of a CRI with a variety of stakeholders as a way of monitoring and assessing the successful implementation of a CRI. The action evaluation process centers on three main sets of questions: (1) What long- and short-term outcome goals do various stakeholders have for this initiative? (2) Why do the stakeholders care about the goals? What motivations drive them? For trainers or developers of the initiative, what are the theories and assumptions that guide their practice? (3) How will the goals be most effectively met? In other words, what processes should be used to meet the stated goals?
These questions form the baseline, formative, and summative stages of the research. At the baseline stage, the action evaluator engages project members in a cooperative goal-setting process. He or she collects data from all members using online surveys and interviews and then feeds back the data to the group with the purpose of creating a baseline list of goals that all stakeholders can use to monitor and evaluate the success of the CRI over time.
As the CRI is implemented, the action evaluation process enters the formative stage in which participants reflect on the action that has been taken so far, refine their goals as needed, and identify obstacles that need to be overcome in order to achieve the goals. The formative stage is an ongoing process of refinement and learning rather than a discrete, one-time process. The methods used at the formative stage include an online project log in which members can communicate with one another about important events, problems, and ideas; a shared journal in which participants communicate directly with the action evaluator about ideas and concerns; critical incident stories in which participants enter particularly positive or challenging events into a project database; and interviews conducted with participants. Once again, the action evaluator feeds back the collected data to the group members and works with them to continue clarifying the goals of the initiative, monitoring progress toward the goals, and directing future work. A progress report is generated to compare the results thus far with the baseline-stage goals. The report addresses questions such as, Toward what goals has observable progress been made? What new goals have emerged over time? Where have problems and obstacles occurred? The action evaluator helps participants assess the obstacles and make changes to address them as needed.
The summative stage occurs as a CRI reaches its conclusion or another natural point at which it makes sense to more formally evaluate the results of the CRI. At this stage, participants use the goals created at the baseline and formative stages to establish criteria for retrospective assessment of the CRI. As participants review their goals and examine whether they have reached them, they identify what worked well and what they would do differently to improve other similar CRIs in the future.
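The three stages can be sketched as a simple what/why/how record kept per goal. This is a toy illustration of the bookkeeping implied above, not Rothman’s actual instruments; all field names and example content are our own:

```python
# Sketch: one record per stakeholder goal, carried through the baseline,
# formative, and summative stages of action evaluation.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Goal:
    what: str  # the outcome goal (question 1)
    why: str   # the motivation behind it (question 2)
    how: str   # the process for meeting it (question 3)
    progress_notes: list = field(default_factory=list)  # formative refinements
    achieved: Optional[bool] = None  # settled only at the summative stage

# Baseline stage: goals agreed on by all stakeholders.
baseline_goals = [
    Goal(what="students mediate their own disputes",
         why="reduce suspensions and build lifelong skills",
         how="train and supervise a cadre of peer mediators"),
]

# Formative stage: ongoing refinement as obstacles surface.
baseline_goals[0].progress_notes.append("first cohort trained; referrals still low")

# Summative stage: each goal is judged against the baseline criteria.
baseline_goals[0].achieved = True
```

The point of the structure is that the summative judgment is made against the group’s own baseline and formative record rather than against criteria imposed from outside.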
We now look at several research studies conducted to evaluate a variety of CRIs in different types of environments.
The Comprehensive Peer Mediation Evaluation Project (CPMEP), conducted by Jones and her colleagues, involved twenty-seven schools with a student population of about twenty-six thousand, a teacher population of approximately fifteen hundred, and a staff population of about seventeen hundred (Jones, 1997). They employed a three-by-three design: three levels of schools (elementary, middle, and high school); each level of school split into three possible conditions (peer mediation only, which was called a “cadre program”; peer mediation integrated with a schoolwide intervention, which was called a “whole school program”; or no training at all, designated as the control group. The training and research occurred over a two-year period.
The following draws on the report’s summary of general conclusions:
It is important to recognize that not only was this study well designed from a research point of view, but also the conflict resolution training was well designed and systematic. The trainings for the peer mediation only and peer mediation plus whole school conditions are outlined here.
The Negotiation Evaluation Survey (NES) is a time-delayed, multisource feedback approach to assessment and development of collaborative negotiation training and its effects on individuals and groups (Coleman and Lim, 2001). This approach uses a modified MACBE model (motivation, affect, cognition, behavior, environment) (Pruitt and Olczak, 1995) to assess change at the individual level in terms of conflict-related cognitions, attitudes toward the use of cooperative and competitive strategies, affect and behaviors, and at the group level in terms of conflict outcomes and work climate. In order to correct for self-report bias inherent in many evaluation tools, the NES was designed as a multisource feedback (MSF) tool, often referred to as a 360-degree feedback instrument (Church and Bracken, 1997). The MSF process elicits perceptions of a target person’s behavior from a variety of sources (in this study, from questionnaires filled out by self, a close friend, a supervisor, and a subordinate).
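As a hedged illustration of the multisource logic (not the actual NES items, which we do not reproduce), one way to aggregate the four sources and surface self-report bias is shown below; the 1-to-5 scale and example ratings are assumptions for the example:

```python
# Sketch: aggregating multisource feedback (MSF) ratings on a single item
# and computing the self-other gap that flags potential self-report bias.

from statistics import mean

SOURCES = ["self", "friend", "supervisor", "subordinate"]

def msf_summary(ratings: dict) -> dict:
    """Summarize one item's ratings: overall mean, others-only mean,
    and the gap between self-rating and the others' mean."""
    others = [ratings[s] for s in SOURCES if s != "self"]
    return {
        "overall": mean(ratings[s] for s in SOURCES),
        "others": mean(others),
        "self_other_gap": ratings["self"] - mean(others),
    }

# Invented ratings on a 1-5 scale for one collaborative-behavior item:
summary = msf_summary({"self": 4.5, "friend": 3.5,
                       "supervisor": 3.0, "subordinate": 3.5})
# A large positive gap suggests the trainee rates their own collaborative
# behavior more favorably than observers do.
```

Comparing the self rating against the pooled observer ratings, rather than relying on self-report alone, is the core of the bias correction the MSF design aims at.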
The NES was used to evaluate the effects of the Coleman Raider collaborative negotiation model, which was used in the twenty-hour Basic Practicum in Conflict Resolution course at Teachers College, Columbia University (see chapter 35). In addition to using the modified MACBE model as an organizing construct for the survey, the authors identified the elements of the Coleman Raider model and translated the elements into specific training objectives and then into measurable constructs that form the basis of the actual items used in the NES.
The study used a Solomon four-group experimental design with two treatment groups and two control groups. Both treatment groups received the training and a posttest survey, but only one took the pretest survey. Neither control group received the training, and both took the posttest survey, but only one took the pretest. No significant effects were found in any of the four groups from taking the pretest, and no interactions between pretesting and training were found.
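The logic of the Solomon four-group design can be sketched as follows; the group labels and posttest scores are invented for illustration and are not the study’s data:

```python
# Sketch: the four cells of a Solomon four-group design and the two
# contrasts it supports, isolating the training effect from any effect
# of merely taking the pretest.

groups = {
    "treatment_pretested":   {"pretest": True,  "training": True},
    "treatment_unpretested": {"pretest": False, "training": True},
    "control_pretested":     {"pretest": True,  "training": False},
    "control_unpretested":   {"pretest": False, "training": False},
}

def training_effect(posttest_means: dict) -> float:
    """Mean posttest difference between trained and untrained groups,
    collapsing over pretest status."""
    trained = [m for g, m in posttest_means.items() if groups[g]["training"]]
    untrained = [m for g, m in posttest_means.items() if not groups[g]["training"]]
    return sum(trained) / len(trained) - sum(untrained) / len(untrained)

def pretest_effect(posttest_means: dict) -> float:
    """Mean posttest difference between pretested and unpretested groups;
    a near-zero value means pretesting alone did not change scores."""
    pre = [m for g, m in posttest_means.items() if groups[g]["pretest"]]
    no_pre = [m for g, m in posttest_means.items() if not groups[g]["pretest"]]
    return sum(pre) / len(pre) - sum(no_pre) / len(no_pre)

# Invented posttest means on an arbitrary scale:
scores = {"treatment_pretested": 4.1, "treatment_unpretested": 4.0,
          "control_pretested": 3.1, "control_unpretested": 3.0}
```

In the pattern above, the trained groups outscore the controls by about one point regardless of pretesting, which is the signature of a training effect uncontaminated by pretest sensitization.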
Training was found to have a significant effect on participants’ collaborative negotiation behaviors, thoughts, feelings, attitudes, negotiation outcomes, and work climates. For example, as compared to participants who did not receive the training, those who received the training were found to have
Regarding the multisource feedback approach, the study found that
In an extension of this evaluation research study, Lim (2004) conducted a study in which she had the participants engage in a two-party negotiation simulation three weeks after taking the posttest. Participants’ behavior and attitudes during the simulation were measured by blind raters (who coded tape-recorded verbal exchanges) as well as by participants’ self-reports regarding their own and their negotiating partner’s behaviors and attitudes during the negotiation simulation. She found that compared to those who did not receive training, participants who received the training established a more cooperative climate in the simulated negotiation, did a better job probing for (as opposed to ignoring) the other party’s needs, demonstrated better active listening skills, and agreed to outcomes that better addressed both parties’ interests. Lim’s study also replicated the Coleman and Lim (2001) study, and its findings were similar to those of the original study.
In extensive field research, Maoz (2004, 2005, 2011) used a process evaluation approach to assess the extent to which intergroup encounter CRIs promote relationships, behaviors, and interactions that fulfill the standards of social justice, equality, and fairness that they strive to achieve within the larger conflict setting. Intergroup encounter interventions are programs that implement the contact hypothesis (Allport, 1954), a theory that explains how relations between highly contentious groups may be improved through facilitated meetings between the groups where they engage in cooperative tasks together. Much of the research done on these interventions focused on the quality of the outcomes rather than on the process of the encounter itself, and therefore it was difficult to ascertain what types of encounters led to improvements in group relations and which types did not (d’Estree et al., 2001; Pettigrew, 1998). Maoz addressed these limitations by using process evaluation, which identifies the key characteristics of an intervention (those theorized to improve intergroup relations) and then assesses the attitudinal and behavioral outcomes that result from these characteristics.
Maoz’s research examined intergroup encounter programs supported by the Abraham Fund for Jewish-Arab Coexistence, which brought together Jews and Palestinians living in Israel. Because theory suggested that these CRIs may be successful when they establish for participants a cooperative orientation across lines of identity (coexistence model) and help participants relate to each other as equals (symmetry), these were the process characteristics explored by the study. The level of symmetry between the Jewish and Palestinian participants, as well as between the facilitators, in all forty-seven CRIs was assessed through direct observations of each CRI by the evaluation research team members, who recorded interactions and verbal exchanges using a detailed coding sheet and instruction booklet. Each CRI was rated on a scale ranging from 1 (maximum dominance of one side, in this case Jewish) to 9 (maximum dominance of the other side, in this case Palestinian). A rating score of 5 reflected symmetrical participation by Jews and Palestinians.
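The 1-to-9 coding scale lends itself to a simple illustration; the helper functions and example codings below are our own, not part of Maoz’s instrument:

```python
# Sketch: interpreting ratings on the 1-9 dominance scale described above,
# where 5 reflects symmetrical participation.

def symmetry_deviation(rating: int) -> int:
    """Distance of a coded rating from the symmetric midpoint (5).
    0 means balanced participation; 4 means maximum dominance by one side."""
    if not 1 <= rating <= 9:
        raise ValueError("rating must be on the 1-9 scale")
    return abs(rating - 5)

def dominant_side(rating: int) -> str:
    """On the scale as described, low scores indicate Jewish dominance
    and high scores Palestinian dominance."""
    if rating < 5:
        return "Jewish participants dominate"
    if rating > 5:
        return "Palestinian participants dominate"
    return "symmetrical participation"

# Invented example codings for four observed sessions:
ratings = [3, 5, 7, 5]
mean_deviation = sum(symmetry_deviation(r) for r in ratings) / len(ratings)
# A mean deviation of 0 would indicate perfectly balanced participation.
```

Summarizing sessions by their deviation from the midpoint, rather than by the raw rating, keeps the direction of dominance separate from its magnitude.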
The encounter programs were classified into three categories: (1) the coexistence model, which emphasizes commonalities among the two groups and encourages mutual understanding and cooperation; (2) the confrontation model, which focuses on the conflict and power relations between the groups and raises awareness of inequalities; and (3) the mixed model, which incorporates elements from both the coexistence and confrontation models in its interventions. The programs were identified as following one of these models through interviews with the directors and program coordinators, as well as examinations of their organizational materials and documents. The process evaluation of these programs found that
The results from these studies allow researchers to understand the processes that are critical to the success of intergroup encounter programs and suggest ways in which these programs may be studied further to investigate their efficacy in promoting constructive conflict resolution between groups in even the most difficult situations.
This chapter was initially stimulated by the paucity of research on conflict resolution initiatives. While evaluation research on CRIs has expanded over the past few years, the continued lack of systematic research is due to a number of factors, including lack of adequate funding to support the kind of research that would be valuable to conduct. Another important factor is the lack of appreciation of the large range of worthwhile questions that can be addressed by research, as well as the research strategies that are available to address them. In response to this latter factor, we have sketched out a framework for thinking about the research possibilities related to CRIs and have provided examples of innovative methodologies that have been developed and projects that have been conducted in this realm.
Abu-Nimer, M. Dialogue, Conflict Resolution and Change: Arab-Jewish Encounters in Israel . New York: State University of New York Press, 1999.
Abu-Nimer, M. “Education for Coexistence and Arab-Jewish Encounters in Israel: Potential and Challenges.” Journal of Social Issues , 2004, 60 (2), 405–422.
Abu-Nimer, M. “Building Peace in the Pursuit of Social Justice.” In M. D. Palmer and S. M. Burgess (eds.), The Wiley-Blackwell Companion to Religion and Social Justice (pp. 620–632). Hoboken, NJ: Wiley, 2012.
Allport, G. W. The Nature of Prejudice . Reading, MA: Addison-Wesley, 1954.
Angelica, M. P. “Conflict Resolution Training in Cyprus: An Assessment.” Retrieved from www.cyprus-conflict.net/angelica%20rpt%20%201.html
Bekerman, Z., and McGlynn, C. Addressing Ethnic Conflict through Peace Education: International Perspectives . London: Palgrave Macmillan, 2007.
Bodine, R. J., and Crawford, D. K. The Handbook of Conflict Resolution Education: A Guide to Building Quality Programs in Schools . San Francisco: Jossey-Bass, 1998.
Church, A. H., and Bracken, D. W. “Advancing the State of the Art of 360-Degree Feedback: Guest Editors’ Comments on the Research and Practice of Multi-Rater Assessment Methods.” Group and Organization Management , 1997, 22 (2), 149–161.
Church, C., and Shouldice, J. “The Evaluation of Conflict Resolution Interventions: Framing the State of Play.” Derry/Londonderry, Northern Ireland: International Conflict Research (INCORE), 2002. Retrieved from http://www.incore.nlst.ac.uk/policy/evaluation/eval_meeting.html
Coleman, P. T., and Lim, Y.Y.J. “A Systematic Approach to Evaluating the Effects of Collaborative Negotiation Training on Individuals and Groups.” Negotiation Journal , 2001, 17 (4), 363–392.
Culbertson, H. “The Evaluation of Peacebuilding Initiatives.” In D. Philpott and G. Powers (eds.), Strategies of Peace: Transforming Conflict in a Violent World (pp. 65–90). Oxford University Press, 2010.
D’Estree, T. P., Fast, L. A., Weiss, J. N., and Jakobsen, M. S. “Changing the Debate about ‘Success’ in Conflict Resolution Efforts.” Negotiation Journal , 2001, 17 (2), 101–113.
Deutsch, M. “A Theory of Cooperation and Competition.” Human Relations , 1949, 2 , 129–151.
Deutsch, M. The Resolution of Conflict: Constructive and Destructive Processes . New Haven, CT: Yale University Press, 1973.
Deutsch, M. “The Effects of Training in Cooperative Learning and Conflict Resolution in an Alternative High School.” Cooperative Learning , 1993, 13 , 2–5.
Deutsch, M. “The Constructive Management of Conflict: Developing the Knowledge and Crafting the Practice.” In B. B. Bunker and J. Z. Rubin (eds.), Conflict, Cooperation, and Justice: Essays Inspired by the Work of Morton Deutsch . San Francisco: Jossey-Bass, 1995.
Deutsch, M., and Collins, M. E. Interracial Housing: A Study of a Social Experiment . Minneapolis: University of Minnesota Press, 1951.
Fisher, R. J. Interactive Conflict Resolution . Syracuse, NY: Syracuse University Press, 1997.
Flannery, D. J., Vazsonyi, A. T., Liau, A. K., Guo, S., Powell, K. E., Atha, H., Vesterdal, W., and Embry, D. “Initial Behavior Outcomes for the PeaceBuilders Universal School-Based Violence Prevention Program.” Developmental Psychology , 2003, 39 (2), 292–307.
Hunt, C., and Hughes, B. “Assessing Police Peacekeeping: Systemisation Not Serendipity.” Journal of International Peacekeeping , 2010, 14 (3–4), 403–424.
Johnson, D. W., and Johnson, R. T. Cooperation and Competition: Theory and Research . Edina, MN: Interaction, 1989.
Johnson, D. W., and Johnson, R. T. “Teaching Students to Be Peacemakers: Results of Five Years of Research.” Peace and Conflict: Journal of Peace Psychology , 1995, 1 , 417–438.
Johnson, D. W., and Johnson, R. T. “Conflict Resolution and Peer Mediation Programs in Schools: A Review of the Research.” Review of Educational Research , 1996, 66 , 459–506.
Jones, T. S. “Comprehensive Peer Mediation Evaluation Project: Preliminary Final Report.” Report submitted to the William and Flora Hewlett Foundation and the Surdna Foundation, 1997.
Kelman, H. C. “Contributions of an Unofficial Conflict Resolution Effort to the Israeli-Palestinian Breakthrough.” Negotiation Journal , 1995, 11 (1), 19–28.
Kelman, H. C. “Social-Psychological Contributions to Peacemaking and Peacebuilding in the Middle East.” Applied Psychology: An International Review , 1998, 47 (1), 5–29.
Kelman, H. C. “A One-Country/Two-State Solution to the Israeli-Palestinian Conflict.” Middle East Policy , 2011, 18 (1), 27–41.
Lam, J. The Impact of Conflict Resolution on Schools: A Review and Synthesis of the Evidence (2nd ed.). Amherst, MA: National Association for Mediation Education, 1989.
Lederach, J. P. Building Peace: Sustainable Reconciliation in Divided Societies . Washington, DC: US Institute of Peace Press, 1997.
Lewin, K. “Action Research and Minority Problems.” Journal of Social Issues , 1946, 2 , 34–46.
Lim, Y.Y.J. Unpublished doctoral dissertation, Teachers College, Columbia University, 2004.
Lim, Y.Y.J., and Deutsch, M. International Examples of School-Based Programs Involving Peaceful Conflict Resolution and Mediation Oriented to Overcoming Community Violence: A Report to UNESCO . New York: International Center for Cooperation and Conflict Resolution, Teachers College, Columbia University, 1997.
Maoz, I. “Coexistence Is in the Eye of the Beholder: Evaluating Intergroup Encounter Interventions Between Jews and Arabs in Israel.” Journal of Social Issues , 2004, 60 (2), 437–452.
Maoz, I. “Evaluating the Communication between Groups in Dispute: Equality in Contact Interventions between Jews and Arabs in Israel.” Negotiation Journal , 2005, 21 (1), 131–146.
Maoz, I. “Does Contact Work in Protracted Asymmetrical Conflict? Appraising 20 Years of Reconciliation-Aimed Encounters Between Israeli Jews and Palestinians.” Journal of Peace Research , 2011, 48 (1), 115–125.
Pettigrew, T. F. “Intergroup Contact Theory.” Annual Review of Psychology , 1998, 49 (1), 65–85.
Pruitt, D., and Olczak, P. “Beyond Hope: Approaches to Resolving Seemingly Intractable Conflict.” In B. B. Bunker and J. Z. Rubin (eds.), Cooperation, Conflict and Justice: Essays Inspired by the Work of Morton Deutsch . San Francisco: Jossey-Bass, 1995.
Ross, M. H. “Action Evaluation in the Theory and Practice of Conflict Resolution.” Peace and Conflict Studies , 2001, 8 , 1.
Rothman, J. “Action Evaluation and Conflict Resolution Training: Theory, Method and Case Study.” International Negotiation: A Journal of Theory and Practice , 1997, 2 (3), 451–470.
Rothman, J. “Articulating Goals and Monitoring Progress in a Cyprus Conflict Resolution Training Workshop.” In M. Ross and J. Rothman (eds.), Theory and Practice in Ethnic Conflict Management: Theorizing Success and Failure . London: Macmillan, 1999.
Rothman, J. “Action Evaluation in Theory and Practice.” Beyond Intractability . 2005. Retrieved from www.beyondintractability.org/m/action_evaluation.jsp
Rothman, J., and Dosik, A. A. “Action Evaluation: A New Method of Goal Setting, Planning and Defining Success for Community Development Initiatives.” 2011. Retrieved from http://www.ariagroup.com/wp-content/uploads/2009/10/Action-Evaluation-A-New-Method.pdf
Rothman, J., and Friedman, V. “Action Evaluation: Helping to Define, Assess and Achieve Organizational Goals.” Action Evaluation Research Institute. 2005. Retrieved from www.aepro.org/inprint/papers/aedayton.html
Rothman, J., and Land, R. “The Cincinnati Police-Community Relations Collaborative.” Criminal Justice , 2004, 18 (4), 35–42.
Stevahn, L., Johnson, D. W., Johnson, R. T., and Schultz, R. “Effects of Conflict Resolution Training Integrated into a High School Social Studies Curriculum.” Journal of Social Psychology , 2002, 142 (3), 305–331.
Zartman, I. W. (ed.). Peacemaking and International Conflict: Methods and Techniques . Washington, DC: US Institute of Peace Press, 2007.
Zhang, Q. “An Intervention Model of Constructive Conflict Resolution and Cooperative Learning.” Journal of Social Issues , 1994, 50 , 99–116.