The peace and conflict-resolution community is working toward improving the effectiveness of its programs. As a tool for determining the effectiveness of conflict-resolution interventions, evaluation is an important means to achieving this end. Historically, evaluations of conflict-resolution interventions have been overwhelmingly funder-driven, motivated by the interest of funders in ensuring the accountability of interventions. More recently, however, other stakeholders have begun to look to evaluation to further their interests. At the same time, conflict-resolution evaluation has come to serve a variety of purposes, from developing best practice to empowering stakeholders.
Although evaluation is an established practice, the field of conflict resolution is only just coming to terms with evaluation as a discipline. Evaluation models and theories have been developed to meet the specific needs of related fields, such as humanitarian assistance and development. However, the field of peace and conflict resolution lags behind in this respect, primarily because the conflict context poses obstacles and challenges to the practice of evaluation. The practice of conflict-resolution evaluation is therefore characterized by methodological anarchy.
The conflict-resolution community must come to terms with the practice of evaluation. Otherwise, evaluation models and theories derived from other fields will simply be applied wholesale to the evaluation of conflict-resolution interventions. The conflict context renders such imported evaluation models and theories profoundly inappropriate and ineffective. To cope with the challenges of evaluation in the context of conflict, the conflict-resolution community must therefore develop a range of appropriate tools.
The Forces Driving Evaluation
Evaluation is a process that demands resources and a driving force behind it. The forces driving conflict-resolution evaluation have proliferated during the past decade, and a variety of evaluation practices have developed in response.
1. Funder-Driven Evaluation
Historically, conflict-resolution interventions have been evaluated because funders have wanted to hold funded interventions accountable for their actions. An evaluation can be designed to assess whether an intervention conforms to funder requirements and manages its resources effectively and legitimately. Funders can then use the findings of such evaluations to compare the relative utility of interventions and prioritize their funding accordingly.
However, many funders now carry out evaluations of interventions in order to render themselves more accountable, both to the general public and to those affected by the interventions they fund. Funders have also begun to use the evaluation of funded interventions to determine whether they are meeting the goals and targets laid out in their own strategic plans. Finally, many funders now perceive evaluation to be an important way for interventions to learn to do their work better. Evaluation is considered to deliver a bigger bang for the funder's buck, by improving both the effectiveness of funded interventions and the funder's own practice and analysis.
2. Practitioner-Driven Evaluation
Stakeholders other than funders have recently become interested in evaluation, as an abundant source of lessons learned and as a means of determining and developing best practice. Practitioners have begun to initiate and drive evaluations, almost always with the goal of improving the intervention's performance. Practitioner-driven evaluations are therefore designed to provide detailed feedback and recommendations to the intervention. These evaluations generally take place during an intervention, so that changes can be made before it is too late.
In contrast, funder-driven evaluations are often cluster evaluations, in which several interventions are evaluated simultaneously in order to assess the contribution of each to the funder's mandate. Funder-driven evaluations are often carried out at the end of an intervention, in order to deliver summative judgement on the intervention's performance.
3. The Drive for Good PR
Evaluation is an important source of validation for individual interventions, the funder community, and the field of conflict resolution as a whole. This poses a dilemma for the evaluator, who may feel that the public relations (PR) motive for an evaluation compromises the integrity of the evaluation. Yet, evaluation is inevitably a political act: it always serves someone and some purpose.
It is important that conflict-resolution evaluation is not entirely subordinated to political or PR purposes. In the conflict context, doing evaluation well matters in pragmatic terms because poor interventions cost lives; moreover, doing evaluation well matters in ethical terms because it helps weed out poor interventions before they exact such a cost. Ultimately, evaluation should be driven by the field's sense of ethical responsibility to those in conflict situations.
Approaches to Conflict-Resolution Evaluation
While the evaluation of conflict-resolution interventions is generally ad hoc, there are a number of innovative approaches to conflict-resolution evaluation. This section briefly describes five such approaches: Participatory Evaluation, Utilization-Focused Evaluation, Impact Evaluation, Action Evaluation, and Macro-Evaluation.
1. Participatory Evaluation
Participatory Evaluation is a 'bottom-up' or 'people-centered' approach to evaluation. It aims to include as many stakeholders in the evaluation process as possible. The capacity of these stakeholders to collect and analyze data, and to generate recommendations for change, is developed during the course of the evaluation. Participatory Evaluation facilitates the management of change and collaborative problem solving by stakeholders. The inclusion of a broad variety of stakeholders provides unique perspectives for reflection and learning. However, Participatory Evaluation is extremely demanding of stakeholders' time and resources.
2. Utilization-Focused Evaluation
The guiding principle of Utilization-Focused Evaluation is that an evaluation should be judged according to its utility. A Utilization-Focused Evaluation identifies a group of 'intended users' who determine the 'intended uses' of the evaluation data. Through active involvement in the evaluation process, the intended users develop the willingness to implement the evaluation's findings. Utilization-Focused Evaluation builds the capacity of stakeholders to think and act evaluatively.
3. Impact Evaluation
Impact Evaluation aims to determine the impact of an intervention. In theory, this is simple; however, the conflict-resolution community has yet to adequately define the term 'impact.' In contrast to both 'outputs' and 'outcomes,' 'impact' implies the longer-term consequences of interventions, which can be difficult to measure. Impact Evaluation must account for both the positive and intended impacts, as well as the negative and unintended impacts of an intervention. It must also determine the extent to which the intervention, or the external environment, has given rise to impacts. Impact Evaluation informs the decisions of policy makers, funders, and practitioners about whether to expand, modify, or eliminate interventions.
4. Action Evaluation
Action Evaluation aims to ensure the success of interventions by encouraging stakeholders to define and monitor success. Action Evaluation begins with a phase of collaborative goal setting, which clarifies the purpose and functions of an intervention. It commits stakeholders to the achievement of shared goals, and the action evaluator facilitates the continuous monitoring and assessment of these goals. The goal-setting phase is repeated throughout the intervention's life cycle, so the shared goals of an intervention can evolve over time. Action Evaluation is therefore especially suited to the volatile conflict context.
5. Macro-Evaluation
Broadly speaking, Macro-Evaluation seeks to determine how grassroots micro-level interventions 'ripple up' to the regional or national level. By one definition, Macro-Evaluation means assessing all policy instruments and interventions that affect the dynamics of conflict, including humanitarian, developmental, and military interventions. Alternatively, Macro-Evaluation is defined as assessing the effects of conflict-resolution interventions alone on the dynamics of conflict. However, any definition of Macro-Evaluation is difficult to put into practice, because it is very hard to determine which intervention caused which effect. In addition, many practitioners will refuse to support the process because of competition between interventions and between fields of work.
Evaluation in the Conflict Context
The legacies of conflict, and the ongoing changes that characterize the conflict context, pose specific challenges to conflict-resolution evaluation. This section examines challenges to five issues fundamental to the practice of evaluation: timing evaluation, tracking change, attributing change, engaging stakeholders, and the ownership of evaluation results.
1. Timing Evaluation
There is broad agreement that evaluation is of greatest use to conflict-resolution interventions when it takes place during the intervention's life cycle. An evaluation may, for instance, show certain aspects of an intervention to be working particularly well or badly. If presented with the results of the evaluation at an early enough stage, practitioners can adapt their strategies accordingly and improve the likelihood of the intervention achieving or exceeding its goals. Evaluating during the course of an intervention facilitates improvement of the intervention's effectiveness, before it is too late to make necessary changes.
In the conflict context, however, it is not enough for evaluation to occur just once during the intervention's life cycle. In most conflict-resolution interventions, there are significant differences between short-, medium-, and long-term impacts. For example, conflict-resolution interventions often seek to restore qualities such as trust, confidence, dignity, and faith. Their success in doing so may become evident only in the long term.
It is also important to re-evaluate interventions in order to track changes in their impacts and to determine their sustainability. A positive impact may evolve into a negative impact over time, and vice versa. Finally, if evaluation is carried out only once during an intervention's life cycle, then the intervention may be held to account for subsequent changes in the conflict context that are beyond its control.
Although evaluating an intervention in the short, medium, and long term seems logical, the current reality in the field is that evaluations are carried out when funders demand it. Most often, this means that evaluations are carried out after interventions have been completed. However, there is evidence to suggest that this situation is slowly changing as funders' interests in evaluation proliferate.
Local customs and contexts must be taken into account when considering the timing of evaluation. The timing must vary according to the intervention, otherwise the evaluation may violate the principle of "do no harm". Nevertheless, intermittent evaluation throughout the life cycle of an intervention may be a necessary aspect of good evaluation practice.
2. Tracking Change
Evaluators must judge the extent to which conflict-resolution interventions achieve their stated goals and objectives. In the conflict context, these goals and objectives are often unclear or inflated, and there is often a discrepancy between an intervention's stated goals and objectives and what it does in practice. Evaluators often have to guess at the 'real' goals and objectives of interventions, or disaggregate them into components that can be evaluated.
There are also few appropriate indicators by which to understand and measure the positive and intended, as well as the negative and unintended, impacts of interventions. By looking at indicators before and after the implementation of an intervention, an evaluator can track change. Useful indicators for conflict-resolution evaluation include: social indicators (e.g., intermarriage between groups), security indicators (e.g., conflict-related deaths), and psychological indicators (e.g., groups' perceptions of one another).
Both quantitative and qualitative indicators are necessary to conflict-resolution evaluation. There is a growing concern that evaluation is being constrained by funders' demands that evaluations use only quantitative indicators and provide only, or primarily, quantitative results. However, even when evaluators are free to use qualitative indicators, evaluation remains problematic.
Qualitative indicators cannot be counted; they must be described, analyzed, and interpreted, for example, through interviews, role-plays, art, and drama. Measuring qualitative indicators is difficult and demands stakeholders' time and resources, and the qualitative impacts of interventions rarely fit into the standard time frame of an evaluation. There is a need to track the longer-term changes (for example, in relationships, attitudes, and behaviors) that are triggered by conflict-resolution interventions.
Throughout the evaluation, evaluators must remain free to redefine and reselect indicators in response to ongoing changes in the conflict context. Standardizing indicators for the practice of conflict-resolution evaluation is therefore a matter of controversy. Nevertheless, it is important that the conflict-resolution community begins to articulate indicators appropriate to its needs. Otherwise, the practice of conflict-resolution evaluation is likely to be limited by inappropriate indicators.
3. Attributing Change
Evaluators must not only track change, but must attribute change to specific interventions. In complex conflicts, it may be difficult to know which actions have brought about which outcomes. The external environment, in which the intervention operates, rather than the intervention itself, may be responsible for changes tracked by the evaluator. Nevertheless, it is the evaluator's job to map connections between interventions and impacts.
In doing so, the evaluator must take into account 'transfer.' Transfer describes the 'ripple' or multiplier effects of interventions beyond their immediate scope of action. These wider and deeper effects of an intervention can be positive or negative, intended or unintended. The evaluator must map the connections between an intervention and its direct impacts, as well as its indirect impacts. The field has yet to develop a range of tools capable of mapping these connections in the complex context of conflict.
Once an evaluation has attributed impacts to a specific intervention, it often concludes with a judgement about the intervention's overall success or failure. Participatory Evaluation and Action Evaluation explicitly take into account all stakeholders' concepts of success and failure. However, most evaluations are funder-driven and therefore judge interventions against the funder's concept of success and failure. This can result in inaccurate judgements, since practitioners are often reluctant to admit 'failure' for fear of losing their funding, and may mislead an external evaluator. Funders must therefore give stakeholders the freedom to acknowledge failure.
All stakeholders, including funders, need to move away from the absolute concepts of success and failure, and instead recognize degrees of success and failure. As long as funder-driven evaluations deliver judgements about interventions' overall success or failure that are likely to jeopardize the future of interventions, evaluations will fail to gauge the true value of interventions. Resources will subsequently be less effectively deployed, and learning opportunities will be lost. Conflict-resolution interventions should perhaps be evaluated according to the concept of 'good enough' rather than the absolute concepts of success and failure.
4. Engaging Stakeholders
Engaging stakeholders in evaluation is difficult, because most evaluations are carried out by external evaluators. Although external evaluators are generally believed to safeguard the objectivity and transparency of evaluations, an evaluator's professional livelihood depends heavily on the funder community. As a result, evaluations often serve the ends and meet the expectations of funders, which makes it difficult to persuade other stakeholders to commit their scarce time and resources to evaluation. The evaluator must strike a balance between providing the intervention with useful feedback and operating within the time constraints and terms of reference set by the funder.
In contrast to the fields of aid, development, and humanitarian assistance, there is no specific training available for evaluators working in conflict resolution. It has been suggested that a code of ethics should be developed for conflict-resolution evaluators. Such a code would help evaluators position themselves between the competing demands of funders and other stakeholders. It would also render evaluators more accountable to all stakeholders, thereby encouraging stakeholders' engagement in evaluation.
Rather than distancing themselves from interventions as objective and neutral observers, evaluators should perhaps play the role of a family doctor, engaging in ongoing and supportive relationships with stakeholders. In the conflict context, it is certainly ethically questionable for an external evaluator to include people in an evaluation without due regard for the participants' subjectivity and the impact of the evaluation on their situation.
In practice, conflict may limit evaluators' access to those who have participated in an intervention. It can be difficult for external evaluators, in the short time they have available, to overcome the deep mistrust locals often feel for "outsiders" in conflict situations. Even if the evaluator builds a trusting relationship with stakeholders, a range of ethical dilemmas may result. For example, the evaluator may become party to prior knowledge of a violent act, through interviewing participants in an intervention. This is likely to be information that the evaluator did not want, and cannot use because it was supplied on a confidential basis.
5. Ownership of Evaluation Results
Evaluations are often designed and conducted without an explicit understanding of who is going to see or make use of the evaluation results. After an evaluation has been completed, information may be drawn from an evaluation and manipulated for political purposes, without the knowledge of the evaluator. Funders may also claim exclusive ownership of funder-driven evaluation data, to the extent that those who participated in an evaluation are unable to see the results. It is therefore not surprising that stakeholders other than funders tend to fear and resist evaluation.
In the conflict context, some feel that it is ethically difficult to justify a funder's acquisition of data if the data will not be shared with others. It is therefore important that members of the conflict-resolution community agree to share evaluation results more widely, although competition within the field naturally militates against this. Encouragingly, two international initiatives have recently been launched to collect lessons learned from conflict-resolution interventions: 'Lessons Learned in Conflict Interventions and Peacebuilding' by the European Platform for Conflict Prevention, and the 'Reflecting on Peace Practice Project' by the Collaborative for Development Action.
Conclusion
This short essay has sought to demystify the practice of peace and conflict-resolution evaluation. To date, evaluations have been primarily funder-driven, and designed to ensure the accountability of funded interventions. However, this paper suggests that evaluation can serve a range of purposes, from developing best practice to ensuring the success of interventions. Evaluation is therefore in the interests of all stakeholders, not only the funder community. Since the pressure to evaluate conflict-resolution interventions will almost certainly increase in the future, it seems wise for the conflict-resolution community to come to terms with the practice of evaluation sooner rather than later.
This does not mean just importing evaluation models and theories from other fields. The conflict context creates challenges to activities fundamental to the practice of evaluation, including timing evaluation, tracking change, attributing change, and engaging stakeholders. A range of tools needs to be developed to cope with these challenges. With the development of tools more appropriate to the evaluation of conflict-resolution interventions, both new and more refined approaches to the practice of conflict-resolution evaluation may result.
The conflict-resolution community is ultimately presented with a 'Humpty Dumpty Problem'; it is extremely difficult, expensive, and risky to put back together societies that have been shattered by conflict. It is crucial that those working in the field improve upon their performance and better deploy their finite resources. As a tool for determining the value of conflict-resolution interventions, evaluation is an important means to achieving this end.
Michael Scriven, Evaluation Thesaurus, Fourth Edition (London, New Delhi: Sage Publications, 1991).
OECD/DAC, Guidance for Evaluating Humanitarian Assistance in Complex Emergencies, available at http://www.oecd.org/development/evaluation/2667294.pdf
Michael Scriven, op. cit.
C. Church and J. Shouldice, The Evaluation of Conflict Resolution Interventions: Framing the State of Play (Letterkenny: INCORE, 2002).
Agneta Johannsen, Measuring Results of Assistance Programs to War-Torn Societies (United Nations Research Institute for Social Development (UNRISD)), available online.
Mary B. Anderson, Do No Harm: How Aid Can Support Peace or War (Boulder, CO: Lynne Rienner Publishers, 1999).
Kenneth Bush, A Measure of Peace: Peace and Conflict Impact Assessment (PCIA) of Development Projects in Conflict Zones, available online.
Mark Hoffman, Peace and Conflict Impact Assessment Methodology: Evolving Art Form or Practical Dead End? (2000), available online.
Mary B. Anderson, Experiences With Impact Assessment: Can We Know What Good We Do? (2000), available online.
Herbert C. Kelman, The Contribution of Non-Governmental Organizations to the Resolution of International Conflicts: An Approach to Evaluation (Massachusetts: Harvard University).
M.H. Ross, "'Good enough' isn't so bad: thinking about success and failure in ethnic conflict management," Peace and Conflict: Journal of Peace Psychology 6 (1), 2000, pp. 27-47.
Marie Smyth and Gillian Robinson, eds., Researching Violently Divided Societies: Ethical and Methodological Issues (Tokyo, New York, Paris: United Nations University Press, 2001).
Insider-Outsider Roles and Relations, The Reflecting on Peace Practice Project (RPP), Issue Paper, 2001, formerly available at http://www.cdainc.com/rpp/publications/issuepapers/rpp-insiders.php (no longer available).
Smyth and Robinson, op. cit.
Use the following to cite this article:
Lewis, Helen. "Evaluation and Assessment of Interventions." Beyond Intractability. Eds. Guy Burgess and Heidi Burgess. Conflict Information Consortium, University of Colorado, Boulder. Posted: September 2004 <http://www.beyondintractability.org/essay/evaluation>.