Impact evaluation

Bias is normally visible in two situations: when the measurement of the outcome under program exposure, or the estimate of what the outcome would have been without program exposure, is higher or lower than the corresponding "true" value (Rossi et al., 2004, p. 267). Unfortunately, not all forms of bias that may compromise impact assessment are obvious (Rossi et al., 2004).
 
The most common form of impact evaluation design is comparing two groups of individuals or other units: an intervention group that receives the program and a control group that does not. The estimate of program effect is then based on the difference between the groups on a suitable outcome measure (Rossi et al., 2004). Random assignment of individuals to the program and control groups allows the assumption of continuing equivalence between the groups. Group comparisons that have not been formed through randomization are known as non-equivalent comparison designs (Rossi et al., 2004).
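
To make the logic concrete, here is a minimal Python sketch of the randomized two-group design (the sample size, effect size, and noise level are illustrative assumptions, not values from Rossi et al., 2004):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

n = 1000                                 # number of individuals (illustrative)
true_effect = 2.0                        # hypothetical "true" program effect

# Pure-chance assignment: exactly half the units receive the program.
treated = rng.permutation(n) < n // 2

baseline = rng.normal(50, 10, size=n)    # outcome-relevant characteristics
noise = rng.normal(0, 5, size=n)         # variation unrelated to the program
outcome = baseline + true_effect * treated + noise

# With randomization the groups are equivalent in expectation, so the
# simple difference in means estimates the program effect.
estimate = outcome[treated].mean() - outcome[~treated].mean()
print(f"estimated effect: {estimate:.2f} (true effect: {true_effect})")
</syntaxhighlight>

Because assignment is determined by pure chance, the groups are equivalent in expectation, and the difference in means recovers the true effect up to sampling error.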
 
===Selection bias===
 
When the assumption of equivalence does not hold, any difference in outcomes between the groups that would have occurred regardless of the program introduces bias into the estimate of program effects. This is known as selection bias (Rossi et al., 2004). It threatens the validity of the program effect estimate in any impact assessment using a non-equivalent group comparison design, and it appears whenever assignment to groups is determined not by pure chance but by some process, responsible for influences that are not fully known, that selects which individuals end up in which group (Rossi et al., 2004). This may result from participant self-selection, or from program placement (placement bias).<ref name=":0">{{Cite book|url=https://www.adb.org/sites/default/files/publication/392376/impact-evaluation-development-interventions-guide.pdf|title=Impact Evaluation of Development Interventions: A Practical Guide|last=White|first=Howard|last2=Raitzer|first2=David|publisher=Asian Development Bank|year=2017|isbn=978-92-9261-059-3|location=Manila}}</ref>
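
The following hypothetical simulation (all parameters are assumptions for illustration) shows how self-selection produces a non-equivalent comparison: a program with no effect at all appears effective, because higher-baseline units are more likely to join:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

n = 1000
baseline = rng.normal(50, 10, size=n)    # outcome-relevant characteristic

# Self-selection: units with higher baselines are more likely to join
# the program -- an influence "not fully known" to the evaluator.
p_join = 1 / (1 + np.exp(-(baseline - 50) / 5))
joins = rng.random(n) < p_join

# The program has zero true effect here.
outcome = baseline + rng.normal(0, 5, size=n)

naive = outcome[joins].mean() - outcome[~joins].mean()
print(f"naive estimate under self-selection: {naive:.2f} (true effect: 0)")
</syntaxhighlight>

The naive group difference reflects who chose to participate, not what the program did.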
 
Selection bias can also occur through natural or deliberate processes that cause a loss of outcome data for members of intervention and control groups that have already been formed. This is known as attrition, and it can come about in two ways (Rossi et al., 2004): targets drop out of the intervention or control group, or targets cannot be reached or refuse to co-operate in outcome measurement. Differential attrition is assumed when attrition occurs as a result of something other than an explicit chance process (Rossi et al., 2004). This means that "those individuals that were from the intervention group whose outcome data are missing cannot be assumed to have the same outcome-relevant characteristics as those from the control group whose outcome data are missing" (Rossi et al., 2004, p. 271). However, even random assignment designs are not safe from the selection bias induced by attrition (Rossi et al., 2004).
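
A similar sketch (again with illustrative, assumed parameters) shows how differential attrition biases even a randomized design: assignment is random, but outcome data are lost disproportionately for low-outcome members of the control group, which inflates the control mean and shrinks the estimated effect:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)

n = 1000
true_effect = 2.0
treated = rng.permutation(n) < n // 2    # random assignment
outcome = rng.normal(50, 10, size=n) + true_effect * treated

# Differential attrition: low-outcome members of the control group are
# disproportionately lost at follow-up (not an explicit chance process).
lost = (~treated) & (outcome < 45) & (rng.random(n) < 0.7)
observed = ~lost

estimate = (outcome[treated & observed].mean()
            - outcome[~treated & observed].mean())
print(f"estimate after differential attrition: {estimate:.2f} "
      f"(true effect: {true_effect})")
</syntaxhighlight>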
*An evaluation carried out some time (five to ten years) after the intervention has been completed so as to allow time for impact to appear; and
*An evaluation considering all interventions within a given sector or geographical area.
Other authors make a distinction between "impact evaluation" and "impact assessment." "Impact evaluation" uses empirical techniques to estimate the effects of interventions and their statistical significance, whereas "impact assessment" includes a broader set of methods, including structural simulations and other approaches that cannot test for statistical significance.<ref name=":0" />
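
As an illustration of the statistical-significance side of this distinction (an assumed simulation, not an example from White and Raitzer), a two-sample t-test asks whether the group difference could plausibly be chance variation:

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

n = 1000
true_effect = 2.0
treated = rng.permutation(n) < n // 2
outcome = rng.normal(50, 10, size=n) + true_effect * treated

# Two-sample t-test: is the group difference distinguishable from
# chance variation?
result = stats.ttest_ind(outcome[treated], outcome[~treated])
print(f"difference: {outcome[treated].mean() - outcome[~treated].mean():.2f}, "
      f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
</syntaxhighlight>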
 
Common definitions of 'impact' used in evaluation generally refer to the totality of longer-term consequences associated with an intervention on quality-of-life outcomes. For example, the Organisation for Economic Co-operation and Development's Development Assistance Committee (OECD-DAC) defines impact as the "positive and negative, primary and secondary long-term effects produced by a development intervention, directly or indirectly, intended or unintended".<ref>[http://www.oecd.org/dataoecd/8/43/40501129.pdf OECD-DAC (2002) Glossary of Key Terms in Evaluation and Results-Based Management Proposed Harmonized Terminology, OECD, Paris]</ref> A number of international agencies have also adopted this definition of impact. For example, UNICEF defines impact as "The longer term results of a program – technical, economic, socio-cultural, institutional, environmental or other – whether intended or unintended. The intended impact should correspond to the program goal."<ref>[http://www.unicef.org/evaldatabase/files/UNICEF_Eval_Report_Standards.pdf UNICEF (2004) UNICEF Evaluation Report Standards, Evaluation Office, UNICEF NYHQ, New York]</ref> Similarly, Evaluationwiki.org defines impact evaluation as an evaluation that looks beyond the immediate results of policies, instruction, or services to identify longer-term as well as unintended program effects.<ref>{{cite web|url=http://www.evaluationwiki.org/index.php/Evaluation_Definition:_What_is_Evaluation%3F#Impact_Evaluations|title=Evaluation Definition: What is Evaluation? - EvaluationWiki|publisher=|accessdate=16 January 2017}}</ref>