Monitoring and Evaluation

Developing an Evaluation Strategy

AGI Lessons Learned

Evaluations of comprehensive projects should untangle the effects of the various project components. AGI evaluations were not able to compare the relative impact of the technical training versus the life skills training or the impacts of specific project strategies, such as coaching and mentoring. There are also important questions around the intensity and duration of placement assistance that are not answered in the AGI evaluations. Evaluations of new comprehensive programs should try to pinpoint the most effective elements and provide cost-benefit information by project component.

Better measures of female youth empowerment, validated in developing country settings, are needed. The empowerment measures used in the AGI evaluations were adapted from other contexts and, at times, showed limited relevance for the AGI populations. Ideally, empowerment measures should be based on in-depth local research, usually qualitative, that elicits information from the girls about the problems and challenges they face. Researchers can then define indicators, ideally using the same words the girls themselves used. If such advance research is not possible, look for measures that have been used successfully in the same context, or work with local researchers who can help adapt measures that have worked elsewhere.

Economic interventions targeted at young women should explicitly measure changes in economic empowerment. For example, AGI evaluations assessed whether participants could retain their economic gains or whether they were compelled to hand over their wages to husbands or other authority figures in their lives. The AGI also measured whether the projects contributed to young women's economic self-reliance and, in particular, whether they decreased participants' economic dependence on boyfriends.

Look at disaggregated effects to understand who is and who is not benefitting. Youth are not a homogeneous group; important distinctions arise from differences in sex, age, ethnicity, schooling/enrollment status, marital status, parenthood, urban/rural, family wealth, and so forth. At a minimum, projects should be able to disaggregate impacts by age and sex. Disaggregated impact data can help projects make informed decisions about the appropriate beneficiary profile. Programs can then adjust the selection and recruitment processes to focus on those who can benefit the most or programs can adjust the design to improve outcomes among certain subgroups.
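
Where individual-level data are available, a first look at disaggregated effects can be as simple as computing the treatment-control difference in mean outcomes within each subgroup. The sketch below illustrates this with made-up data; the column names (treated, employed, sex, age_group) are assumptions for illustration, not AGI variable names.

```python
# Illustrative sketch: disaggregating a simple impact estimate by subgroup.
# The data and column names are hypothetical, not drawn from the AGI surveys.
import pandas as pd

def subgroup_impacts(df: pd.DataFrame, outcome: str, subgroup: str) -> pd.Series:
    """Treatment-control difference in mean outcomes within each subgroup level."""
    means = df.groupby([subgroup, "treated"])[outcome].mean().unstack("treated")
    return means[1] - means[0]

# Made-up endline records: 1 = employed, 0 = not employed
data = pd.DataFrame({
    "treated":   [1, 1, 0, 0, 1, 1, 0, 0],
    "employed":  [1, 1, 0, 1, 1, 0, 0, 0],
    "sex":       ["F", "F", "F", "F", "M", "M", "M", "M"],
    "age_group": ["15-19", "20-24", "15-19", "20-24"] * 2,
})
print(subgroup_impacts(data, outcome="employed", subgroup="sex"))
print(subgroup_impacts(data, outcome="employed", subgroup="age_group"))
```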

A mixed-methods evaluation approach can provide a more comprehensive portrait of a project's implementation and effectiveness than quantitative methods alone. Qualitative research is particularly helpful for answering "how" and "why" questions that cannot be addressed in quantitative surveys, for eliciting beneficiary feedback, and for exploring more complex social and behavioral outcomes such as empowerment. Qualitative techniques may also be more informative than quantitative methods for addressing sensitive topics, such as sexual behavior, or for exploring experiences of violence in depth.

AGI pilots included a strong evaluation component to assess each project's overall success in achieving its intended outcomes. Evaluation is important for learning and for generating evidence of what works, as well as for building a project's legitimacy and credibility.

When feasible, AGI evaluations applied impact evaluation methodologies (see the evaluation strategies summarized below). An impact evaluation assesses the changes, both intended and unintended, that can be attributed to a particular policy or program. It asks how outcomes would have changed if the intervention had not happened. Answering that question requires counterfactual analysis: a comparison between what actually happened and what would have happened in the absence of the intervention.
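
In a randomized design, the control group stands in for the counterfactual, and the simplest impact estimate is the difference in mean outcomes between the two groups. The sketch below is a minimal illustration with hypothetical data and variable names; it is not code from the AGI evaluations.

```python
# Minimal illustration: the control group stands in for the counterfactual,
# so the impact estimate is the treatment-control difference in mean outcomes.
# All data and names here are hypothetical, not from the AGI surveys.
import statistics

def difference_in_means(treated_outcomes, control_outcomes):
    """Return the estimated impact and a rough standard error."""
    impact = statistics.mean(treated_outcomes) - statistics.mean(control_outcomes)
    se = (statistics.variance(treated_outcomes) / len(treated_outcomes)
          + statistics.variance(control_outcomes) / len(control_outcomes)) ** 0.5
    return impact, se

# Hypothetical weekly earnings at endline
treated = [22, 30, 18, 27, 35, 25, 29, 21]
control = [19, 24, 15, 20, 28, 18, 23, 17]
impact, se = difference_in_means(treated, control)
print(f"Estimated impact: {impact:.1f} (standard error {se:.1f})")
```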

AGI Evaluation Strategies

Randomized
The most rigorous way to construct a counterfactual is to compare participants with a control group of similar people who do not participate (or have yet to participate) in the program. Where possible, AGI pilots maintained an experimental control group design.

Resource: South Sudan Impact Evaluation Concept Note

Randomized Pipeline
In some cases it was not possible to maintain a strict control group, so a pipeline approach was used: the project was delivered in successive rounds, so the control group eventually also received the program treatment. The limitation of this approach is that only a short period is available to measure outcomes before the control group enters the program.

Resource: Haiti Impact Evaluation Protocol for CBOs

Resource: Liberia EPAG Impact Evaluation Concept Note

Resource: Liberia EPAG Evaluation Factsheet

Non-Randomized
Forming a control group is not always a popular idea, particularly when working with vulnerable youth who are in need of services. When a control group is not possible, there are other options to consider, but they may limit the rigor of the results. For example, the Nepal evaluation employed a quasi-experimental approach using propensity score matching (illustrated in the sketch below), and the AGI pilots in Afghanistan and Rwanda were evaluated using nonexperimental pre- and post-surveys.

Resource: Nepal Impact Evaluation Concept Note

Resource: Afghanistan Evaluation Concept Note

Resource: Rwanda Evaluation Concept Note
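
As a rough sketch of the propensity score matching idea mentioned above, the snippet below estimates each person's probability of participating from observed characteristics, pairs every participant with the closest non-participant on that score, and averages the matched outcome gaps. The data, variable names, and one-to-one nearest-neighbor rule are illustrative assumptions, not the actual procedure used in the Nepal evaluation.

```python
# Illustrative sketch of propensity score matching as a quasi-experimental option.
# Data, variable names, and the matching rule are hypothetical assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical observational data: covariates, participation flag, outcome
n = 400
X = rng.normal(size=(n, 2))                      # e.g., standardized age and schooling
p_participate = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))
participates = rng.binomial(1, p_participate)
outcome = 2.0 * participates + X[:, 0] + rng.normal(size=n)  # true impact set to 2.0

# 1) Estimate propensity scores: probability of participating given covariates
pscore = LogisticRegression().fit(X, participates).predict_proba(X)[:, 1]

# 2) Match each participant to the non-participant with the closest propensity score
treated_idx = np.where(participates == 1)[0]
control_idx = np.where(participates == 0)[0]
nearest = np.abs(pscore[control_idx][None, :] - pscore[treated_idx][:, None]).argmin(axis=1)
matches = control_idx[nearest]

# 3) Average the outcome gaps across matched pairs (impact on participants)
att = (outcome[treated_idx] - outcome[matches]).mean()
print(f"Matched estimate of the impact on participants: {att:.2f}")
```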

AGI evaluations measure changes in:

  • Economic outcomes: Income-generating activities and business practices, financial literacy, savings and loans, expenditures and transfers, and assets
  • Noneconomic outcomes: Relationships, family and social networks, life satisfaction and worries, time use, personal behaviors, sexual behaviors, HIV/AIDS awareness, and violence
  • Empowerment outcomes: Mobility, decision making and control over resources, aspirations, expectations and confidence, and self-regulation
  • Heterogeneous impacts: Sociodemographic characteristics, analytical ability, and wartime experiences
Quantitative Resources and Tools
The AGI outcomes of interest and the rationale for their inclusion are described in the AGI Core Indicators document, which also provides illustrative indicators for each outcome area.

Resource: AGI Core Indicators

Changes in these outcomes are measured through specific indicators collected in two surveys:

i) a survey conducted among the young women themselves, and

ii) a separate household survey.

Each instrument was tailored to the specific project and the country context, but generally the questions were standardized to ensure comparability across settings.

Tool: Liberia EPAG Individual Survey Instrument

Tool: Liberia EPAG Household Survey Instrument

Most AGI projects also conducted qualitative evaluations to:

  1. Verify and clarify outcome findings, particularly for more complex social and behavioral outcomes such as empowerment
  2. Gauge the satisfaction of stakeholders (such as trainees, their husbands and families, trainers, and so on) with project implementation, and solicit their ideas for improvement
  3. Document the project implementation process to understand how project outcomes were achieved
Qualitative Resources and Tools
In Liberia, the project conducted endline qualitative exit polls to assess the general satisfaction level with the training, to gather ideas for improvement, and to collect qualitative data on project outcomes.

Resource: Liberia EPAG Exit Poll Report Round 1

Resource: Liberia EPAG Exit Poll Report Round 2

In Rwanda, the project conducted a qualitative process evaluation at the midline and a qualitative endline assessment to capture noneconomic outcomes, collect feedback from project stakeholders, and clarify implementation processes that occurred since the midline.

Tool: Rwanda Qualitative Endline Focus Group Discussion Instrument

Resource: Rwanda Qualitative Midline Report

For more information on the AGI evaluations and the outcomes of interest, see: AGI Learning from Practice Note on Measuring Impact in the AGI Pilots | World Bank | 2013.
