Contribution analysis (CA) helps managers understand their program’s specific impact on observed results by working with theories of change (TOCs) to enable adaptive management. CA[1] verifies the links in a TOC between shorter-term and longer-term outcomes while recognizing the external factors that also affect those results. The approach has grown in popularity because it overcomes common challenges for evaluators, such as the lack of a counterfactual, and because it is complexity aware, helping programs make sense of unclear causal pathways.

However, CA has unique decision points that evaluators should anticipate and address upfront to ensure their studies produce useful findings and recommendations (see Figure 1 for these decision points mapped to key CA steps), including:

1) Will CA serve as the study’s overarching framework or as a type of data analysis? CA offers decision makers an integrated package of methods and processes for assessing a program’s impact on systems-level change.[2] It can also be used in a more limited sense as a type of analysis to answer a research question about contribution.[3] To make this decision, evaluators should consider their study’s purpose. If tracing contribution is the primary aim, applying CA as a framework is appropriate. If the study intends to measure progress more generally, as in a performance evaluation, it may make sense to use CA as a type of data analysis.

Figure 1: The Steps and Decision Points of Contribution Analysis

2) Does a TOC exist, or does one need to be developed for CA? Ideally, a TOC guided the program’s implementation and can now be used in CA to explore links between the intervention and observed results. If not, evaluators will need to facilitate the retrospective development of a TOC with the program in order to conduct CA.[4] (Step 2)

3) How will the study measure and present the strength of evidence found by CA? To determine whether a program’s contribution is credible and robust, CA surfaces, aggregates, and validates data and draws conclusions about its significance. Some studies present these measurements using a scale[5] or score aligned with the TOC, creating a compelling visualization that helps audiences see where impact occurred (see Figure 2 for an illustrative example). Other studies avoid such quantified measures, especially if the evidence is contested.

Figure 2: An Illustrative Example of Measuring and Visualizing the Strength of Evidence Using a Scale Aligned with a Theory of Change

4) How will CA tell the program’s contribution story? CA aims to iteratively draft, validate, and deliver a compelling case, supported by evidence, from which managers can draw confident conclusions about their intervention’s impact. Depending on decision makers’ needs, these cases range from stories of a dozen pages with data in graphs and tables to a handful of statements, each a few sentences long with bullet points of evidence.[6] Evaluators should consult with their study’s users on how to tell the story in a way that delivers conclusions backed by convincing evidence.

CA presents decision makers and evaluators with a relatively straightforward approach that can fit varying levels of time, budget, and need. It supports adaptive management by working with TOCs, and it supports context monitoring by identifying external factors. However, evaluators should address CA’s unique decision points upfront, often with input from their study’s users, to maximize the approach’s utility.


[1] For more on the Contribution Analysis approach in general and its relevance to CLA, please see: https://usaidlearninglab.org/sites/default/files/resource/files/glam_-_contribution_analysis.pdf

[2] For an example of CA as an overarching framework, please see USAID’s MCSP Rwanda’s Impact on Improving the Quality of Maternal, Newborn, and Child Health Services: Results from a Contribution Analysis.

[3] For an example of CA serving as a type of data analysis, please see USAID’s MEL Initiative Study of USAID/El Salvador’s Development Credit Authority (DCA).

[4] USAID’s MCSP Rwanda study retrospectively developed a TOC to conduct CA.

[5] USAID/El Salvador’s DCA study used a scale to measure the strength of evidence found by CA.

[6] USAID’s MCSP Rwanda study uses a contribution story, while USAID/El Salvador’s DCA study uses contribution statements.

About the Author:
Chris Thompson leads the technical implementation and oversight of monitoring, evaluation, and learning (MEL) contracts for Social Impact around the world. His more than 15 years of experience include long-term, field-based senior management and technical positions in Indonesia, Liberia, the West Bank, and Afghanistan, where he put into practice pioneering evaluation approaches and award-winning collaboration, learning, and adapting (CLA) techniques.
Photo by: Curtis Gregory Perry