Program Evaluation: Are we ready for RCTs?

Author(s)
Nelson, J.
Publication language
English
Pages
2pp
Date published
01 Mar 2008
Type
Factsheets and summaries
Keywords
Evaluation-related, NGOs, Research methodology, Standards
Countries
Democratic Republic of the Congo, Liberia


It is clear that a new wind is blowing in the discussion
about how best to evaluate international aid.
Three back-to-back events in Washington, DC recently
attracted the attention of bilateral agencies,
government ministries and private donors: December’s
announcement of the first director for the International
Initiative for Impact Evaluation (or 3IE) was followed by
a conference held by the World Bank, entitled “Making
Smart Policy: Using Impact Evaluation for Policy Making,”
and the convening of members of the Network
of Networks on Impact Evaluation (NONIE). These
meetings shared a focus on rigorous impact evaluation:
“analyses that measure the net change in outcomes for
a particular group of people that can be attributed to a
specific program using the best methodology available,
feasible and appropriate to the evaluation question that
is being investigated and to the specific context,” to
quote 3IE’s founding document of March 2007.
For some, the phrase above hints at an ominous turn
toward randomized controlled trials (RCTs) – the epitome
of experimental research and an investigative approach
assumed by many to be largely inappropriate for the
contexts in which NGOs work. Online discussions suggest
concern about a donor-driven evaluation agenda
and the overly scientific measurement of tangible results.
The worry likely stems from two views. First, that
assigning aid programs randomly is unethical because
it contradicts the principle of serving those most in need,
vulnerable or excluded. And second, that the use of
“rigorous” approaches contradicts all we have learned
about the importance of evaluation as a means of empowerment,
not just measurement.