How do communities think we should be measuring success?

15 November 2021

The idea of "success" is highly dependent on context and perspective. As an evidence and learning team, we wanted to explore what local communities consider a "successful" programme and how this could be measured. Over the past year, we have been working alongside local consultants to develop evaluation tools that ensure the questions asked during information gathering are culturally appropriate and posed in local languages. In June this year we also published a commitment to 'explore local perceptions of programme success…so that our accountability really lies with the people that have been affected by crises'.

During a recent independent monitoring exercise for a heatwave response in Pakistan, our local consultants (GLOW Consultants) spoke with a range of people who had used cooling facilities providing shade and water, and who had received messaging on how to recognise, protect against and respond to heatwaves. The community provided highly pragmatic feedback on how we should be measuring success: observe, keep it simple and take time to explore.

1. Observe

The message was very clear: watch what people are doing. If people are using the service you have provided, and they seem happy to do so, then you are doing the right thing. If a quantifiable measure is needed, count the number of people using the service.

This intervention represented a unique opportunity to observe how and whether people used the service because, unlike most humanitarian responses, it was a service for everyone rather than a targeted delivery to specific households or individuals. Our evaluation included on-site observation tools for each of the three cooling facilities; however, based on this feedback, we would in future like to extend these observations to capture who, as well as how many, is using each facility.

2. Keep it simple

Many of the questions put into impact assessments, evaluation tools or post-distribution monitoring forms are complex. They ask, for example, about awareness of feedback mechanisms or compliance with certain standards, concepts which are not easily translated into local languages and may not even be relevant to the people you are speaking to. If people do not understand the question, they will not provide reliable answers.

The message from those we spoke to in Sibi was to keep our direct questions very simple and relevant to the end-user.

Participants also proposed other ways to increase understanding, such as using pictures to test knowledge of practices after a behavioural messaging campaign.

3. Take time to explore

Affected communities also suggested that it is important to probe certain issues to better understand whether you are providing what they need and whether there is anything you can do to improve. The focus should be on understanding their experience rather than on a tick-box exercise.

A key learning for us is that this probing takes time and works better when done in person, by someone the community trusts. When people are able to speak freely, their feedback contains the answers to how successful the project has been.


The way we capture these stories may also need to be revisited. During the pandemic, many evaluators, including the evidence and learning team at Start Network, have relied on remote data collection and interviews conducted by phone. Although this was a necessity given COVID-19 travel restrictions, we have recognised that it has its limitations. In the case of the response in Pakistan, we would have missed the voices of many if we had relied solely on phone interviews.

We hope to continue this research into understanding success from a community perspective, and to take on board the important lessons from this first study in the way we conduct evaluations. This will include in-person interviews where possible, on-site observations and revisions to our evaluation tools. Although most evaluations include qualitative data as supplementary information alongside their key performance indicators or quantitative metrics of success, they rarely treat it as the mainstay of understanding project success. Perhaps this needs to change?


Start Network have published a sister blog to this piece – How do crisis-affected communities define a 'successful' humanitarian intervention? – highlighting key findings from interviews with community members affected by the Nyiragongo volcanic eruption in the Democratic Republic of the Congo (DRC). They have also prepared an infographic summarising their findings on what "success" in early action responses means to affected communities, focusing on how and what organisations should be evaluating.