Participatory Project Assessment: Strengthening the Stakeholder Link

June 05, 2013 | bit-wartung | Methods & Tools, SDC Experiences


Participatory project assessment (also known as Beneficiary Assessment) holds the promise of moving evaluation closer to primary stakeholders: the individuals, communities and organisations in the local project context. Is it an effective tool to do this?

By Riff Fullan, Helvetas

Supported by SDC’s Quality Assurance Poverty desk, I am participating in a Beneficiary Assessment (BA) designed to include householders, communities, local service providers and local government more explicitly than evaluation exercises usually do. The assessment is of a water, sanitation and hygiene project implemented by HELVETAS Swiss Intercooperation in Nepal.

Photo by Ramesh Bohara  

Who is involved?

An important aspect is the appropriate involvement of project staff. If the people asking the questions work on the project themselves, respondents cannot be expected to answer freely, so staff cannot carry out the field-based BA implementation. But they are crucial to the broader exercise. They are intimately familiar with the project, contribute to the initial BA concept, and help set up local logistics. The information about households, service providers and partners that is needed to identify who to talk to – and to support a selection process that is as unbiased and representative as possible – is brokered or provided by project staff.

Let’s not forget they are also key stakeholders! They want the project to succeed, their work is central to project outputs, outcomes and impact, and they are the ones who will implement any changes arising from the BA results.

One of the most interesting parts of the BA is the engagement of Citizen Observers (COs), people from local communities who together make up a representative cross-section of primary stakeholders. It is the COs who visit households and communities to talk with people about how they think their lives have changed since the project was completed, compared to before it was implemented. In this case, there was a need to achieve not only a gender balance among COs, but also a balance in terms of caste/ethnicity, because the link between caste/ethnicity and marginalisation is significant in the Nepal context.

What have our challenges been so far?

1. We worked closely with two national co-facilitators in Nepal to plan the CO training and to discuss a range of issues around the BA. Our virtual collaboration required a significant investment of time, but we were lucky to work with people who are highly skilled and experienced.

2. Even with this positive foundation, we had to deal with the fact that the CO training was conducted in Nepali, so our inputs and engagement had to be supported by two-way translation. This also worked well, but required attention to detail.

3. A third challenge was that a few COs had difficulty reading and writing in Nepali. As the BA depends on asking householders and communities a range of questions in a consistent way AND on writing down what people say, this was the biggest hurdle to overcome – but it is also to be expected if you want a truly representative group of COs.

We managed to cope with this situation in two ways. First, we paired COs who needed reading/writing support with those who had stronger skills. Second, it became clear early in the training that COs who could not easily read or write tended to be very good at leading conversations and remembering a lot of detail. This makes a big difference, because during the post-training implementation in the field, the COs and the national facilitator get together every evening to discuss the day's experiences and consolidate findings, so everyone can make a strong contribution.

Photo by Ramesh Bohara

What are the pros and cons?

As with any methodology, there are positive and negative aspects. On the minus side, a BA can be expensive, perhaps requiring more time and effort than a standard project evaluation (though not necessarily). Nor does it guarantee an unbiased result – then again, no evaluation method does.

On the plus side, it involves far more engagement of primary project stakeholders than most evaluation exercises. It also provides much greater scope for an understanding informed by local perspectives – something we rarely achieve in project evaluations. Although a BA could be characterised as too qualitative at the expense of 'hard' quantitative assessment, the two types of assessment can complement and validate each other. Moreover, a BA can and should incorporate some quantitative elements.

What about the contribution of a BA to improved effectiveness?

A core expectation of BA exercises, as we understand them, is responsiveness on the part of both project implementers and funders: if there is no room for change in the way the project is implemented, what is the point of doing a BA? It is this element, along with a greater appreciation of local perspectives, that is the real promise of participatory assessment.

What experiences or observations can you share about participatory approaches to project or program evaluation?

Find more resources here:

♦ Beneficiary Assessment at SDC, within the Quality Assurance Poverty desk, where you can also find the "How-to-Note Beneficiary Assessment"

♦ The World Bank's Beneficiary Assessment
