Influencing Social Change – a Complex Task

July 14, 2010 | Adrian Gnägi | Learning Elsewhere |

By Adrian Gnägi
On May 20th/21st 2010 I participated in a conference entitled "Evaluation revisited – improving the quality of evaluative practice by embracing complexity". In the lines below I sum up my take on this most inspiring event.

Who travels to Utrecht to reflect on evaluation and complexity?

I went to the Utrecht conference to exchange ideas with other development practitioners grappling with social change, and to meet other people who are dissatisfied with our business's practice of logframing social change initiatives. I did not know who else would be there; I had learned about the event through online discussion fora. On the plane, looking at the list of 200+ participants, a few groups popped up:

  • Robert Chambers, and with him a group of people from and around the “Participation, Power and Social Change” team of IDS
  • Quite a few people I knew from the Outcome Mapping community
  • Staff from many different aid agencies whose job title had something to do with MfDR (Managing for Development Results, part of the Paris/Accra aid effectiveness agenda).

During the morning session of the first day, I discovered there was a fourth group:

  • Evaluation practitioners who had participated in a global conference on impact evaluation held in Cairo roughly a year earlier, and who had been confronted there with the fact that their orientation towards evaluation was being sidelined by the "Randomized Control Trial" (RCT) mainstream.

"Reclaiming rigor" was the motto of the Utrecht conference. The event turned out to be about methods, but also about alliance building: this was a gathering of people unhappy with RCT evaluation orthodoxy and wanting change towards more realistic approaches to social change.

Theories of change as a major challenge

The main critique voiced against the RCT evaluation mainstream was summed up by Patricia Rogers as "the naïve experimentalism view of evidence based practice":

  • Find out and prove (through RCT) that thing “A” produces certain benefits
  • Declare “A” as best practice
  • Repeat best practice “A” in other contexts
  • Get multiple benefits from “A”.

The problem is not with RCTs as such, but rather with ignoring the adequacy limits of this methodological design (treating RCT as the "gold standard"): the RCT approach is adequate for simple domains where cause and effect relationships are well understood and stable, but it is not suited for complex change processes. The problem with declaring RCTs – or their correspondence in planning, logframes – business standards is that the complex nature of social change has to be ignored: all interventions have to be treated as simple. This is the classic "tail wagging the dog" situation: instead of choosing methodologies adequate for the task, the task is conceptualized in a simplistic way so that standard methodologies can be applied.

In most of the workshops and talks I participated in during the first day, rigorous evaluative practice of social change was discussed as being linked to realistic, mid-range theories of change (sometimes also referred to as change models, impact hypotheses, or impact pathways). The main idea was that rigorous evaluative practice should compare unfolding social change with what was expected to happen according to the theories of change underpinning the interventions, and provide information to correct and refine those change theories.

RCTs for accountability, impact evaluation for organizational learning on social change

My second day of the conference was marked by a presentation by Maarten Brouwer. He advocated for separating the learning function from the accountability function in evaluation. He argued that since accountability requirements keep growing, learning is increasingly being crowded out. The current RCT dominance is above all due to accountability requirements: it is about proving that the disbursement of public funds produced the promised results. Maarten Brouwer argued for a tight package comprised of RCTs, single loop learning to improve result production processes, and reporting on results. This package should be methodologically and institutionally separated from impact evaluation, organizational learning on action strategies, and improving our understanding of social change.

I was intrigued, and so were many colleagues. Maarten Brouwer was proposing a radical solution to a problem – learning from evaluations – we had grappled with for a long time. There was intense debate, though, over whether there actually is a continuity of accountability requirements from line managers to higher levels, and whether the aid agencies' constituencies really are satisfied with results only and are not asking for information on impact. Maarten Brouwer advocated for publicly debating this issue and publicly explaining that impact attribution for social change is not possible: if we continue to pretend we can plan and measure our contribution to social change, we are bound to lose when asked to prove it. Within simple, bounded, known domains we should more rigorously measure and improve our practice; here distilling "good professional practice" makes sense. For the integration of those "bounded results" into social change initiatives we need improved theories of change that allow us to pilot along emergent pathways.

Better theories of change and better program documents

The Utrecht conference was not only intellectually stimulating, it was also highly entertaining. I think I have never participated in an equally creatively moderated conference. One of the highlights for me was the "Methods & More Market". I will report on it in two future posts: one will be on theories of change, the other on the diagrams we use in program documents to explain what we are doing.

Next steps

I had the impression the atmosphere of the conference was marked by two different attitudes:

  • There was nostalgia, mourning for loss (the "golden nineties", when qualitative methodologies and participatory approaches were à la mode)
  • There was avant-gardism, pride in being at the forefront of something set to change the orientation of our business.

It felt as if the community was not quite sure whether this was a funeral or a baptism. Are you interested in witnessing what happens next? The next scene of the process plays out in Brighton on September 22nd: the "big push back meeting", organized by Rosalind Eyben of IDS [R.Eyben@ids.ac.uk]
