What is a good program document?

July 27, 2010 | Adrian Gnägi | Methods & Tools



This is the third post inspired by the conference “Evaluation Revisited: improving the quality of evaluative practice by embracing complexity”, held in Utrecht on 20–21 May 2010.

A few weeks ago, Freiburgstrasse 130 (SDC Head Office) was struck by an earthquake. Workflows stopped, the atmosphere changed, some colleagues shut their office doors and stopped talking, others wandered from office to office and talked for days. Something unprecedented had happened: in a single operations committee meeting, three entry proposals for local governance programs were turned down. Millions of Swiss francs, months of preparatory work, scores of people concerned. Emotions, arguments, alliances, strategies, formal and tacit norms – one huge mess. But one overriding impression: what had happened was not right. Within days, roughly one fifth of SDC Head Office staff had signed a petition to senior management. No one can remember having seen anything like this before.

In the days and weeks after the “quake”, two major fault lines were identified:

  • Local governance was questioned as an appropriate intervention area for SDC by at least some senior managers
  • The quality of the credit proposals was questioned by at least some senior managers

This post looks into the second issue: what is a good program document? I will argue that the usefulness of a program document for a specific use context depends to a large degree on the main diagram it is built on. A program document is not “good” or “bad” as such; it is more or less useful for certain purposes. The critique voiced during the “earthquake” is above all a critique of the logframe logic the credit proposal format is built on. It seems to me that some senior management members were implicitly arguing that this logic no longer serves their main use context – accountability – well.

During the Utrecht conference, Barbara Rogers facilitated a workshop on “Program theory for complicated and complex situations”. This reflection note is inspired by her fascinating input. She argued that program documents can be looked at as a literary genre: program document templates are instructions for constructing reality in texts. She encouraged us not to take program texts and diagrams for granted, and not to treat them as natural phenomena; they should be deconstructed. One of the analysis questions she mentioned is especially relevant for understanding the SDC “earthquake”: what purpose do specific program documents (not) serve?

The rationale behind the question is the following:

  • Certain ways to explain programs are better suited for some use contexts than for others.
  • Theoretically, it is possible to combine different explanatory discourses, at least partially, in texts. But since diagrams frequently serve only one purpose well, and inconsistencies between diagrams and texts are normally read as conceptual weaknesses, program narratives are typically shaped by the logic of the dominant program diagram.
  • The choice of the program diagram therefore determines which use contexts are served well by a program document, and which ones are neglected or violated by the program narrative.

We discussed this relationship between program diagram and use contexts in the above-mentioned workshop with Barbara Rogers. In the text below, I focus mainly on the logframe, since the SDC credit proposal is built on this diagram.

The Logical Framework Matrix provides for a clear relationship between costs, activities, and outputs, and provides metrics for assessing compliance. It is well suited for resource budgeting, justification of contributions to partner organizations, single-loop learning, and output accountability. The use contexts it does not serve well are:

  • Context and risk analysis: The main axiom that justifies the use of logframes is that, given certain assumptions, the main cause => effect relationships of the intervention are known. When the program narrative is based on the logframe, this conditional clause is inverted: since logframe logic is used, the main cause => effect relationships of the intervention must be known. This renders risk and political analysis awkward: how can one talk about risks and politics (by definition probabilistic and emergent) in a system, and then pretend that the main cause => effect relationships of the intervention into this same system are known in advance? Risks, politics and logframes do not mix.
  • Justification of investment (impact accountability): Logframes are about the activities to be implemented; they determine the system boundaries. Impact is on the level of social change. Social change is the emergent result of interlinked actions by many independent actors over time, and of influences from outside the system. Logframes and theories of change (change models or pathways that explain how the vectors of different agents add up to social change) do not fit with each other. Logframe-based program narratives necessarily end up in the attribution gap: they end where funding ends, where other actors come in. If a change model were added, other actors, timeframes, probabilities, and influences from outside the system would come into play. For all those things there is (literally) no space and no use in logframes. Logframe-based narratives are useful to justify expenditures for simple undertakings. They always and necessarily have a “tail wagging the dog” feel; they can never convincingly justify an investment in relation to impact.
  • Sense making, negotiation, and alliance building: With logframes, resources applied to stratagems produce outputs (right => left aggregation logic) and results (bottom => top aggregation logic). There are no actors or timeframes. Agency issues (typically framed as “political will”) are placed outside the system’s boundaries, in the “assumptions” column. A program narrative built on the logframe always feels detached from the concrete actors who are going to change real situations; there is no place for human initiative, social struggle, or competing orientations. “Langue de bois” (wooden, evasive language) in program narratives is a direct result of the logframe matrix. This allows different actors to buy into programs easily, but since tasks, competencies, and responsibilities are not addressed, the result frequently is a “working misunderstanding”: there is superficial agreement on what should be done, but not on who should do what when.
  • Program implementation: With logframe planning, cause => effect relationships are pretended to be known. Program steering then equals roll-out of the plan and reporting. If things do not happen as assumed in the plan, program managers face a dilemma: they either have to violate the plan or violate reality – and either choice later renders reporting awkward. Logframes leave very little room for maneuver for program managers – a fact mirrored by the corresponding PCM methodology, which mostly reduces program management to planning, monitoring and reporting.
  • Multiple objectives: The “(resources & stratagems = activities) x y = results” aggregation logic of logframes is geared toward one single objective that the project/program is to attain. Development cooperation programs today typically have to serve four objectives simultaneously: the development objectives of the partner country, the foreign policy objectives of the donor countries, the future/positioning objectives of the development agencies, and the procedure adherence objectives of public funds accountability. Multiple objectives ask for trade-offs in decision making. For program managers having to implement single-objective, logframe-based programs, this leads to a dilemma similar to the one created when development reality does not unfold according to plan: since trade-offs are not foreseen with logframes, either the pressure of the objectives not served is ignored, or plans are not followed. Both strategies later call for imaginative reporting.

 Other diagrams we looked at include:

The Program Cycle Diagram provides for a clear phasing of implementation. It is best suited for scheduling “moments forts” (key milestones) and for communicating when, how, on what basis, and possibly by whom steering decisions will be taken. Most other potential use contexts are not well served, since neither resources, actors, nor objectives are included. Program cycle diagrams are firmly rooted in project/program thinking and therefore are not suited to reflecting on impact.

The Result Chain or Results Framework shows how outputs translate into outcomes and then into impact. It provides a clear illustration of a simplified social change model. Result frameworks are frequently used in combination with logframes, since their basic logic is the same, but they allow one to go beyond the system boundaries of a project/program. They are well suited for justifying funding and for sense making. Their main limits in usefulness are the same as with logframes: the pretension to know cause => effect relationships renders context analysis superfluous; the oversimplification of social change makes them unsuitable as a reference for steering or as an outline for reporting; and they cannot provide for multiple objectives.

Impact Diagrams provide an illustration of the different main factors influencing a specific change process. Their realism serves alliance building well and provides guidance for the design of monitoring systems. They are a good basis for program steering and stimulate double-loop learning. Due to their complexity, they are a weak support for communication. They are not convincing as justifications for funding, since the use of funds is not specified over the medium and long term. But they are a good justification for investment, although not for investments with multiple objectives.

A simple Program Matrix provides an overview of different program elements without specifying relationships or an overall aggregation logic. While its weaknesses are obvious and numerous, Barbara Rogers pointed out one use context for which it is brilliant: since relationships are not provided in the diagram, it forces joint sense making.

Gantt charts provide a brilliant overview of activities in time. They are frequently associated with logframes (critical path analysis) and then help to overcome some of the logframe’s limits regarding actors and timeframes. But they can in fact be combined with nearly any other program diagram to provide clarity on who does what when. Their best use context is planning and steering of implementation. They obviously are not useful for reflection on impact or multiple objectives.

The Balanced Scorecard Matrix allows for an overview of the multiple objectives of a program and provides metrics on the way to reaching them. Its primary use context is program steering, guiding trade-off decisions between multiple objectives. It is designed as the basis of a management information system and therefore serves reporting well. The Balanced Scorecard is not geared toward planning, but it can be combined with (multiple) impact diagrams. It is one of the few program diagrams that allows complexity to be handled in a (reasonably) non-reductionist way.

During the SDC “earthquake”, four main points of critique were voiced by senior management regarding SDC credit proposals:

  • Weak risk and political analysis 
  • “Langue de bois”  
  • Weak impact orientation
  • Swissness not included

Following Barbara Rogers’ deconstructivist logic, the critique of the program document format points towards a change in the intended use context. My interpretation is: the logframe-based SDC credit proposal was useful for justifying contributions to partner organizations in the past, but it is no longer seen by all members of senior management as useful for today’s or tomorrow’s accountability requirements. The main issue is no longer to justify contributions (activities), but to justify investments (impact). There is no longer one single objective (poverty reduction); there are several. My take is that the days of the logframe-based credit proposal format are numbered.


Comments to “What is a good program document?”

  1. Ernst Bolliger says:

    Dear Adrian
    Thanks a lot for this “Utrecht report” linked to a recent SDC earthquake. Very interesting to read.
    I share your take that the days of the logframe-based credit proposal format are numbered. However, I am asking myself: what will follow?
    Colleagues of mine have collected quite some experience with the planning approach “Outcome Mapping”. This approach was developed by Canadians in the context of educational programs. The cause–effect logic of the logframe approach – predominant in technical interventions – is somehow replaced by the logic of behavioural change. I think the Outcome Mapping approach includes a lot of elements that can contribute to new formats for planning and reporting on development programs.
    A few questions related to future steps:
    – How do multinational enterprises plan, report and justify their programs, and based on what logics?
    – What is the respective discussion about within other development agencies?
    – Who is taking up innovative (planning and reporting) processes in SDC?

  2. Pierre Walther says:

    Dear Adrian

    I highly appreciate your comments.

    I tried to convince SDC that it should have a closer look at management systems in other policy areas, e.g. health, education, social work.

    These areas have a similar problem: how to ensure quality in a system in which quality is the result of the collaboration of people.

    In these systems, indicators have a very short lifetime. Methods like the logframe bear the risk that management promotes a compliance attitude instead of empowering the project teams whose creativity and commitment are needed to achieve quality.

    Everybody interested in this discussion, do not hesitate to contact me.
    I stay in a permanent discussion with a competence centre on these issues.

    All the best
    Pierre Walther

  3. Kuno Schläfli says:

    Dear colleagues,

    I fully subscribe to that analysis. In line with our (Denis Bugnard’s and mine) argumentation at a recent DEZA-Forum on accountability issues, I think we have to be clear about different purposes of our instruments and also of the different competences needed for specific tasks within the field of “rendre compte”.
    Therefore, a project document like a credit proposal, destined to inform senior management about all aspects including political implications, benefits to different stakeholders, and risks and chances (in the partner country, for certain groups, but also in Switzerland), can never sufficiently serve the purpose of properly informing the broad Swiss public and Swiss non-professionals of international cooperation (like members of parliament). As a professional agency, the complexity of our working contexts imposes referring to “technical language” and sophisticated working methods, which we have the right to reflect in internal documents.

    But that is not a plea for the logframe; I also think that the days of the logframe are numbered. As we increasingly accept the political complexity of our work (which was confirmed by our director in the workshop on local governance of 28 April), we have to adapt our instruments, but also to be more self-critical and transparent on risks and chances. Which in the end also means accepting to take risks. In risky situations (which are becoming “normality” for many working situations), the logframe has a potential to be abused for creating artificial “security” about clear objectives and expected impacts, thus concealing risks.

    I think Outcome Mapping is a very good approach, bringing about more modesty (emphasising the role of local actors and partners) and more flexibility to adapt activities and change objectives in evolving contexts.

    It may mean less “Swiss flag” combined with the illusory ambition to save the world through every project within a set time frame of four years, but it would encourage a more realistic positioning of SDC’s possibilities in a world of global challenges which we cannot control or improve on our own.

    I think the current methodological discussion at SDC, in the course of Reo, would call for a second, new, serious look at this Outcome Mapping method (not to be confused with outcome monitoring! That’s a different thing).

    If after the exhaustive effort to reconstruct our methodological framework anybody still has the energy to go into that….

  4. BLOGadmin says:

    Dear Ernst and Kuno

    Thanks for your reference to Outcome Mapping (OM). In the OM community there recently was a discussion on the relationship between OM and the logframe approach. This discussion was summed up as:

    1. More and more organisations are searching for alternatives to the logical framework approach because of its lack of sensitivity to complex processes of social change. They don’t necessarily seek to replace the logical framework approach, but instead seek to complement it with other, more complexity-oriented methodologies such as Outcome Mapping. This is often necessary because the logical framework is still a requirement for many organisations and back donors.
    2. More models are becoming available that can help practitioners develop a practical integrated OM-LFA approach that works in their contexts. The OM-LFA fusion model of Daniel Roduner and colleagues was given as an example. The social framework of Rick Davies was also discussed as an alternative version of the LogFrame that can be reconciled with OM ideas.
    3. A growing body of practical experiences with integrating OM and LFA (e.g. the cases of VECO and VVOB which were explored in previous discussions) is becoming available for practitioners to learn from. However, limited concrete details were given in this discussion about other examples of how OM and LFA are integrated in practice. The advantages and possible challenges of such integration were not fully elaborated.

    I also like OM. I will write on the OM “strategy map” in an upcoming blogpost. The OM “monitoring journals” are great instruments to keep track of what is happening on different levels, not only in the project. And when it comes to complex processes, “progress markers” are far superior to indicators for steering.

    Having said this:
    • OM is based on “intentional design” as is the logframe. Not much notion of complexity awareness there.
    • I find the workshop approach described in the OM manual to agree on a common vision with partners sociologically and politically naïve.
    • Strategy is treated as a matter of negotiation with partners only, no need for a theory of change or evidence of what worked in the past or elsewhere.


