What is wrong with MfDR?

January 19, 2011 | Adrian Gnägi | Learning Elsewhere |



By Adrian Gnägi 
There is growing international frustration with the way the MfDR (managing for development results) agenda developed. In this post, I reflect on a widely read article by Andrew Natsios, former head of USAID.

A few weeks ago IDS organized an event entitled “the big push back meeting”. The aim of the meeting was to galvanize a movement against the “current trend for funding organisations to support only those programmes designed to deliver easily measurable results”. During the event, a recent essay by Andrew Natsios on what has gone wrong with the results agenda in aid was frequently referred to. Natsios’ message is that “Obsessive Measurement Disorder” (OMD, “… an intellectual dysfunction rooted in the notion that counting everything in government programs will produce better policy choices and improved management”, p. 4) has spread through development agencies to a degree that it now prevents transformational development. He claims that the drive for transparency and accountability has become the major enemy of good development practice, the main obstacle to developmental impact. Natsios is careful to point out that the results agenda was well intended and produced some desirable change in aid. His focus, though, is on the loss of balance, on the sickening consequences of taking into account only what is measured.

Natsios describes the following main consequences of the managing-for-development-results agenda; in each pair, the first element crowds out the second:

  • Standardized procedures over innovation
  • Quantitative metrics over understanding change processes
  • Management information systems over field impact evaluations
  • Decisions based on management standards over decisions based on effectiveness criteria
  • Mandates over grants
  • Big, well-known northern partners over small, new, local organizations
  • Staff time used for accounting, reporting, controlling and tendering over staff time dedicated to field visits, dialogue with partners and understanding change
  • Staff with compliance qualifications over staff with technical qualifications
  • “Hard science” sectors like health over “soft science” sectors like governance
  • Service delivery with quick, measurable results over social transformation, institution building and policy reform with delayed impact
  • “Development” meaning implementing the MDGs over “development” meaning building institutions that deliver needed services


Development programs that are precisely and easily measurable but non-transformational receive the bulk of funding. Development programs that address needed social and institutional change, are innovative and locally owned, but require time and trust, increasingly go unfunded.

One of the outstanding characteristics of Natsios’ article is its ethnographic style of writing. Andrew Natsios writes as a positioned subject: he tells the story of his and his organization’s struggle in political and institutional force fields since the 1960s. This is an angry man talking, lashing out at Robert McNamara, Condoleezza Rice and many other public figures. He explains the spread of “Obsessive Measurement Disorder” in USAID as the consequence of concrete acts: parliamentary oversight interventions, power fights between institutions, personal animosities and political deals. That makes for fascinating reading. But it is also the most troubling aspect of his essay for me. I share his analysis of the unintended consequences of the results agenda, but I am afraid his explanations of why OMD spread are too contextual. He explains the changes in USAID as the emergent outcome of contextualized developments: unintended consequences. This is good social science practice. Yet things unfolded differently in other aid agencies, and the outcomes are similar. And “Obsessive Measurement Disorder” is spreading not only in development agencies but also in fields like education and health.

So what drives development organizations into “Obsessive Measurement Disorder”? Rosalind Eyben, in her report on the IDS-organized “big push back meeting”, also notes that OMD is not limited to aid, and then provides a list of roughly twenty explanatory factors for OMD. Are we stuck with “everything is context”, or with the German proverb “too many dogs are the rabbit’s death”? I do see some forces structuring emergence. These structuring vectors do not do justice to all the different processes in agencies, but they may explain why different processes have similar outcomes:

  • MfDR is used by aid managers as a mechanism to handle anxiety in an environment marked by high insecurity. Managers of aid organizations face an inherent justification challenge because there is no obvious “right” way to implement aid. The LogFrame was adopted by the development industry in the late 1960s because of this innate necessity to justify what is done and how it is done. Natsios argues that the LogFrame was the infection from which OMD spread. He claims that “USAID took a double or triple dose [of performance and results management] and applied quantitative results-based indicators to areas that made no sense” (p. 37) without being forced to do so from the outside.
  • The introduction of new oversight/accountability procedures or reporting formats in aid organizations (whether well intended or pushed by sceptics of aid) provides arenas in which power relations can be re-negotiated. Bureaucratic forces have won most of those power fights in recent years. Natsios shows that this can become self-perpetuating, with bureaucrats proving the need for more data, reports and controls to justify their own existence.
  • The big foundations as new development funders, with health as one of their main fields of intervention, have contributed to a changed perception of data standards during the past decade: the elevation of randomized controlled trials to “gold standard” was clearly promoted by this part of the aid spectrum.
  • MfDR is an expression of, and a means to, a new definition of “development”: delivering services to meet basic needs (versus creating the institutional and policy framework to have them met sustainably). This re-definition of what “development” means is frequently linked to the success of the Chinese development model and to changes in the political landscape of donor countries: moves towards populism and to the right, which are often linked to our ageing populations.

There is a movement of development professionals working on more constructive approaches. Some ideas for change are:

  1. Breaking the silence on the opportunity costs of MfDR. Making sure development programs lead to desired change and are supported by stakeholders is as necessary a reform agenda as it was 10 years ago, when the MfDR agenda was introduced, or 40 years ago, when the LogFrame became standard operating procedure in our business. Our current problem is that the MfDR agenda was hijacked by bureaucracy. Measurement and reporting are costly endeavors that nowadays consume too much of development workers’ scarce time. One of the holes in the MfDR defense wall is the fact that more data and beefed-up reporting do not necessarily lead to stronger trust in aid, and more robust data does not automatically lead to better evidence-based decision making. We need to challenge the “value for money” of control and accountability procedures by asking to see the results they produce. The charge that scarce resources are being wasted is often a good door opener for discussions on constructive alternatives.
  2. Dispelling the aura of invincibility around the counting logic. When reading Natsios’ article or listening to debates about methods and metrics, one frequently gets the impression of a movement with an air of inevitability. It is important to show that things are also done differently, that there are other trends, that certain organizations have chosen other paths. Understanding why those alternatives exist, how they function and how they are measured and reported on might provide pathways for alternative practices. Some of those alternatives are:
    a) In most aid organizations, budget support and core contributions co-exist with micro-controlled mandates.
    b) In GTZ, ZOPP was replaced with Capacity Works (see my forthcoming blog post on Capacity Works).
    c) In education, while most European national education systems are increasingly being adapted so that pupils are taught what PISA measures, the International Baccalaureate curriculum has developed in a radically different direction.
  3. Developing reflective program steering practices informed by theories of change. Criticizing what went wrong with MfDR is necessary, but not sufficient. The most serious problem we face is, as Natsios puts it, that “those programs that are the most transformational are the least measurable”. Hospital-building-type operations need not worry; they are well served by LogFrame-type planning procedures, critical-path-type monitoring instruments and quantitative results indicators. The real issue is organizational change and social transformation programs. The analysis of aid programs by complexity theorists shows that many current MfDR problems stem from such programs being treated as merely complicated ones (see Ben Ramalingam’s post for a summary). For programs addressing complex change, LogFrame-type planning should be replaced by approaches based on theories of change (see my earlier post on theories of change). Reflective program steering should focus on whether change unfolds as expected. The “progress markers” used in Outcome Mapping are a lean but very powerful tracking system for this, much better suited than result indicators. A critical window that needs observation is the “appropriation hypothesis”, which tells us whether the planning assumptions about how outputs produced by the program are appropriated (used) by the actors of change are correct (I have a blog post on progress markers and the appropriation hypothesis in the pipeline). Michael Quinn Patton’s new book “Developmental Evaluation” is a great inspiration for reflective program steering practice.
  4. Servicing different use contexts with separate measurement and reporting procedures. “Managing for development results” is a politically correct slogan masking the fact that most aid programs today are expected to contribute to four different agendas: partner country development goals, donor country foreign policy interests, agency market positioning objectives, and agency process and product quality requirements. The “burn rate” example provided by Natsios (a program’s ability to disburse funds according to plan) clearly shows the perverse effects a measurement criterion valid for one perspective can have when applied as an absolute result indicator. The essence of the problem is that most programs today are forced to gather too much data and report on too many things, while the data and reporting are not geared enough to their use contexts: a huge effort with little effect. Reflective program steering practice, as sketched above, primarily needs information on how change unfolds and on appropriation practices. There is anecdotal evidence that no amount of quantitative data will convince MPs or the public that taxpayers’ money was well spent on aid, but stories like Richard Gerster’s article on Switzerland’s contribution to lowering Tanzania’s child mortality rate do. Joint monitoring and reporting in partner countries, geared both at partner governments and at partner country populations, remains a largely unresolved issue, despite the excessive resources spent on measurement and reporting. Managers’ worries are hardly influenced by unspecific narratives or tables. They need evidence that programs influence change in partner countries in the desired direction, that programs contribute to donor country self-interests, that the reputation of the agency is served, and that the production of outputs adheres to recognized quality standards.

The colleagues from IDS call the 1990s the “golden years of participation and empowerment”. The past decade, then, was the golden years of basic-needs (MDG) oriented service delivery. What’s coming next? I am worried by colleagues who see the trend towards “Obsessive Measurement Disorder” primarily as a reaction of the aid industry to a new meaning of aid (development for the people, not by the people). If they were right, trying to influence the current MfDR agenda would be a “tail wagging the dog” exercise, an attempt to counter the influence of the socio-political context on aid. I am convinced that the anxiety of decision makers in aid agencies is at least the translation mechanism from the changed socio-political climate to OMD, if not the primary driver of OMD itself. I have seen too many colleagues who supported empowerment approaches as program officers become highly critical once they moved into decision-making positions. Just yesterday a friend told me: “I am getting more and more uneasy signing off those things. You cannot imagine how we scrutinize the annual budget in our municipality. We ask them to redo and redo the calculations. But here, it is millions, just like that. The only justification is: I believe it will work.” We need to become able to convincingly show that those unmeasurable programs deliver on their transformational promises. That’s the real challenge we face, and that’s where MfDR failed.


Comments on “What is wrong with MfDR?”

  1. Excellent post – it was a real relief and satisfaction reading this!

    11 years ago, Meg Wheatley wrote a very similar article on the obsession with measurement, and it’s more topical than ever:

    Thanks, Adrian!

  2. Oh, and there is a second article of hers speaking to the issues you raise, a nice illustration from the educational system in the US:

  3. François Rohner says:

    I read Adrian’s blog post on “What’s wrong with MfDR?” with interest. Four brief remarks:

    1. It took an astonishingly long time until somebody like A. Natsios finally stood up and called the problem by its name: “Obsessive Measurement Disorder” (OMD, “… an intellectual dysfunction rooted in the notion that counting everything in government programs will produce better policy choices and improved management”).
    2. I hope that Natsios’ criticism initiates a sensible discussion, not only in your network but soon also at senior management level and, beyond that, between SDC, SECO and other Federal Offices, as well as with NGOs, in the Consultative Committee and, last but not least, within the OECD DAC.
    3. I agree with Adrian: aid agencies have an obligation to make their supervisory authorities, the media and the public aware that the utility and impact of many projects that are very important for a country’s development cannot always be assessed on the basis of a few easily measurable indicators.
    4. I would recommend that the authors of your network keep their posts brief. Do not forget that your potential readers have little time for supplementary reading… And your aim is surely not that only a few read your posts and react.

    Best regards


  4. I am puzzled by this proposal: “For programs addressing complex change, LogFrame-type planning should be replaced by approaches based on theories of change”

    A good LogFrame should contain a Theory of Change (ToC) in the form of a chain of “if… and… then…” statements connecting each level of the LogFrame (i.e. narrative statement … assumptions … narrative statement).

    The real challenge is to develop better means of representing complex ToCs. For example, where a program operates in 10 different districts of the same country, and each district has some autonomy over how the program is developed. Or where a program has multiple stakeholders, each with a different view of its objectives and strategy.

  5. I read Adrian’s article and the comments with great interest, as I have expressed similar concerns, also to SDC, for quite some time.

    In my view, the issue is “quality management”: how to distinguish good projects from bad ones.

    MfDR is one among other aspects of quality.

    How do we manage quality in a system in which quality is the result of people’s collaboration? There is a discussion on methodologies in the health and education sectors.

    In my assessment, development agencies have not yet linked up with this discussion.
    MfDR does not lead automatically to quality.

    Do not hesitate to contact me if you are interested. 031 351 79 40
