One country, seven districts, seven international NGOs, five million dollars (excluding food commodity monetization), and…
How do you begin to collect, analyze and, more importantly, utilize data on 161 indicators in this scenario?
Got me. And I was the Technical Advisor on Monitoring & Evaluation for this project – ha!
What a beautiful (i.e., funded) proposal it was, concocted by the lead INGO's most well-respected, imported-from-headquarters PhDs without any meaningful consultation with implementing partners, i.e., the local organizations who do the actual work with communities, let alone the people they intended to serve.
I've been working in monitoring and evaluation (M&E) for over ten years now, and during this time I've seen, across the international development sector as a whole, an increasing desperation to find "evidence" of what is often inherently beyond logic and induction, a concern also discussed recently by Rick Davies, Ramaswami Balasubramaniam, Lawrence Haddad, Dennis Whittle, Ben Ramalingam, and Alanna Shaikh, just to name a few. Caroline Preston also reports on "the data dash" of recent years in the philanthropic sector, which too often overlooks the "personal relationships, social networks, family and community dynamics, passion for causes, and other factors" that shape change.
The delusion of thinking you can conquer your world leads you to lose your soul. ~Cornel West
I see the development and philanthropic sectors locked in an increasing fixation on solving the problem of poverty through reductive ways of measurement. However, abstract metrics and experimental designs are quite far from the intimate, difficult, and complex factors at play at the community level. Thus I believe it is time to examine our belief that there are technocratic, precise ways of measuring progress, and our willingness to make consequential judgments based on these measures. The business sector seems to have a healthier relationship with risk, perhaps something we need to explore further in the development sector.
M&E implemented solely for the purpose of accountability time and again fails to result in improved programming and, in many cases, undermines the effectiveness of the very interventions it is trying to measure. (See related research paper by Blackett-Dibinga & Sussman.) And the latest trend toward using the "gold standard" of randomized controlled trials is especially troubling when one is talking about community-level initiatives. Imposing such incredibly risk-averse behavior by evaluating every single intervention can most certainly drain the time and scarce resources of people who are in the process of organizing at the local level, let alone the development professionals engaged in supporting them.
As someone who has worked extensively in building the M&E capacity of grassroots organizations in Africa, what I have found is that abstract metrics or research frameworks don't often help people understand their relationship to improving the well-being of those they serve. Rather than using any theory or logframes, local leaders, as members of a community, read real-time trends by observing what's happening on the ground, which, in turn, drives intuition. Most Significant Change, Outcome Mapping, and The Good Enough Guide are examples of alternative approaches to M&E that are better grounded in this reality.
For the past few years, I have led the development of an innovative training and mentoring approach to monitoring and evaluation among local indigenous organizations in Lesotho, Zambia, and Malawi. The approach is based on the premise that people at the grassroots level have the most expertise in defining and measuring success, drawing on internal reflection about their own values and goals. One mantra at its core—M&E should never detract from the work at hand, which is serving people—kept it grounded in the practicalities of day-to-day work with families and communities. The approach's success has also come from its focus on making M&E accessible through the "de-technicalization" of M&E concepts and practical exercises that utilized existing data from the organizations themselves, further developing critical reflection skills. (You can see an overview presentation of this training approach here.) Trained groups continue to demonstrate the effectiveness of the approach through their program adaptation, enhanced advocacy, and increased fundraising.
Those who work selfishly for results are miserable. ~Sri Krishna, Bhagavad Gita, The Song of God
Make no mistake: I'm not arguing that more rigorous evaluation techniques have no place, especially for larger, publicly-funded projects. I myself am a big data geek. Give me a database full of numbers or a pile of raw stories to play with and I'm in my bliss, identifying trends and constructing potential causal inferences.
But as practitioners, we must always consider what is the appropriate cost and complexity needed for evaluation, especially given the size, scope, level of intervention, and length of the program. We must also aim for proportional expectations so we ensure M&E is a tool for learning and improvement, not just policing. Yet again, how matters.
Yes, let's pursue and obtain useful data from the ground, but at a scale at which information can be easily generated and acted upon by those we are trying to serve. With 161 indicators, it surely cannot be.
My hope is that the dominance of quantitative statistical information as the sole, authoritative source of knowledge can continue to be challenged so that we embrace much richer ways of thinking about and understanding development.
This post originally appeared at: http://www.how-matters.org/2010/11/17/161-indicators/