I recently came across a fascinating and potentially groundbreaking blog called Admitting Failure. It is a blog created by staff at Engineers Without Borders Canada (EWBC) that chronicles the organization's failures in order to learn from them. The most recent post is by someone from MSF, describing how they failed to learn a lesson about emphasizing comprehensive health care rather than infectious disease treatment--even though countless evaluations had highlighted that very lesson.

And it got me thinking: how many organizations have institutionalized and/or centralized processes to transfer lessons learned from both successes and failures? And what do these processes look like?

Here at Search for Common Ground, we have several ways of supporting programme learning and the transfer and institutionalization of lessons learned. Central to this is our commitment to publicly share evaluation reports on our website. At the programme level, after each evaluation has been completed we organize a post-evaluation review, where we reflect on the findings and recommendations and decide how we can use this information to improve our future programming. We also circulate key evaluation findings and lessons learned across the organization so that other colleagues can learn from them, and we look for opportunities to replicate successful models across our different country programmes. For example, we took our Radio for Peacebuilding in Africa model and replicated it in Asia. We have also taken a successful women’s empowerment model developed in Indonesia and adapted it in Pakistan, Burundi, Sri Lanka and Zanzibar.

As well as successes, evaluations often show what has not worked, and we should be as keen to learn from those insights as from our successes.

So I am curious, how does your organization learn from failure? And how does your organization actually learn 'lessons learned'?

Join the conversation now at http://dmeforpeace.org/discuss/how-does-your-organization-learn-fai....

Replies to This Discussion

I am afraid we still don't have very strong mechanisms in our organisation to learn from our practice, but I do want to share a methodology on evaluation that seemed really interesting to me - though it focuses on learning from success rather than from failure:

I recommend having a look at the methodology "Learning from Success" (there are others, such as "Appreciative Inquiry", or even - though slightly different - "Most Significant Change"), which seems very interesting to me. These approaches are based on the idea that learning from failure is fine, but it is not the only thing: it is also important to learn from what worked, so all the questions we ask, and all the items we take into account in the monitoring and evaluation of projects, need to cover both aspects, the positive and the negative...

Interesting and coherent with (positive peace)building!

Thanks for the links anyway. Really interesting.

Thanks for the reply, Cecile. I certainly don't mean to suggest that learning from failure is the only way to learn, but what I found so groundbreaking about that blog is the transparent and honest manner in which failures (or failures to learn) are aired.

The Learning from Success model certainly seems to be relatively popular right now, especially the MSC methodology in peacebuilding. But does this emphasis on success restrict our ability to learn the most from our work, and therefore to improve our work? I find it necessary to examine both successes and failures, and to acknowledge them as such.

To quote from Cheyanne Scharbatke-Church's latest article on peacebuilding evaluation (Evaluating Peacebuilding: Not Yet All It Could Be, Berghof Handbook for Conflict Transformation 2011, p. 473-4) (apologies for the long quote, but I find it directly relates to this discussion): 

Given peacebuilders’ emphasis on evaluations contributing to learning, the question arises: is the field actually learning from the evaluative process and products? In other words, does the average evaluation report provide concrete input that enables single- or double-loop learning? It is difficult to speak for the field as a whole, as there certainly are some quality evaluation processes that have catalysed learning, however evidence suggests that this is not yet the norm.

This is particularly true at the level where a programme evaluation may have contributed to team learning, but is not doing so for the broader peacebuilding community. The reasons for this vary according to agency types, sizes and contexts; however, there are some clear commonalities across the field. Despite a commonly heard commitment to learning by many, most agencies do not have an organisational culture where learning is a central pillar (Hopp/Unger 2009). This is visible in the absence of supported structures, opportunities or time for staff to allocate to substantive learning. Yet if this is not supported by action it is difficult to translate lessons identified in an evaluation to lessons learned, as there are no set processes that enable this uptake.

Furthermore, there is the question of incentives and consequences. Outside individual commitment, where are the incentives within organisations to encourage staff to prioritise learning? Conversely there are few, if any, direct tangible consequences to organisations or individuals for not learning. Without this enabling context, it becomes very difficult to ensure that evaluations indeed contribute to learning.

A second common constraint is that many evaluations are not designed to support learning, despite the organisation’s stated commitment to this end. The intended ‘learners’ may not be clearly identified or may be too numerable to be realistic. The Terms of Reference may focus on interesting areas of inquiry that are simply not actionable, regardless of the findings. The report deadline may be weeks after a key learning opportunity, such as a new proposal or restructuring of the organisation. Learning also requires an evaluation that is deemed credible, therefore, poor-quality evaluations, of which there are far more than high-quality ones, are not good material for producing new knowledge.

Moreover, a report identifying lessons is not the same as lessons learned. Thus a process must be planned to facilitate this transfer, which frequently gets omitted due to lack of time or simply because people are not aware of it. It is within these issues that lack of evaluation capacity can be seen, as most practitioners do not have enough experience to bring the goal of the evaluation in line with its subsequent actions.

Finally, learning is not isolated to the evaluation report, but can also occur throughout the evaluation itself. Called “process use”, it can offer as much learning as the final report when intentionally planned. Yet, process use is rarely fully capitalised upon in peacebuilding evaluation processes.

'identifying lessons is not the same as lessons learned' - that's a very important point about an over-used and abused concept. Thanks for posting that quote, I will use it!

In GPPAC, we try to trace and keep track of the changes we make as a result of 'lessons' from monitoring reports, monitoring days (reflecting on reports) and evaluation workshops. The question is: does noting that we changed the way we did things mean that we have 'learned the lesson', or have we only learned it once the change made a positive difference...?

Not quite answering the original question (how do you respond to failure?), but by far the most difficult terrain to cross is the way small organisations have to shape-shift like an elastic ball to fit criteria summoned up out of nowhere by a civil servant seated in an office in London or Brussels or Washington/NYC. It seems to me that someone with no particular expertise in, say, training future women leaders in sub-Saharan Africa comes up with a set of criteria that might better suit a survey of polar bears' mating habits in the time of Captain Scott. It would be a far better use of the world's taxpayers' cash if we at the sharp end could identify the need, put together a costed programme that would best address that need, compete with others with similar on-the-ground expertise, and then get on with it, without constantly looking over our shoulders at whether we have twisted and turned and diverted (or lied) enough to fit the faulty criteria.

Tim Symonds, Eyecatcher/Shevolution 

Thanks. I have been thinking about this topic for some time, as there are many failures and challenges. Often we are hesitant to report on them publicly, given the need to demonstrate progress, secure continued funding, etc. At some point I might try to do a research project on this, but I would certainly enjoy hearing what other efforts are being undertaken around this within organizations or more publicly.

These replies have been very interesting, and I am glad others are thinking about this subject as well.

Perhaps I started off this post too narrowly; should we instead first seek to define 'failure' in peacebuilding? I would imagine that each organization will define it differently--if there is a definition at all. And if we seek to define failure, does that also suggest that we develop indicators not just for success but also for failure (as part of the logframe or Terms of Reference)? Are any organizations currently doing this?

To connect this back to the point Cheyanne raises in her article on single- and double-loop learning: it strikes me that examining failure--either through honest and transparent sharing, or through research projects that might help conceal the organization's identity (we do, after all, have to work within the existing constraints of our field, including competition for resources and a hesitancy to admit failure)--may enable greater double-loop learning. If something failed, we are, I think, more likely to re-examine the fundamental assumptions of the project than if it was successful.

This also seems to directly connect to a conversation on PCDN Forums by Mark Rogers on failed theories of change.

All great questions and something I think worthy of much more discussion and reflection both in academia and practice.

Interesting. I never thought of defining indicators of failure, but it could make sense if we do risk analysis to "do no harm".

I want to share a link to a working group I discovered not long ago, which is reflecting from practice on these issues: https://www.frient.de/en/home.html. A very long-term effort (for ten years now!) to reflect on conflict sensitivity (though whether they learn from it, I don't know).

I relate this to another issue, which is the responsibility of funders in promoting this knowledge. When speaking to civil servants in the cooperation agency in my country, they all claim they would prefer organisations to undertake a real and deep evaluation, recognising their mistakes and using evaluation as a learning process, rather than a formal exercise that only certifies they have accomplished the task. But most organisations are still reluctant to share this information.

Maybe it is because funders are not clear enough in saying that if an evaluation report shows mistakes, this won't mean the organisation will lose further funding. But there are also other mechanisms that could help, such as:

- Allowing organisations to spend a part of the budget on deep evaluations (this does exist in some countries or organisations, but not all)

- Establishing mechanisms to share this information easily (FriEnt, or Admitting Failure, can be useful tools for that, but information on evaluations should cover ALL the funded projects, as a transparency measure)

- Making sure that the information is not only shared but also used for future programming, for example by requiring organisations to define structural criteria after identifying lessons, or by having the funders themselves define criteria (if it makes any sense to apply lessons learned from one context to another - which is another debate) for future projects.

-...?

Thanks for raising this debate!
