No one ever fails in ICT4D. Isn’t that amazing! Technologies come and go quickly – bye-bye PDAs, Windows Vista, and soon Nokia – yet in ICT4D, every project has impact and we never fail. We just have lessons learned. In fact, can you name a single technology program that has publicly stated that it failed?
This is Oscar Night Syndrome, the need to always look good, and ICT4D is deep in denial about it. At the Best Practices in Measurement and Evaluation Technology Salon we dove into the need for monitoring and evaluation in ICT4D and the tools that can help us do that. What did we find?
ICT4D does not have an M&E culture
Now, ICT projects do not exist in a vacuum. Many funders have indicators they expect a project to affect, and they often require some level of M&E. But this evaluation is often an afterthought at best, where inputs (number of trainings) and outputs (number of people trained) are counted but there isn’t any qualitative analysis (how did attendees’ mindsets change after the training?).
Add to this the need to show results to the donor, donors’ minimal tolerance for failure or anything else that could be seen as waste, and the current climate of “accountability” in political circles, and it is a foolish organization indeed that doesn’t turn in a shiny result complete with great storylines and images.
Just think about all the lessons (re)learned in every project, buried deep in a report, while a picture of a woman smiling with a mobile phone graces the cover and everything is rosy in the press release.
How can we change that?
Our focus at the Salon was how to change the current M&E climate in ICT4D – how to better monitor, measure, and evaluate the projects we work on to improve our outcomes and our profession. We identified four areas where we could improve M&E in ICT4D.
1. Quasi-Experiments
In health, randomized control trials (RCTs) are used extensively for impact evaluation. Technically called “experiments,” RCTs have a few limitations – they are expensive, take a while, and can only test one hypothesis at a time. A better option for the developing world context, and for ICT especially, is the “quasi-experiment.”
Quasi-experiments are exactly like experiments (or RCTs) but without random assignment to control groups – almost the same, but more feasible and possibly more ethical. Quasi-experiments can also accommodate the rapid change in technology ecosystems.
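To make that concrete, here is a minimal sketch of one common quasi-experimental analysis, a difference-in-differences comparison, using made-up baseline and endline scores for a treatment group and a non-randomized comparison group (all numbers are illustrative, not real project data):

```python
# Minimal difference-in-differences sketch for a quasi-experiment.
# Assumes hypothetical baseline/endline outcome scores for a treatment
# group (got the ICT intervention) and a comparison group (did not).
# All numbers are illustrative, not real project data.

treatment_baseline = [3.1, 2.8, 3.4, 3.0]   # e.g., literacy scores before
treatment_endline  = [4.2, 3.9, 4.5, 4.1]   # ...after the intervention
comparison_baseline = [3.0, 3.2, 2.9, 3.1]
comparison_endline  = [3.3, 3.5, 3.1, 3.4]

def mean(xs):
    return sum(xs) / len(xs)

# Change in each group over the project period
treatment_change = mean(treatment_endline) - mean(treatment_baseline)
comparison_change = mean(comparison_endline) - mean(comparison_baseline)

# The difference-in-differences estimate: how much more the treatment
# group improved than the comparison group, netting out background
# trends that affect both groups even without random assignment.
did_estimate = treatment_change - comparison_change
print(f"Treatment change:  {treatment_change:+.2f}")
print(f"Comparison change: {comparison_change:+.2f}")
print(f"DiD estimate:      {did_estimate:+.2f}")
```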
Regardless of the experimentation level, there is no excuse for not continuously measuring outcomes – now and for years after the project ends. How else can we really know the impact of our work unless we track it beyond the 1-3 year grant cycle?
2. Qualitative Analysis
Everyone loves numbers, yet often the best results are qualitative – changes in beneficiary perceptions that cannot be captured in numbers. How can we bring these real yet “fuzzy” results into ICT4D M&E? In-person interviews, observations, focus groups, and the like, performed in-country, are best. Qualitative results can also be used in the formative stages of project design to guide future actions and form the basis of quantitative statistical monitoring.
One way to cheaply collect direct qualitative feedback is to monitor social networks like Twitter and Facebook to see what your beneficiaries are saying about the project. Just be sure to remember user bias: the users of Facebook and Twitter tend to be the elite in the developing world. Nothing can replace the face-to-face.
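As a rough illustration of how simple such monitoring can be, here is a sketch that scans a hypothetical CSV export of posts for a project keyword so a human reviewer can skim the mentions. The file name, column names, and keywords are all assumptions; real platform APIs and export formats vary:

```python
# Illustrative sketch: scanning an exported file of social media posts
# for mentions of a project, as a cheap qualitative signal. Assumes a
# hypothetical CSV export ("posts.csv") with 'author' and 'text' columns.
import csv

PROJECT_KEYWORDS = {"ourproject", "#ourproject"}  # hypothetical hashtag

mentions = []
with open("posts.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        text = row["text"].lower()
        if any(kw in text for kw in PROJECT_KEYWORDS):
            mentions.append((row["author"], row["text"]))

print(f"{len(mentions)} posts mention the project")
for author, text in mentions[:10]:  # skim a sample for qualitative review
    print(f"- @{author}: {text}")
```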
3. Common Standards
In developing this Salon, I thought M&E stood for “measurement and evaluation” when it actually stands for “monitoring and evaluation” – just one example of the need for a common M&E language. From there, we can dive deep into the different measurements that ICT affords – from click rates to retweets – yet we need to remember that we should be targeting a non-technology audience, and they should understand our terms.
Even better than a common language would be a common ICT4D M&E framework – something along the lines of NPOKI, a health-centric performance management system shared among different health organizations. This multi-organizational M&E framework allows for an apples-to-apples comparison of project effectiveness that transcends specific projects or even organizations.
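To show what a common framework buys you, here is a tiny hypothetical sketch of a shared indicator schema – not NPOKI’s actual data model – in which two organizations report against the same indicator code and unit, making the apples-to-apples comparison trivial:

```python
# Sketch of what a shared indicator definition might look like, so two
# organizations report the same thing the same way. Everything below is
# a hypothetical illustration, not any real framework's schema.
from dataclasses import dataclass

@dataclass
class Indicator:
    code: str   # shared identifier across organizations
    name: str
    unit: str   # an agreed unit avoids apples-to-oranges reporting

@dataclass
class Report:
    organization: str
    indicator: Indicator
    value: float

TRAINED = Indicator("ICT-01", "People trained in basic ICT skills", "people")

reports = [
    Report("Org A", TRAINED, 420.0),
    Report("Org B", TRAINED, 310.0),
]

# Because both reports use the same indicator code and unit,
# comparing organizations is a straightforward read-off.
for r in reports:
    print(f"{r.organization}: {r.value:.0f} {r.indicator.unit} ({r.indicator.code})")
```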
4. Implementation Evaluations
Yes, your project may have great outcomes, but was your implementation of that project the best it could be? What about measuring ICT implementations – the very act of deploying a project? By not engaging in implementation evaluations, be they formal reviews or at least internal ones, we are missing great opportunities to learn how to do our jobs better and improve the ICT4D profession as a whole. I know I would like to know how I compare with my peers in ICT deployment. Am I faster, better, cheaper, or do I just talk a good game?
World Vision has an organization-wide programme management information system that tracks common indicators in both project delivery and outcomes, helping the organization pinpoint good practices and effective programming. NetHope is also investigating a consortium-wide M&E system to help member organizations better allocate internal resources.
Creating Space for Failure
While these are four tools we can use to build an M&E culture, we must also change the mindset of ICT4D practitioners if we expect any of these tools to actually be used. One way to do that is to hold regular meetings where we can talk about what works and what doesn’t – which is exactly what the Technology Salon is. Another way is a Fail Faire – a positive celebration of failure.
So coming this fall will be a second Fail Faire in Washington DC, building on last year’s event and other internal Faires. If you wanna be one of the cool kids who helps organize it, be sure to email me today!
Together we can change this Oscar Night Syndrome and create a real monitoring and evaluation culture in the information and communication technologies for development community.
Prove me wrong – show me in the comments how ICT4D has an M&E culture. (Wayan Vota)
More than a cultural issue, I think this is an issue of professionalism versus amateurism. Everyone who manages projects professionally sets goals to be achieved, and these goals should be measurable. Therefore, a culture of M&E is implicit in the management of a project in any field.
If you can’t find this culture in ICT4D, perhaps it is due to the field’s “amateur origins,” where the heart has been placed before the head. In these cases, a change to professional management of ICT4D projects, including some kind of metrics, is necessary.
In addition to these comments, I would like to contribute a very simple but useful tool to improve M&E. To assess the quality and quantity of an implementation, you can use competence-assessment methods (developed especially for training, but applicable to other fields as well).
So, knowing what you want to improve, you set up some evaluation points. With them, you produce a survey to be completed at least twice: before and after the project (or more often if you want intermediate evaluations). Comparing the questionnaires shows you the evolution, and statistical analysis of the data highlights the successes and the areas needing improvement.
This assessment system provides easily understandable information and brings satisfaction to donors, organizations, and project beneficiaries.
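As a minimal sketch of that before/after comparison – with made-up competence scores and a standard paired t-test via SciPy, purely for illustration:

```python
# Sketch of the before/after survey comparison described above, using a
# paired t-test on each respondent's pre- and post-project competence
# scores. Scores are made up; real surveys need validated instruments.
from scipy import stats

pre_scores  = [2.0, 3.0, 2.5, 3.5, 2.0, 3.0, 2.5, 4.0]  # same respondents,
post_scores = [3.5, 3.5, 3.0, 4.5, 3.0, 4.0, 3.5, 4.5]  # same order

t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)

avg_gain = sum(b - a for a, b in zip(pre_scores, post_scores)) / len(pre_scores)
print(f"Average gain: {avg_gain:+.2f} points")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the pre/post change is unlikely to be chance –
# the kind of easily understandable information donors respond to.
```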
If we accept that professionalism exists but that a project’s M&E may be conditioned by donor requests or funding, then I agree that it is a poorly assimilated cultural issue.
Including this culture can sometimes mean giving a project a smaller scope in order to get visible M&E. But for the transparency of projects and a clear destination for aid resources, we cannot give up on M&E.
It may also be necessary to “train” donors to require M&E, and to show M&E during the process as a real objective of the project.
I think a major aspect is the fear of failure – on the donor’s side as much as the project’s. It takes a lot of confidence to admit your project isn’t succeeding, and when budgets are tight, anything “failing” gets cut. This can have an impact on current employment as much as on future funding opportunities.
I insist that the key has to be training the donor. We need to replace this fear of failure with learning from mistakes. Only then will we succeed in assimilating the culture of M&E across the whole chain, and we’ll improve what is important: the impact of ICT4D. We must see this as an opportunity to innovate, not as an insurmountable problem.
Mistakes make us human. Rectifying mistakes makes us wise. Learning from mistakes makes us innovative.
Some good news on the M&E front: these skills are being increasingly emphasized within development practitioner programs. At Columbia University’s School of International and Public Affairs, Evaluating Development Results had a large waitlist, and Patricia Mechael & Matt Berg’s ICT and Development class had a module dedicated to M&E and ICT.
So although the culture in development may not quite be there yet for M&E, there is a definite movement toward it, and a realization among the next generation of development practitioners that M&E is an essential, and useful, component of development work (when designed correctly).
On a tangent: we may also need to cut organizations and projects some slack. M&E isn’t easy. Once you develop the evaluation purpose, the indicators, and the process for measurement, there’s then this whole use component…how do we actually use monitoring for adaptive management, and how do we utilize evaluation for learning in our future projects? Add to this the changing information landscape (SMS, crowdsourcing, Twitter, Facebook), which is bringing decision makers more information sources at greater speeds. So now there’s this organizational change component where organizations have to assess how they’re going to incorporate the most essential information from this new landscape into their systems. It’s an exciting time, and I think we’re only starting to see development organizations begin to adapt. Would be interested to hear what others think.