Technology Salon



a discussion at the intersection of technology and development

How can ICTs improve Monitoring, Evaluation and Learning (MEL)?


“Don’t let the tech wag the MEL dog.” This quote sums up the conversation from the London Tech Salon focused on “How can ICTs improve Monitoring, Evaluation and Learning (MEL)?” Essentially, it means that when exploring the applicability of ICTs, we must start from what MEL sets out to achieve rather than being driven by technology solutions.

Conversations at a recent London Technology Salon, hosted at Mozilla in London, on “Technology, teaching and learning – What can we learn from the evidence?“, reminded us that technology can only take us so far in underpinning MEL processes. It is one thing to capture data to understand “the what”, and there are clear examples where technology enables more efficient capture of measurable indicators. It is quite another to unpick and understand “the why” as we begin to analyse data – a reminder that technology can never replace sound research methodology and principles of evaluation.

George Flatters (Program Development Manager of Aptivate) and Eleanor Harrison (Chief Executive of GlobalGiving UK) led the discussion, providing insights into the challenges of MEL in relation to collecting and using data, and selecting and implementing tools.

Some participants shared that MEL can often be “funder focused”, as organisations are required to present impact to donors at a central level. There are new opportunities to look at this differently as technology is increasingly used more dynamically for day-to-day conversations on the ground. This allows staff to respond in near real time and to track metrics connected to service delivery. By harnessing this ubiquity of technology and the new ways in which people communicate, the group emphasised the need for MEL to be more participatory: empowering the people most proximate to MEL data capture and prioritising feedback to the people the data is about. In this respect, we kept coming back to the point that we always need to ask: MEL for whom?

There was recognition in the room that increased access to such volumes of data is not always a good thing, however. (Someone even described it as data haemorrhaging!) As organisations accumulate and aggregate data, there can be a trade-off with precision and proportionality. This means that as we ask an ever-expanding list of questions, we should keep asking ourselves: “what do people really need to know?” Someone suggested that we might conceive of not only utilising big data, but harnessing the value of small data too. There was some further interesting discussion on using qualitative stories to make sense of quantitative data.

Capturing only the data that is necessary is a crucial pillar of the ongoing responsible data agenda, as we explore our responsibility in using the information we collect. This is not only to reduce the burden on communities in the process of data collection, but also to ensure contributors are fairly represented by the information they have entrusted to others. While discussing ethical issues, questions were also raised: who owns the data anyway? And how do we respect the rights of groups of people as information is aggregated, when the norm is to focus on the individual?

The crucial implication of using technology for MEL is how to integrate solutions into the design itself – as one participant put it, “just throwing a phone into the mix isn’t going to help.” Once the design requirements that are conducive to technology have been defined, the group explored how technology for MEL remains a confusing space to navigate. So we kicked off with a great conversation on the challenges of tool selection, summarised by three core problems:

  1. How can MEL staff find appropriate technology and navigate available tools? There are complex options available, so how should we evaluate tools and decide what to invest in or when we should create new tools?
  2. How can we enable interoperability and promote data sharing? Many systems are not compatible with other tools and we have different systems of codification.
  3. How can we make tools attainable and sustainable, given common needs in MEL? Many tools are expensive, and out of reach for organisations that lack the skills and resources to rely on open source alternatives. What should we pool and share?

Furthermore, we need investment to train, maintain and sustain these tools and must consider how these factors inform the initial system selection.

Overall, there was a strong feeling coming from the Tech Salon that we need to get away from making decisions without data and we need to encourage others to understand the potential ways technology can support this process. No doubt, the risk remains that people make decisions based on bad data so we must challenge assumptions about what data means and not underestimate the analysis or interpretation required. There was a strong feeling that when considering different layers of MEL, we should rethink who is getting the benefit and explore routes to systems which are more participatory and driven by feedback.

This post was written by Amy O’Donnell of Oxfam GB
