Evaluation Criteria for Using Artificial Intelligence in Development Programs

This year’s technology buzzword is artificial intelligence, which means you’ve already been asked how your organization can incorporate AI and machine learning into your programs.

Hopefully, you answered with this. Or you could be more serious and reply that we are all already using aspects of AI to augment and enhance, not replace, activities we’re already doing, such as running natural language chatbots and applying pattern recognition to satellite imagery.

Yet, as AI is such a new technology, there are few, if any, resources available to thoroughly evaluate the what, where, and how of using AI in our programs. So far, there are four strong publications to ground our thinking about this new technology:

While each of these publications advances our understanding of AI, we are still missing a foundational document.

Evaluation Criteria in Designing Artificial Intelligence Activities

We need a set of criteria to evaluate how we design and develop AI systems, to ensure that we are being responsible with this new technology and honoring the simplest and strongest ethical code: do no harm.

That was the focus of the Technology Salon on “How to Evaluate Artificial Intelligence Use Cases for Development Programs?” As part of the event, we developed an evaluation framework for artificial intelligence solutions with guidance from these thought leaders:

Salon members helped draft an AI evaluation framework that built on the Principles for Digital Development to create an approach we can all use in our international development programming.

Please review and edit the draft AI evaluation framework here.

Your input is specifically requested to improve this document, which will serve as the foundation for a future publication.

Humans Are Still Central to Artificial Intelligence

The need for human input and control in every aspect of artificial intelligence activities flowed throughout the Technology Salon and comes through in the draft AI framework. Core ideas included:

It’s our responsibility to explain AI

As development practitioners and technology experts, it’s our responsibility to make sure that AI applications and their components (data, algorithms, outputs) are explained in a way that our constituents can understand.

We should augment humans, not replace them

We need to focus the conversation on how AI can augment human decision making and extend our reach, building on the much-needed human touch. This runs counter to one current narrative that AI is made to replace human efforts.

Data divides drive many concerns

Like digital divides, there are many data divides. One of the largest is the basic lack of data on the constituents we serve, the very data needed to train, use, and validate AI. That gap drives the use of proxy data, which can radically increase bias in results.
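To make that risk concrete, here is a minimal, entirely hypothetical sketch (synthetic data, invented variable names, not from the Salon discussion) of how a proxy feature that tracks the well-represented group better than the under-represented one can produce unequal accuracy, even when underlying needs are identical:

```python
# Hypothetical illustration: proxy data that works well for one group and
# poorly for another yields biased model performance.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# True (unobserved) need for a service is equally common in both groups.
group = rng.integers(0, 2, n)          # 0 = well-represented, 1 = under-represented
true_need = rng.random(n) < 0.5

# A proxy signal (e.g., digital activity) tracks need closely for group 0
# but noisily for group 1, because group 1 is missing from the source data.
noise = np.where(group == 0, 0.1, 0.45)
proxy = true_need.astype(float) + rng.normal(0.0, 1.0, n) * noise

model = LogisticRegression().fit(proxy.reshape(-1, 1), true_need)
pred = model.predict(proxy.reshape(-1, 1))

for g in (0, 1):
    acc = (pred[group == g] == true_need[group == g]).mean()
    print(f"group {g}: accuracy {acc:.2f}")
# The under-represented group sees markedly lower accuracy,
# even though the underlying need is the same across groups.
```

The point of the sketch is not the specific numbers but the mechanism: when we substitute proxy data for missing constituent data, errors concentrate in the populations the proxy represents least well.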

Overall, with AI rising up the hype cycle to the peak of inflated expectations, we need to continue discussions like this one to make sure that we can utilize AI for good.
