How to Safeguard Constituents in Digital Development

At the recent Technology Salon London, we debated How to Safeguard Our Constituents in Digital Development? with discussants (under the Chatham House rule) from a wide range of development organizations including DFID, DAI, Accenture Development Partnerships, Save the Children, Plan International, Chemonics, Comic Relief, World Wide Web Foundation, Cherie Blair Foundation, Child Hope UK, Christian Aid, Internews, On Our Radar, World Vision, IIED and others.

The discussion covered a range of issues and perspectives related to safeguarding, but one part that really stood out for me was the idea of “zero tolerance” when it comes to prevention of harm. At first this seemed a no-brainer, and something the room fairly unanimously agreed with, but on deeper discussion things quickly became more nuanced.

First, what do we really mean by zero tolerance? Can we ever truly guarantee that not a single person will be harmed by our work? Unlikely. But how close to that guarantee can we get?

Two different kinds of risk were highlighted in the conversation, and they may need to be considered differently:

Predictable, preventable risks

These types of risks are simply not acceptable, and we should be able to stay as close as possible to zero tolerance of them – for example, we know that a CD full of unencrypted personal data could be left on a train, and we know that hackers exist, so we should not create the circumstances in which these kinds of mistakes can be made.

One way to do this is to remove the ability to download unencrypted personal data to any device or media. There is rarely an excuse for designing programs in which these kinds of mistakes happen or are facilitated, and these kinds of failures can and should be eliminated.
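As a purely illustrative sketch of that idea (not something prescribed at the Salon), an export routine can be written so that no plaintext path exists at all: the only way to write records out is through encryption. The field names and the use of the Python cryptography library's Fernet recipe below are assumptions for illustration.

```python
# Illustrative sketch: the only export path writes ciphertext, never plaintext.
# Assumes the `cryptography` package (pip install cryptography); field names are invented.
import json
from cryptography.fernet import Fernet

def export_records(records, path, key: bytes) -> None:
    """Write records to `path` as ciphertext only; no plaintext export exists."""
    ciphertext = Fernet(key).encrypt(json.dumps(records).encode("utf-8"))
    with open(path, "wb") as f:
        f.write(ciphertext)

# Usage: the key should live in a managed secret store, never alongside the exported file.
key = Fernet.generate_key()
export_records([{"name": "…", "location": "…"}], "beneficiaries.enc", key)
```

The design choice matters more than the library: if the tooling simply has no "save as plaintext" option, the CD-on-a-train failure mode cannot happen by accident.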

Unpredictable risks by virtue of trying something new

The second kind of risk is subtler. The only way to prevent all unpredictable risks is never to try anything new. While few would suggest that the aid and development sectors should do this and ignore modern technology, approaches and innovation, there is a very pressing question of how far along the spectrum from established to new it is valid to go.

How can practitioners balance the need to improve practice and impact through innovation with the need to avoid risky experiments on vulnerable populations (often conducted without their informed consent)?

This theme is widely discussed in the humanitarian space – a helpful overview is Do no harm: A taxonomy of the challenges of humanitarian experimentation.

This is a hugely important topic, and a morning's conversation only scratched the surface, but a few suggestions emerged that might be useful for anyone else struggling with this tension between mitigating risks and harm and seeking to innovate and use emerging technologies to improve their work:

1. What can we learn from work in other sectors?

When it comes to medical research, there are all sorts of checks and balances in place – for example, over what kind of lab testing is required before anything can be used with real populations.

The same shared understanding and approach does not seem to exist in most of aid and development. Despite a plethora of pilots and experiments, we lack a robust system for learning from them and evaluating the positive and negative consequences of new technology, leaving it to individual practitioners to decide what they do and don't use. A greater level of rigour and common processes, along with ensuring the results of all such trials are published, shared, and independently monitored, might help.

Other sectors also offer valuable lessons – when new tech is rolled out in commercial environments, it typically goes through a barrage of tests for validity, security and scale. This is not always the case in the development sector, which can lead to preventable and predictable failures.
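By way of a purely illustrative sketch of what such pre-rollout checks might look like in practice (the register_beneficiary function and its rules are invented for this example, not drawn from the Salon), even a handful of automated tests covering validity and basic abuse cases can catch the most predictable failures before deployment:

```python
# Illustrative sketch (pytest style): a hypothetical validation step and the tests that guard it.
import re
import pytest

def register_beneficiary(record: dict) -> dict:
    """Hypothetical registration step: refuse empty or markup-laden names."""
    name = record.get("name", "")
    if not name or re.search(r"[<>]", name):
        raise ValueError("invalid name")
    return record

def test_rejects_empty_name():
    # Validity: malformed records should never be silently accepted.
    with pytest.raises(ValueError):
        register_beneficiary({"name": ""})

def test_rejects_markup_injection():
    # Security: obvious injection attempts should be refused, not stored.
    with pytest.raises(ValueError):
        register_beneficiary({"name": "<script>alert(1)</script>"})
```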

2. Design with the user

Adopting the core mantras of the Digital Principles and engaging real users and stakeholders from the earliest stages of design enables those users to foresee risks that development practitioners might not be able to see or predict.

3. Risks extend to structures and systems

Some supposedly ‘unpredictable’ risks are only unpredictable if we ignore established discourse from feminism and other disciplines on power, control and structural or systemic inequality. Programs designed with these lenses in mind from the start may find it easier to avoid the less obvious harms that digital technology and scale can bring: job displacement, impacts on and from local cultures, and so on.

4. Give people agency

Finally, inform and teach people about the risks that might occur, so they are better able to self-police as individuals and to monitor as communities, rather than relying on the top-down approach common to much development and aid work – an approach that can be unintentionally exacerbated by the addition of large-scale digital technologies.

The Ongoing Challenges of Safeguarding

What was most clear from the conversations is that the challenges of safeguarding and preventing harm are huge, important, and intricately intertwined with other conversations around the use of technology in the development sector.

This is clearly not a challenge any one organization will solve alone, but perhaps more open conversations like this Salon will help to move things forward.

Until then, there is a wealth of existing resources that might help practitioners struggling with safeguarding in their own work; some examples include:

This blog represents the views of the author and of attendees of the September Tech Salon, and does not represent the views of DIAL or any of the organizations mentioned.
