Technology Salon

New York

a discussion at the intersection of technology and development

How to Design and Deploy Responsible AI in LMIC Healthcare

Artificial Intelligence (AI) is rapidly reshaping health worldwide. For low- and middle-income countries (LMICs), where workforce shortages, fragile infrastructure, and inequities in access are everyday realities, its impact could be especially transformative.

As we explore both the promise and the risks, our focus is on governance, early lessons, and ethics that ensure AI strengthens — not destabilizes — health systems and improves people’s health and wellbeing.

At a recent Technology Salon NYC – UNGA Week, we asked: how do we design and deploy responsible AI in healthcare, especially in LMICs, where the stakes are highest and the risks most complex?

Patricia Mechael, CEO of Health Enabled, led the session and was joined by:

  • Ricardo Baptista Leite – HealthAI
  • Rebecca Distler – OpenAI
  • Antonio Spina – World Economic Forum
  • Neha Verma – Intelehealth
  • Adele Waugaman – Bill & Melinda Gates Foundation
  • Rositsa Zaimova – Dalberg Data Insights

Trust and Governance as the Starting Point

The greatest bottleneck to AI adoption in health is not the technology itself but trust. Progress will advance only as quickly as clinicians, patients, and policymakers become confident in its safety and reliability.

Oversight must evolve from one-time approvals to ongoing monitoring and surveillance, recognizing that AI models continuously learn and change. International regulatory networks are beginning to form, creating opportunities for LMICs to build shared capacity and leapfrog legacy inefficiencies.

Practical Insights from Deployment at Scale

Recent large-scale implementations of AI-enabled health services show both the potential and the challenges of integration. When clinicians work with AI copilots, diagnostic accuracy improves, consultations take less time, and unnecessary referrals decline.

The key question is: what level of accuracy makes AI safe to use independently, and when must it remain an assistive tool? A tiered approach is emerging, with straightforward cases suitable for AI-led care and more complex situations requiring human oversight.

Evaluation and Equity

Responsible integration requires rigorous evaluation at multiple levels: model performance, product usability, user experience, and health outcomes. An intentional equity lens is essential.

Without careful design, AI risks amplifying historical biases related to gender, income, or geography. With intentional approaches, however, AI has the potential to correct systemic inequities and expand access to care in underserved settings.

Financing for Sustainability

As traditional development assistance for health declines, sustainable AI deployment will depend on innovative financing and new partnership models. Public, private, and philanthropic actors must come together in blended arrangements that align incentives and share risks.

Beyond global corporations, local and regional private sectors will be central to ensuring solutions are affordable, trusted, and sustainable.

Shifting Ethical Expectations

Evidence increasingly shows that in certain domains, AI can outperform humans. This raises profound ethical and legal questions: at what point does it become negligent not to use AI? Companies developing AI for health must embed their work within strong safety systems, prioritize affordability, and ensure global relevance.

Infrastructure, Capacity, and Sustainability

Reliable connectivity, local computing power, and strong data governance frameworks are prerequisites for equitable AI deployment. User-centered design is critical, since cultural and contextual factors shape how communities prefer to engage with digital health tools. For long-term sustainability, AI must prove that it can deliver better outcomes at lower cost than existing systems in ways that are both ethical and equitable.

Key Questions Moving Forward

AI holds enormous potential to close gaps in capacity, affordability, and access in LMIC health systems. But the most pressing challenges are not technological — they are ethical, regulatory, and systemic. Getting governance, financing, and design right will determine whether AI disrupts health systems or strengthens them for the future.

  • How can regulators in LMICs keep pace with rapidly evolving AI while safeguarding equity?
  • What mechanisms will build trust among clinicians, patients, and policymakers?
  • How do we demonstrate economic sustainability in AI-enabled health systems?
  • When does withholding AI become unethical if it could save lives?

Written by Patricia Mechael.
