AI is no longer a futuristic tool; it’s a present-day force shaping our world, from predicting floods to optimizing energy grids. In 2025, AI-driven systems are intertwined with the urgency of climate change – and with the opportunity to accelerate action. But they also bring a new set of risks and costs, alongside a much-welcomed opportunity to bridge the persistent digital divide.
This was the central theme of a recent interactive discussion convened by Climate Collective and Technology Salon during NY Climate Week, where a diverse group of practitioners, policymakers, and funders explored how we can ensure AI helps, rather than harms, our collective efforts to build a more resilient planet.
Anna Lerner Nesbitt, CEO of Climate Collective, led the discussion. The key takeaway? AI’s role in climate resilience isn’t predetermined.
Its impact—positive or negative—depends entirely on the processes, systems, and intentions that guide its development and deployment. We must actively steer AI toward equitable and sustainable outcomes.
The Core Challenges: Understanding the Landscape
The conversation began by acknowledging the complex nature of the climate crisis. We’re facing increasingly frequent disasters, fragile supply chains, and health shocks, which disproportionately affect vulnerable communities. The discussion highlighted a critical tension:
- High-income countries often focus on mitigation (reducing emissions).
- Low- and middle-income countries (LMICs) urgently need adaptation strategies.
This divide is exacerbated by a significant data gap.
For instance, precision farm data in the U.S. can be as fine-grained as 0.1 hectares, whereas in places like Bangladesh or Nepal, resolution can be coarser than 100 hectares. This creates a market failure: smallholder farmers can’t afford the high-resolution data needed to benefit from climate AI, leaving them more vulnerable to climate shocks.
Compounding these issues are governance gaps.
Existing climate frameworks, like Nationally Determined Contributions (NDCs), rarely include AI considerations. While national AI institutes are emerging, their focus is often on technology safety rather than sustainability or climate resilience, leaving a major blind spot.
Focusing on Systems and Intentions
The discussion emphasized that the true promise of AI lies not just in specific applications but in how we build the systems that support them. Instead of simply highlighting what AI can do—like improving early warning systems or optimizing agriculture—the focus was on the underlying principles required to make these applications work for everyone.
The group identified several key areas for intentional action:
1. Democratizing AI Literacy
To prevent a new form of digital and climate colonialism, it’s essential to equip local communities with the knowledge to safely and effectively use AI tools.
This means moving beyond top-down solutions and implementing “train-the-trainer” models that empower local NGOs and governments to be their own change agents. As of 2025, 42.1% of Kenyan internet users aged 16 and over had used ChatGPT in the preceding month. But for what, and at what cost?
2. The Handprint, Not Just the Footprint
The conversation distinguished between AI’s footprint (its own carbon emissions) and its handprint (its societal impact). While concerns about the energy needed to run AI models are valid – especially when 1 in 5 new gas plants under construction in the US is being built to power data centers – it’s crucial to weigh this against AI’s potential to drive large-scale, positive change.
For example, using AI to optimize renewable energy deployment can lead to a far greater reduction in emissions than the energy used to train the model.
3. Data Sovereignty
Many LMICs lack local data infrastructure, making them dependent on external tech companies. Vulnerable communities are often unaware of how much information they give away by using digital tools. To address this, we must invest in local data creation and ensure communities maintain ownership and control over their own information.
4. Ethical Governance
As AI governance frameworks are developed, they must be inclusive. The risk of regulatory capture, where actors with strong bias influence policies, is high. To counter this, we need to ensure the voices of communities most affected by climate change are central to the conversation, not just an afterthought.
Claiming a seat at the table to influence this discussion will require more ubiquitous AI skills and fluency – leading us back to #1.
A Call to Collective Action
AI’s role in climate resilience is a matter of choice. The Climate Collective and TechSalon discussion made it clear that we have a responsibility to guide this technology toward a positive future. For this to happen, practitioners, funders, and policymakers must take specific actions:
- Support AI literacy and skills building globally. This is not just about knowing how to use ChatGPT; it is equally about knowing when not to use it, and how to use it as efficiently as possible.
- Invest in equitable AI infrastructure to avoid lock-ins and data colonization.
- Embed AI governance in international climate frameworks such as NDCs, as well as in other international transparency and standards efforts; the Global Energy Monitor and the work of the Green Software Foundation were cited as examples.
The climate crisis is urgent. AI offers powerful tools we cannot afford to ignore, but it also carries significant risks we cannot afford to overlook. By prioritizing intentional design and inclusive human-first governance, we can ensure that AI’s handprints for good far outweigh its footprints, building a more resilient and just world for everyone.
Written by Anna Lerner Nesbitt