Panel on AI for Social Good: Innovation for Impact (in collaboration with Wadhwani AI)
6th September 2024
Hyderabad, India
Introduction

The nasscom Responsible AI Hub, in collaboration with Wadhwani AI, organised a multistakeholder panel discussion on AI for Social Good: Innovation for Impact at the Telangana Global AI Summit on 6th September 2024.

The panel brought together leading experts to examine problem identification, impact assessment and measurement, and other challenges in designing and implementing AI-based solutions for large-scale social impact.

The discussion was moderated by Mr. Ankit Bose, Head of AI, nasscom.

The list of panelists is provided in the Annexure.

Key takeaways from the discussion include:
AI for social impact requires careful design
Panelists stressed the importance of ensuring that social impact AI solutions are rooted in local contexts, problems, and needs.

Panelists underscored the significance of inclusive AI development, emphasising the need for stakeholder participation throughout the AI lifecycle. This approach ensures that AI solutions are tailored to local contexts and address real-world issues while considering emerging risks. They pointed out that innovators who have a deep understanding of local needs and contexts are ideally positioned to design effective AI solutions and promote their widespread adoption.

Panelists emphasised the importance of well-thought-out accuracy metrics for AI solutions aimed at social impact, particularly in high-risk areas like healthcare.
AI for social impact faces significant challenges
Panelists noted that cost is a major obstacle to AI adoption in India, putting marginalised communities at a disadvantage.

Panelists pointed out that weak incentives for data collectors could lower dataset quality, hurting the effectiveness of AI models.

Panelists highlighted the critical need to educate end-users about the capabilities and limitations of the AI solutions they use.
AI for social impact demands rigorous and continuous evaluation
Panelists suggested that AI-driven social impact initiatives need iterative design and continuous evaluation, supported by clear metrics for measuring impact and behavioural change. They also acknowledged that achieving meaningful social change requires a long-term commitment.

Panelists noted that evaluating an AI solution requires a two-pronged approach, concentrating on both model performance and real-world impact analysis after deployment. They also highlighted that effective evaluation requires interdisciplinary collaboration involving social scientists, monitoring and evaluation experts, and other relevant stakeholders.
AI for social impact should be built on responsible and inclusive governance
Panelists recommended a participatory approach to AI governance, emphasising the need to engage affected stakeholders and users in deliberations around safe and responsible AI adoption. They stressed the importance of prioritising their interests in such deliberations, particularly when designing regulations for the auditing and explainability of AI systems.

Panelists discussed measures for responsible AI development, underlining the need for stress-testing, evaluation, supervised user experiments, and randomised control trials. They stressed the importance of data governance policies and the inclusion of fairness, accountability, and transparency (FAT) principles throughout the AI lifecycle, along with continuous monitoring and human oversight of AI solutions.
ANNEXURE:
List of Panelists
  • Amrita Mahale, Director, Product Innovation, ARMMAN
  • Gaurav Godhwani, Founder, Civic Data Lab
  • Kalika Bali, Principal Researcher, Microsoft Research India
  • Makarand Tapaswi, Senior ML Scientist, Wadhwani AI
  • Sachin Malhan, Co-founder, Agami