AI for social impact requires careful design
Panelists stressed the importance of ensuring that social impact AI solutions are rooted in local contexts, problems, and needs.
Panelists underscored the significance of inclusive AI development, emphasising the need for stakeholder participation throughout the AI lifecycle. This approach ensures that AI solutions are tailored to local contexts and address real-world issues while considering emerging risks. They pointed out that innovators who have a deep understanding of local needs and contexts are ideally positioned to design effective AI solutions and promote their widespread adoption.
Panelists emphasised the importance of well-thought-out accuracy metrics for AI solutions aimed at social impact, particularly in high-risk areas like healthcare.
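The point about accuracy metrics can be made concrete with a small illustrative sketch (not drawn from the panel; the numbers are hypothetical): when a condition is rare, a model that misses every real case can still report high accuracy, which is why healthcare evaluations also need metrics such as sensitivity (recall).

```python
# Illustrative sketch: why a single accuracy number can mislead in
# healthcare AI when the condition being screened for is rare.

def confusion_counts(y_true, y_pred):
    """Count true/false positives and negatives for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def accuracy(y_true, y_pred):
    """Fraction of all predictions that are correct."""
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    return (tp + tn) / len(y_true)

def sensitivity(y_true, y_pred):
    """Fraction of true positive cases the model actually catches."""
    tp, _, _, fn = confusion_counts(y_true, y_pred)
    return tp / (tp + fn) if (tp + fn) else 0.0

# Hypothetical screening cohort: 95 healthy people, 5 with the condition.
y_true = [0] * 95 + [1] * 5
# A model that simply predicts "healthy" for everyone:
y_pred = [0] * 100

print(accuracy(y_true, y_pred))     # 0.95 -- looks strong
print(sensitivity(y_true, y_pred))  # 0.0  -- misses every real case
```

The gap between the two numbers is exactly the kind of failure that a well-chosen metric suite for a high-risk domain is meant to expose.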
AI for social impact faces persistent challenges
Panelists noted that cost is a major obstacle to AI adoption in India, putting marginalised communities at a disadvantage.
Panelists pointed out that weak incentives for data collectors could lower dataset quality, hurting the effectiveness of AI models.
Panelists echoed the critical need to educate end-users about the capabilities and limitations of the AI solutions they use.
AI for social impact demands rigorous and continuous evaluation
Panelists suggested that AI-driven social impact initiatives need iterative design and continuous evaluation, supported by clear metrics for measuring impact and behavioural change. They also acknowledged that achieving meaningful social change requires a long-term commitment.
Panelists noted that evaluating an AI solution requires a two-pronged approach: assessing model performance, and analysing real-world impact after deployment. They also highlighted that effective evaluation requires interdisciplinary collaboration involving social scientists, monitoring and evaluation experts, and other relevant stakeholders.
AI for social impact should be built on responsible and inclusive governance
Panelists recommended a participatory approach to AI governance, emphasising the need to engage affected stakeholders and users in deliberations around safe and responsible AI adoption. They stressed the importance of prioritising the interests of those stakeholders and users in such deliberations, particularly when designing regulations for the auditing and explainability of AI systems.
Panelists discussed measures for responsible AI development, underlining the need for stress-testing, evaluation, supervised user experiments, and randomised control trials. They stressed the importance of data governance policies and the inclusion of fairness, accountability, and transparency (FAT) principles throughout the AI lifecycle, along with continuous monitoring and human oversight of AI solutions.
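As a minimal sketch of what one fairness check within such an audit could look like (an assumption for illustration, not a method the panel prescribed), the following compares the rate of positive model decisions across groups, a quantity often called the demographic parity gap:

```python
# Minimal, illustrative fairness check: demographic parity, i.e. comparing
# the rate of positive model decisions across groups. The data and group
# labels below are hypothetical.

from collections import defaultdict

def positive_rate_by_group(groups, decisions):
    """Share of positive (1) decisions within each group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for g, d in zip(groups, decisions):
        totals[g] += 1
        positives[g] += d
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(groups, decisions):
    """Largest difference in positive-decision rates between any two groups."""
    rates = positive_rate_by_group(groups, decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical approval decisions for members of two groups:
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
decisions = [ 1,   1,   1,   0,   1,   0,   0,   0 ]

print(positive_rate_by_group(groups, decisions))  # {'A': 0.75, 'B': 0.25}
print(demographic_parity_gap(groups, decisions))  # 0.5
```

A check like this is only one slice of the FAT principles the panel referred to; a full audit would pair such statistics with the stress-testing, user experiments, and continuous human oversight described above.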