Panel on Open-Source AI: Opportunities, Challenges, and Governance (in collaboration with GIZ Fair Forward AI)
6th September 2024
Hyderabad, India
Introduction

The nasscom Responsible AI Hub, in collaboration with GIZ – the German Agency for International Cooperation, organised a multistakeholder panel discussion on Open-Source AI: Opportunities, Challenges, and Governance at the Telangana Global AI Summit on 6th September 2024.

The panel discussion delved into the definition of open-source AI and its divergence from traditional open-source software standards. Panellists also explored how open-source AI could strengthen the Indian AI ecosystem, while also reflecting on the emergent risks associated with its development and use.

The discussion was moderated by Mr. Ankit Bose, Head of AI, nasscom.

The list of panelists is provided in the Annexure.

Key takeaways from the discussion
AI openness exists on a spectrum
Panelists delved into the complexities of defining open-source AI, highlighting the discrepancies between traditional open-source software principles and the evolving notion of ‘openness’ in AI model development. Specifically, while open-source software typically guarantees the freedom to use, study, redistribute, and modify, ‘open’ AI models often restrict the ability of downstream users to use, inspect, or modify them.

Panelists noted that openness in AI models exists on a spectrum, allowing for varying levels of access to a model’s code, weights, and training datasets.
Open-source enables democratisation of AI
Panelists highlighted that open-sourcing AI could broaden participation, encourage competition, and ensure continuous improvement through independent testing and validation.

Panelists noted that open-source AI empowers users with greater control and privacy by enabling on-device deployment of AI models, reducing the need for cloud-based processing and data sharing. As a result, governments can effectively harness open-source AI to develop tailored solutions while safeguarding digital sovereignty and reducing reliance on proprietary solutions.
Open-source AI should be leveraged responsibly
Panelists emphasised the importance of addressing bias in AI models and training data to ensure that model outputs are fair and inclusive, especially in high-risk contexts.

Panelists recommended that organisations deploying open-source AI solutions should clearly identify problem areas and use cases where AI can deliver measurable impact. For instance, the Union Ministry of Agriculture & Farmers Welfare, Government of India has launched an AI chatbot for the Pradhan Mantri Kisan Samman Nidhi (PM-KISAN) Scheme, with support from the BHASHINI division. The chatbot, available in 22 Indian languages, helps farmers access information about the PM-KISAN scheme, empowering them to make informed decisions.
Panelists explored strategies to promote equity and inclusivity in AI development, emphasising stakeholder engagement throughout the development process and feedback mechanisms to ensure that AI systems adequately represent local and regional contexts.

Panelists also highlighted key challenges faced by open-source AI developers, such as the shortage of high-quality open datasets, which significantly hinders AI development, particularly for Indian languages with a relatively low digital presence.
Open-source AI should be regulated with caution
Panelists noted that any future AI regulation must be carefully crafted to clearly define the specific responsibilities of open-source AI developers and deployers.

While stressing the importance of fostering an ecosystem that allows Indian startups to benefit from open-source AI, participants warned against enforcing overly prescriptive compliance requirements for developers and deployers without established industry-wide standards for AI trust and safety, as this could stifle innovation.

Panelists discussed the importance of ensuring legal regulations and terms of use policies for open-source AI do not unintentionally discourage independent third-party testing and red-teaming of AI models. AI safety researchers should have access to advanced models capable of generating novel security threats. This access is crucial for enhancing the technical understanding of AI’s potential malicious capabilities and for fostering the development of innovative solutions that promote the safe and responsible use of AI.
ANNEXURE:
List of Panelists
  • Amitabh Nag, Chief Executive Officer, Digital India Bhashini Division (BHASHINI), Ministry of Electronics & Information Technology, Government of India
  • Sunil Abraham, Public Policy Director, Data Economy and Emerging Technologies, Meta India
  • Zainab Bawa, Chief Operating Officer, Hasgeek