Abstract

The rapid evolution of artificial intelligence has placed technology leaders at a crucial juncture, facing decisions about AI infrastructure: whether to deploy workloads in the cloud, on-premises, or through a hybrid model. A recent Nasscom roundtable in Delhi, attended by 15 growing AI startups, discussed the multi-dimensional nature of this topic, revealing that it extends far beyond technical considerations. Factors such as trust, industry-specific requirements, cost metrics, and customer expectations play pivotal roles in shaping infrastructure decisions. The discussion emphasised a growing preference for hybrid approaches, which balance the flexibility of cloud solutions with the control and security of on-premises systems, while also highlighting the increasing relevance of edge AI for high-bandwidth applications like smart city implementations.

Economic realities and business priorities further complicate the decision-making process, as organizations struggle with the long-term cost implications of cloud dependency versus the inefficiencies of underutilized on-premises infrastructure. As AI compute demand escalates, the need for adaptive, scalable infrastructure strategies becomes essential. The insights from the roundtable suggest that successful organizations will align their infrastructure choices with both technical needs and broader business objectives, leveraging hybrid models and continuous optimization to navigate the evolving AI landscape effectively.

Introduction

In the rapidly evolving landscape of artificial intelligence, technology leaders face a critical infrastructure decision: should AI workloads run in the cloud, on-premises, or through a hybrid approach? This seemingly technical question has profound implications for business strategy, cost management, security, and customer experience.

A recent Nasscom roundtable in Delhi brought together 15 innovative startups working at the forefront of AI implementation. The discussion revealed that the cloud vs. on-premises decision is rarely made based on technical considerations alone—it's shaped by a complex interplay of business requirements, industry regulations, customer expectations, and organizational capabilities.


Beyond the Technology: What Drives Infrastructure Decisions?

Trust and Control: The Foundation of Choice

Several founders emphasized that trust remains a fundamental factor driving infrastructure decisions. Organizations—particularly in regulated industries like healthcare and government—often prefer on-premises solutions for their most sensitive data. As one participant noted, many brownfield hospitals have a strong psychological attachment to on-premises infrastructure, expressing sentiments like "Hamare servers hain" (these are our servers), indicating deep-seated concerns about data sovereignty and control.

This trust deficit in cloud environments isn't always rational; it's frequently emotional and cultural. Organizations value knowing their boundaries and being able to govern access themselves, especially when handling sensitive information like patient records or financial data.

Industry-Specific Considerations

The roundtable highlighted how infrastructure decisions vary dramatically across industries:

  • Healthcare institutions, particularly established players like Apollo and Vedanta, favor on-premises solutions due to patient data sensitivity and prior cybersecurity incidents. Managing in-house infrastructure brings additional costs, however, and organizations need to budget for them.
  • Smart city implementations face unique challenges with bandwidth and latency constraints. With hundreds of cameras generating massive data volumes, edge processing becomes necessary, with only processed insights being sent to the cloud.
  • Government agencies demonstrate strong resistance to cloud adoption, largely due to existing on-premises investments and strict data sovereignty requirements.
  • Early-stage startups often begin with cloud-based solutions for quick experimentation, especially during proof-of-concept phases where real data isn't yet in play.

The Economic Reality

While technical and security considerations are important, economic factors often drive the final decision. Several participants noted the surprising cost dynamics at play:

  • Cloud isn't always cheaper: After 2-3 years of sustained usage, certain organizations may discover that cloud costs exceed what they would have spent on owned infrastructure.
  • Utilization efficiency: On-premises deployments can suffer from low GPU/infrastructure utilization, making them cost-ineffective for sporadic workloads.
  • Rising AI compute demands: As AI models grow more sophisticated, compute requirements increase dramatically, thereby necessitating balanced strategies to manage high-volume processing.

One startup shared their journey from complete cloud dependency to a 60% on-device processing model, driven by escalating costs as code generation and search queries ran into millions of lines monthly.

Emerging Solutions: Beyond the Binary Choice

The Hybrid Advantage

The roundtable consensus pointed strongly toward hybrid approaches as the pragmatic middle ground. Several implementation patterns emerged:

  • Data on-premises, applications in the cloud: Keeping sensitive data local while leveraging cloud flexibility for applications.
  • Cloud-adjacent models: Building data centers physically close to cloud providers' infrastructure and connecting via high-speed, low-latency links to minimize transfer issues.
  • Containerization for IP protection: Using containerized deployments with IP-specific whitelisting to protect proprietary technology when deploying on client premises.
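The IP-specific whitelisting mentioned in the last pattern can be sketched in a few lines. This is a minimal illustration, assuming a check that might sit in front of a containerized service; the addresses and network ranges are hypothetical placeholders, not anything described at the roundtable.

```python
# Minimal sketch of IP allow-listing such as might gate access to a
# containerized deployment on client premises. Every address and
# network below is an invented placeholder.
import ipaddress

ALLOWED_NETWORKS = [
    ipaddress.ip_network("10.20.0.0/16"),    # hypothetical client LAN
    ipaddress.ip_network("203.0.113.0/24"),  # hypothetical vendor range
]

def is_allowed(client_ip: str) -> bool:
    """Permit a request only if its source IP falls inside an
    approved network range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)

print(is_allowed("10.20.5.9"))     # True: inside the client LAN
print(is_allowed("198.51.100.7"))  # False: unknown source, rejected
```

In practice the same check is usually enforced at the network or reverse-proxy layer rather than in application code, but the principle is the same: the container's endpoints respond only to pre-approved sources, shielding the proprietary model behind them.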

The Rise of Edge AI

Edge processing is gaining momentum, particularly for bandwidth-intensive applications like video analytics. By processing data locally and sending only relevant insights to the cloud, organizations can achieve better performance while managing costs effectively.

One participant working with smart city implementations detailed how processing video from hundreds of cameras necessitates edge deployment, with only actionable insights being transmitted to cloud-based LLMs for higher-level analysis.

Beyond Technical Architecture: The Business Strategy

Customer as Partner

Perhaps the most insightful theme from the roundtable was the emphasis on customer partnership. Successful AI implementations treat customers not just as clients but as strategic partners:

  • Business Value Discovery Workshops: Conducting thorough pre-implementation workshops to understand client needs and potential risks.
  • ROI Tracking: Defining and monitoring clear return-on-investment metrics with customers.
  • Shared Technology Benefits: Passing technology-driven savings back to clients to demonstrate commitment to their success.
  • Collaborative Roadmaps: Building clear implementation timelines that evolve with changing business needs.

Cost Optimization as a Journey

Several startups shared their cost reduction strategies, with one reporting a dramatic decrease in per-user costs from $20-25 to $5-7 through better local operations and improved efficiency. This demonstrates how infrastructure decisions should never be static but should evolve with changing usage patterns and business needs.

Key cost optimization approaches included:

  • Regular cost monitoring (quarterly or monthly)
  • Implementing observability tools to identify inefficiencies
  • Sprint-based work processes to maintain flexibility
  • Industry-specific optimizations for different use cases
  • Upskilling on DevOps and Infrastructure Management to make smarter choices

Looking Forward: Strategic Considerations for AI Infrastructure

The roundtable discussion suggested several forward-looking considerations for organizations navigating the cloud vs. on-premises decision:

  • Think Beyond Initial Implementation: Infrastructure decisions should account not just for current needs but anticipated growth. The scalability of different approaches becomes increasingly important as AI usage expands across the organization.
  • Consider the Full Ecosystem: The availability of skilled personnel to manage different infrastructure types is often overlooked. Organizations frequently migrate to cloud solutions due to skill gaps in managing on-premises infrastructure rather than technical necessity.
  • Balance Technical and Business Drivers: Technical considerations like latency, security, and scalability must be weighed against business factors like cost predictability, control, and compliance requirements.
  • Prepare for Evolution: Technology and market conditions will continue to evolve. Businesses need adaptive strategies that can scale over time, supporting long-term growth rather than just immediate needs.

Conclusion

The Nasscom roundtable provided a fascinating glimpse into how India's most innovative startups are navigating the complex decision landscape of AI infrastructure. Far from being a simple technical choice, the cloud vs. on-premises decision emerges as a strategic business consideration that touches every aspect of an organization's operations and customer relationships.

What became abundantly clear is that there is no universally correct answer. The optimal solution depends on an organization's specific needs, risk tolerance, existing investments, and strategic vision. However, the trend toward hybrid approaches—combining the strengths of both paradigms—offers a promising path forward for many.

As AI continues to transform business operations, the ability to make thoughtful, strategic infrastructure decisions will increasingly separate market leaders from followers. The organizations that succeed will be those that align their infrastructure strategy not just with their technical requirements but with their broader business vision.

About the Authors:
  • Praveen Mokkapati

    Director,
    Nasscom AI

  • Pragya Kashyap

    Intern,
    Nasscom AI

Keywords: #HybridComputing, #CloudComputing, #AIInnovation, #AIStartups, #AIComputing, #ArtificialIntelligence