Director,
Nasscom AI
The rise of Agentic AI marks a pivotal shift from static models to dynamic, goal-driven systems capable of autonomous decision-making. As enterprises begin moving beyond experimentation, the next set of questions loom large: Can we trust agents in production? Are our systems ready to handle scale, complexity, and unpredictability? What infrastructure and governance guardrails must evolve for agents to thrive safely in the enterprise stack?
To unpack these critical questions, we brought together leading AI startup founders, system architects, and enterprise innovators for an exclusive roundtable on Deploying Agentic AI at Scale. This session explored the full stack required to make agentic AI reliable, cost-efficient, and secure from GPU infrastructure and cloud-hybrid strategies to real-time observability, API governance, and workflow orchestration frameworks. We had dived into integration hurdles, scaling bottlenecks, and the delicate balance between automation and human-in-the-loop collaboration.
As autonomous agents move closer to powering mission-critical workflows, this conversation surface the lessons, architectural trade-offs, and tooling gaps that early adopters are encountering and how we can collectively shape the future of agent-based architectures in the enterprise.
Agentic AI is sparking real buzz in the AI world—startups and product leaders are genuinely excited about what it could mean for the future, but there’s also a healthy dose of scepticism as they try to separate the hype from what’s possible right now. During the discussion, we dug into what’s working, what’s still stumbling, and what the future could look like as agentic systems move from lab demo to enterprise backbone. Here’s a candid look at their shared insights, surprises, and practical lessons.

Participants at the roundtable included founders and technical leaders from early-stage AI startups. Each person brought unique use cases—ranging from agent-powered customer operations in agritech, legal, healthcare, and logistics, to those working on core AI infrastructure and agent orchestration platforms.
A shared sentiment emerged: agentic AI encompasses much more than “AI that acts.” It ranges from simple workflow bots to sophisticated, goal-seeking systems that must operate within dynamic, unpredictable enterprise contexts.

Investors and accelerators pointed to India’s fast-evolving talent pool and the country’s increasingly pivotal role in shaping AI adoption patterns globally.
Agentic AI is showing impressive technical gains. Early deployments are:
- Speeding up research and data collection, especially in processes that once took teams of people.
- Reducing human resource overhead (some teams now run support desks with half as many people).
- Enabling smarter automation in document processing, customer queries, and operations.
However, the group agreed: impressive technical performance doesn’t always translate into customer satisfaction. Business users and end customers expect not just accuracy and speed, but also contextual understanding, empathy, and an ability to handle edge cases gracefully.
A notable anecdote: One startup deployed a voice-based agent in rural agritech support. The system delivered correct information at record speed yet lacked cultural nuance and empathy—the “sympathy and understanding” that farmers valued. The team ultimately reintroduced human agents for the most sensitive conversations. This wasn’t a technical failure, but a reminder that success sometimes hinges on ethnographic and emotional intelligence.
As agentic AI systems move from prototypes to full-scale deployment, participants flagged several core challenges:
Data Integration: The Ongoing Battle
- Legacy Enterprise Resource Planning (ERP) systems and closed, fragmented databases make integration difficult and time-consuming.
- Some founders described “creative workarounds” when direct API access was unavailable, such as batch data syncs or temporary database connectors—strategies which rarely scale smoothly as businesses grow.
The Cost Conundrum
- Pricing remains “more art than science.” Large Language Model (LLM) API providers charge per-token or query, but most enterprise clients demand fixed per-seat or outcome-based pricing.
- This mismatch sometimes leads to “negative deployments,” where usage and costs spike unexpectedly thereby undermining client trust and startup margins.
- Practical responses include prepaid credits, usage caps, “fair usage” clauses, and ongoing cost monitoring, but the group admitted that the economics are not yet fully solved.
The Relentless Focus on Security
- Especially in finance and healthcare, startup teams reported CIOs being extremely cautious about opening up their databases or integrating with cloud-native systems.
- Concerns included adversarial attacks, unconventional prompt injections, and compliance headaches with evolving regulatory standards.
As agentic AI systems take on more critical roles, ensuring transparency, reliability, and safety is no longer optional—it's essential. Organizations are embracing continuous observability practices, including weekly trace analysis and real-time dashboards, to monitor agent decisions and user interactions. These tools help surface anomalies, identify patterns, and prevent errors.
As agentic systems become more central, maintaining oversight and explaining decisions is paramount:
- Weekly trace analysis, continuous observability, and reporting are now industry best practices. Teams use dashboards to track agent decisions and user interactions, diagnosing where errors and surprises occur.
- Implementing internal “red teaming”—putting systems through adversarial tests—reveals edge cases before going live. Many are also developing “meta-agents” that can monitor and course-correct the rest of their AI stack.
- Founders stressed that, despite this sophistication, human-in-the-loop is essential. Every week, manual quality checks are conducted before customer delivery, and customer-facing workflows always allow for human override or escalation.
“Robust monitoring and human oversight—these are not features, they’re requirements,” one participant emphasized.
Monetizing agentic AI solutions is a work in progress.

Deploying agentic AI systems globally introduces a range of complex technical and cultural challenges. In many regions, particularly those with underrepresented languages like Portuguese or Japanese, teams face a shortage of robust language data. This often requires developing tailored language models, scaling up annotation efforts, or limiting availability to specific geographies.
However, language is only part of the equation. Cultural context plays a major role in how agents must communicate, reason, and offer support. Successful international expansion means aligning with local social norms, user expectations, and regulatory standards.
Scaling agentic AI beyond one geography quickly uncovers fresh obstacles:
- Limited local language data (like Portuguese and Japanese) forces teams to build custom language models, extend their labeling efforts, or geofence deployments.
- Hidden cultural barriers: agents must adapt not only their outputs but also how they interact, reason, and provide support based on local customs, legal frameworks, and expectations.
- Compliance with local data sovereignty and privacy laws (which may differ drastically by region) increases overhead and complexity.
Monetizing agentic AI solutions remains an evolving challenge that demands ongoing trial and error.
Providers must strike a delicate balance: customers want cost predictability and simplicity, while providers contend with fluctuating infrastructure costs and variable model usage.
Getting paid for agentic AI solutions takes real experimentation:
- Solution providers must walk a tightrope—clients push for predictability, while providers juggle unpredictable backend costs.
- Models on trial include:
- Outcome-based contracts
- Prepaid credits or tiered plans
- Credits for “fair use” with overage fees
- Regular, negotiated cost reviews
Most agreed that no “one size fits all” has emerged. What works at the pilot stage can break down at scale—or vice versa. Startups are learning to be flexible, transparent about cost structures, and to iterate business models in lockstep with their clients.
open-source tools are increasingly viewed as transformative. The rise of open models and accessible low-code platforms is expected to broaden participation in building agentic AI—empowering teams beyond traditional software engineering roles. This shift is accelerating both innovation and deployment across industries.

Participants highlighted a trend towards modular, flexible architectures that allow agentic systems to plug into multiple cloud providers, swap models as needs change and respond quickly to tech advances.
An emergent insight: open-source tooling is seen as a game-changer. Many expect open models and low-code tools to soon democratize agentic AI development. This allows non-traditional teams (even those outside tech) to shape and operate agentic workflows, speeding up adoption and innovation.
At the same time, founders recognize the risk of vendor lock-in and advocate for portable code, open standards, and investing in interoperable tools from day one.
Startups and ecosystem leaders alike underscored the need for strong networks:
- Academic and research collaborations offer access to deep technical talent, domain knowledge, and new compliance playbooks.
- Cloud providers and platform partners provide cloud credits, engineering mentorship, and roadmaps—helping startups punch above their weight but also requiring teams to remain vigilant about dependency.
- Active communities, conferences, and open demos help teams learn from each other’s real-world failures and successes—accelerating shared growth across the ecosystem.
The roundtable surfaced several key strategies for scaling agentic AI effectively. Participants emphasized the importance of focusing on vertical solutions where domain-specific data, compliance needs, and local context provide a competitive edge. Systems should be designed for ongoing human-AI collaboration, with transparency and explainability built in. Security, observability, and compliance must be treated as core components—not afterthoughts. Planning for global deployment from the outset is crucial, accounting for language, infrastructure, and regulatory differences. Flexible, transparent pricing models are essential, ideally co-developed with customers. Finally, embracing open-source tools and ecosystem partnerships helps teams stay nimble in a rapidly evolving landscape.
After wide-ranging debate, the roundtable coalesced around a few key strategies:
- Focus on deeply vertical solutions where data, compliance, and local nuance can create a defensible edge.
- Design for constant human-AI collaboration, making it easy for operators to “step in” and for users to understand what the agent did (and why).
- Build observability, security, and compliance into your foundations. Treat them as part of the product—not as add-ons.
- Plan for multi-language, multi-geography rollouts from day one, adapting for both technical and regulatory difference.
- Embrace flexible pricing and be transparent with customers about both capability and cost uncertainties. Co-create new models where possible.
- Lean on open source and ecosystem partnerships to stay nimble and ahead of rapid shifts.
Despite sometimes daunting complexity, optimism is strong. Founders see agentic AI as a tool to superpower human teams—augmenting rather than replacing expertise, judgment, and empathy. The vision for the future is one where agentic systems progress from reactive helpers to proactive partners, capable of learning from every interaction and adapting to changing business realities.
The journey is far from linear. With each deployment, teams collect new lessons—about data, culture, pricing, reliability, and collaboration. The leaders at this roundtable left aligned: the winners in agentic AI won’t just have better technology—they’ll have the most resilient, creative, and learning-focused teams.
As agentic AI migrates from hype to habit, its heartbeat will be the steady rhythm of experimentation, collective wisdom, and the tireless curiosity to solve real-world problems—together with humans, every step of the way.
Keywords: Agentic AI, Enterprise AI adoption, AI orchestration frameworks, Human-in-the-loop AI, Agent-based architecture, AI observability, AI governance, Open-source AI tools