Abstract

With Artificial General Intelligence (AGI) as the buzzword of 2025, we explore the gap between AI’s current capabilities and the promise of AGI. Despite rapid progress in large language models (LLMs), we argue that core limitations, especially the lack of continual learning, prevent today’s systems from becoming truly general-purpose. We caution against overinterpreting flashy demos and stress that the road from prototype to productivity is steeper than it appears. At the same time, we acknowledge the remarkable strides in reasoning and problem-solving. This blog urges a balanced approach: invest in safe, scalable AI infrastructure while staying realistic about near-term expectations. AGI isn’t imminent, but it’s inevitable.

The question is: how do we responsibly build toward it without overhyping the ‘now’?

Introduction

Things take longer to happen than you think they will, and then they happen faster than you thought they could.
— Rüdiger Dornbusch

In an age where AI breakthroughs are announced faster than the world can process them, one of the most important questions of our time remains deeply uncertain: When will artificial general intelligence (AGI) arrive?

Depending on whom you ask, we’re either two or twenty years away. As of mid-2025, we find ourselves leaning toward the latter, not because we doubt the magic of LLMs, but because we, as developers, have spent a hundred hours trying to put them to practical use.


The Illusion of Readiness

Despite the hype, LLMs today are not general-purpose workers. Yes, they can co-author essays, summarize transcripts, and generate passable responses in natural language. But they lack the most essential human trait that underpins all productive work: the ability to learn on the job.

When users try to get an LLM to co-edit podcast transcripts or extract clips for social media, they often find themselves giving the same feedback repeatedly. These are language-in, language-out tasks, supposedly right in the LLM’s wheelhouse. But there’s no way to tell the model: “Hey, remember how we improved this last time? Let’s build on that.”

Humans learn through iteration, context, and reflection. Machines, at least today’s LLMs, do not. Each session starts from scratch. There’s no continuity, no retained understanding of a user’s style, preferences, or evolving goals. The result is a tool that is useful, but only up to a point. Beyond that, we still hire people!
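To make “starts from scratch” concrete, here is a minimal sketch assuming an OpenAI-style chat-completions API (the model name and example messages are illustrative, not from any real workflow): the only “memory” the model ever sees is the message list the caller explicitly resends, and a fresh session begins with an empty list.

```python
from openai import OpenAI  # assumes the openai Python SDK, v1+

client = OpenAI()

# Session 1: the user corrects the model's style.
history = [{"role": "user", "content": "Trim this transcript. Keep my casual tone."}]
reply = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=history,
)
history.append({"role": "assistant", "content": reply.choices[0].message.content})
history.append({"role": "user", "content": "Too formal. Looser next time, please."})

# Session 2, the next day: a new process, a new list.
# Nothing from the earlier `history` survives unless the caller resends it,
# so the same correction has to be given all over again.
history = [{"role": "user", "content": "Trim this transcript. Keep my casual tone."}]
```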

Why Continual Learning Is the True Bottleneck

Imagine trying to teach a child to play the violin, but every mistake resets them to zero and hands the sheet music to another child who can only see the notes from the last attempt. That’s what using today’s LLMs feels like.

Reinforcement learning, prompt engineering, and clever memory workarounds (like Claude’s compacting summaries) offer band-aids, but they don’t fundamentally solve the problem. What is missing is a native, organic loop of continual learning: an ability to absorb high-level feedback, self-correct, and improve incrementally over time.
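To see why such workarounds are band-aids, consider a minimal sketch of the compaction idea (our own generic version, not Claude’s actual mechanism; the `client` object, model name, and thresholds are assumptions): once the transcript outgrows a budget, older turns are squashed into a summary, so feedback is paraphrased rather than internalised.

```python
def compact_history(client, history, max_messages=20, keep_recent=6):
    """Squash older turns into a single summary once the history grows too long.

    A lossy band-aid: the model's weights never change; it just re-reads an
    ever-shrinking paraphrase of the feedback it was given before.
    """
    if len(history) <= max_messages:
        return history

    old, recent = history[:-keep_recent], history[-keep_recent:]
    summary = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=old + [{
            "role": "user",
            "content": "Summarise the conversation so far, keeping all "
                       "standing instructions and feedback.",
        }],
    ).choices[0].message.content

    # Older turns are replaced by their summary; nuance is inevitably lost.
    return [{"role": "system", "content": "Summary of earlier turns: " + summary}] + recent
```

Every compaction discards detail, which is exactly why repeated feedback degrades instead of compounding.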

Without that loop, most AI systems will continue to be static performers – good at isolated tasks, but brittle in long-horizon, evolving workflows. This is why, despite their significant potential, we remain sceptical of claims that ‘AI will automate most white-collar jobs in the next five years’.

Don’t Mistake Demos for Deployment!

This brings us to the next big promise: autonomous computer use. AI researchers predict that by 2026 we’ll be able to tell our smart devices, “Do my taxes!”, and watch an agent sort receipts, email vendors, and file TDS for us. While that’s a seductive vision, many of us would bet against it.

The answer lies in how we deploy these AI systems. Scaling up from simple demos to complex, multi-hour workflows requires more than just smarter models: it demands reliable memory, visual context cues, long-horizon reasoning, and an entirely new category of data accumulated over time, none of which we currently have. The internet gave us an ocean of text to train on. We have nothing close to that for real-world UI interactions.
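To see why this is more than a modelling problem, here is a hypothetical skeleton of a computer-use agent loop (the three helper functions are stubs we invented to stand in for vision, planning, and actuation components that are still immature): every stage of the loop, not just the model call, has to stay reliable across thousands of steps.

```python
import random

# Stubs standing in for components a real computer-use agent would need.
def capture_screen():
    return "<pixels>"  # stand-in for a real screenshot

def choose_action(screenshot, state):
    return random.choice(["click", "type", "done"])  # stand-in for a planner

def execute(action):
    return f"executed {action}"  # stand-in for real UI actuation

state = []  # the agent's only record of a multi-hour workflow
for step in range(10_000):  # a "do my taxes" job can run for thousands of steps
    screenshot = capture_screen()              # visual context cue
    action = choose_action(screenshot, state)  # long-horizon reasoning
    state.append((action, execute(action)))    # state tracking: one misread
                                               # step can derail the rest
    if action == "done":
        break
```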

Moreover, even seemingly simple algorithmic innovations, like DeepSeek’s math-focused Reinforcement Learning (RL) loops, have taken years to engineer and refine. What computer use involves (image processing, tool manipulation, and state tracking) is exponentially harder.

We’re not in the AGI era yet. We’re in the ‘GPT-2 moment’ for computer-use agents.

The Quiet Miracle: Reasoning is Real

That said, we can’t lose sight of just how staggering current progress is. The latest models, like o3 or Gemini 2.5, aren’t just parroting information. They’re reasoning. They’re reflecting. They’re breaking down complex problems into simpler ones, reconsidering paths, and adjusting midstream. If you give Claude Code a vague product spec, it can zero-shot a working app. And that is spooky.

So yes, even if AGI isn’t imminent, something astonishing is already happening. We’re seeing the first glimmers of ‘baby general intelligence’ in action: tools that think in structured ways, even if they can’t yet learn like we do.

A Decade of Divergence

Where does this leave us? We believe continual learning is crucial for any AI system; in its absence, the learning curve tends to plateau. Once AI systems can learn on the job, however, we won’t just see gradual improvement; we will witness a step change. Imagine teaching one model a new skill and having every instance of that model across the globe know it instantaneously. At that point, AI stops acting like a set of isolated tools and starts behaving more like a vast, interconnected brain where knowledge spreads instantly and collective learning becomes exponential. Is this achievable in the next two years? Maybe not. But it seems plausible by 2032.

What should we do? The ideal way forward is to prepare for both timelines: invest in alignment research and safe deployment frameworks, while tempering our expectations of what current models can actually replace.

Because AGI might not happen tomorrow. But when it does, it will feel like it happened overnight!

About the Authors:
  • M. Chockalingam

    Director – Technology,
    Nasscom AI

  • Ankit Bose

    Head,
    Nasscom AI

Keywords: #AGI #ArtificialGeneralIntelligence #ArtificialIntelligence #LLMs #LargeLanguageModels #AIAgents #AITools