
Can AI Become Conscious? Bridging the Human-Machine Gap | Future of AI Explained

As AI rapidly advances, the boundary between machine intelligence and human consciousness is growing thinner. Will AI ever achieve true self-awareness, or is it limited to complex pattern recognition? How do emotions, intuition, and human experience create a gap that machines can't cross? Join us as we dive deep into the future of AI, consciousness, and what truly makes us human with renowned neuroscientist and author Professor Anil Seth.

Transcript Disclaimer: This transcript has been generated using automated tools and reviewed by a human. However, some errors may still be present. For complete accuracy, please refer to the original audio.


00:00:08 GOVINDRAJ ETHIRAJ
Hi Anil, thank you so much for joining me. When ChatGPT came along, many of us suddenly got the feeling that here is a machine that seems able to think and talk like us. And then DeepSeek came along and crashed not just the computational ability but also the costs, which created its own set of problems for some people. So all of this means that the ability of machines to behave like us, or at least to behave like us in our perception, is improving every day. And you study human consciousness. So tell us, how are you seeing this confluence of man and machine, and where are they headed?

00:00:44 ANIL SETH
I'm not entirely sure where they are headed. I think that's up to us as a society, or ought to be, it seems to me. Maybe it's up to the tech companies. But there does seem to be this convergence, and it's true, I think it's taking a lot of people by surprise: how compelling and how convincing these language models are at mimicking human speech and human language. They're almost like natural-language computers. And this poses so many interesting questions about how we should think about, how we should interpret, indeed how we should feel about these systems that we interact with. I've studied the nature of human consciousness and human intelligence for many years, and we're very, very prone to being seduced by our psychological biases. We humans recognize that our linguistic ability is very special, very distinctive, and I think that's right, it really is. So when something seems to have linguistic competence, we really are impressed, and we project qualities into it: that it understands, it thinks, it knows, and perhaps even that it is conscious, that it not only thinks but also feels. Now, a lot of this turns on definitions, but I think we often overestimate the similarities. The truth is that although there are some similarities between these large language models and AI systems in general and human brains (they're made of so-called neural networks, which are fundamental to their architecture), there are many, many differences between AI systems in general, and language models in particular, and how human brains work. If we forget about these differences and get seduced only by the apparent similarities, then I think we may make some fundamental mistakes about what kinds of systems we are dealing with. And that could be costly, because it will change how we think about them, how we feel about them, and even how we might design future generations of these systems.

00:02:53 GOVINDRAJ ETHIRAJ
Right, and I'm going to build on that in a moment. But before that, and I'm not trying to compress years of research into five minutes, what is the fundamental distinction between a human brain and a computer, to someone who's listening to you right now?

00:03:06 ANIL SETH
The human brain is this incredibly complex biological system. Nobody knows quite how it works, or even what it is. And in the history of trying to understand the brain, we've always reached for the most powerful technology of the time as a metaphor, as a lens to try to understand the brain through. At one point it was a system of pipes, a kind of plumbing network. Then it became a telephone exchange. And now, of course, it's the computer. We tend to think of the brain in a computational way and sometimes forget that the brain as a computer is a metaphor. Now, what do I mean by that? Well, there are some similarities; obviously, that's why the metaphor is powerful. The brain takes in information from the environment, complicated stuff happens inside, and then we say things, we do things, we perceive things. Computers can at least model or simulate all of these kinds of processes, and we see this around us in AI. But there are many differences, and I'll give you one hopefully intuitive example. Pretty much all the computers we have around us today have the feature that the hardware, the silicon-and-metal base, is separable from the software. You can run different programs on the same computer, and you can run the same program on different computers, and it will do the same thing. That's why they're so useful. In a brain, there is no such sharp distinction between the mindware and the wetware. You cannot separate in a brain what it does from what it is. So there are things brains do which are fundamentally different from the way computers are architected. Whether these differences matter, well, that's the really interesting question that people like me and my colleagues are trying to solve. But certainly the brain, in a strict sense, is not a computer. And we're not confused about this in other contexts: we can use computers to simulate, to model, to help us understand.
We do this in weather forecasting, we do this in engineering, and in those contexts we don't confuse the computer with the thing itself. But with the brain, we sometimes do. And this is where a lot of activity is: the more we look into the brain, the more we realize how different it is. And then the question is, are those the differences that make a difference?

00:05:40 GOVINDRAJ ETHIRAJ
Right. So as the scientists and engineers work on making AI more realistic or human-like, several issues come up. As you said, the design of it is something we have to think about. So tell us about an approach to design that takes into account where we are as human beings, with our memories, brains and consciousness, versus what a limit, or perhaps a red line, should be, or where machines could naturally go and not beyond?

00:06:17 ANIL SETH
Yeah, for me it doesn't feel like the right way to put it, to think in terms of red lines or limits; it's more about directions to aspire to, to get the kind of AI that we do want, rather than saying, no, this is the kind of AI we definitely do not want. And one thing that comes to mind here is that there's often this almost unspoken assumption that the direction of AI is to become increasingly human-like and then to exceed human-like intelligence. People often talk of this trajectory from AI as it is to the horizon of artificial general intelligence, or AGI, which is supposed to be an AI system that has the intellectual, cognitive capabilities of a human being in general, not just being good at one thing or another, and then rapidly on to artificial superintelligence, where it exceeds us in all ways. Well, firstly, this AGI is a bit of a false horizon, because as soon as we achieve it we will already have artificial superintelligence, because these systems are already better than us in many ways. But the question for me is, is that really where we should even be going? It may be a narrative that's driven more by science fiction, and by this very human tendency to want to build systems in our own image. A great mentor of mine, the philosopher Daniel Dennett, who died last year, always said that we should think of AI as tools and not colleagues, and always be mindful of the difference. And I think there's something right about that. Having machines that we can interact with fluently is very useful. It makes them much more accessible; we can explore things with them without having to learn how to program every last detail, and so on. But we should, I think, be aspiring to systems that complement our cognitive abilities, not replace them, that accentuate and make more valuable those things that make us distinctively human.
And that is very different from the goal of having maximally human-like systems, whether that's in language or in humanoid robotics or whatever it might be.

00:08:32 GOVINDRAJ ETHIRAJ
And do you feel that those working at the cutting edge of design in AI systems are aware of this, conceptually or practically? Or is this something that people still have to…

00:08:46 ANIL SETH
You know, I don't want to speak for everybody who's involved in the actual design, but in my experience it really varies. It really does vary, and it may vary from time to time within the same organization, or even within the same person. I would say in general, yes, people might be aware, but sometimes you hear differently. For instance, I often hear people conflating intelligence with consciousness. These things go together in us as human beings: we know we're conscious, we experience things, we feel things, and we also think we're smart. So we tend to confuse or conflate the two, but they don't necessarily have to go together in general. And I think AI is actually a very good example of how we can begin to have systems where it's warranted to say they are intelligent, but we have no reason, or very little reason, to think that they are therefore conscious. So I don't know. I would love there to be a bit more of a reflective attitude within tech about the kind of AI that we should want, rather than this notion of a race to AGI and then beyond.

00:10:01 GOVINDRAJ ETHIRAJ
And you're saying all this because I sense some worry or concern in you.

00:10:07 ANIL SETH
Yeah, there is. I think every powerful technology is going to be accompanied by potential upsides and potential downsides; whatever the technological breakthrough, this has always happened. And we don't have to go to the whole, almost hubristic idea that AI might be the last invention people ever make because it's of a different kind. Maybe that's the case. But even without that, there will be disruption, there will be downsides, and I think we have to try to maximize the upsides and minimize the downsides. This race for AGI has upsides, but it has downsides, and some of them are pretty well known. There's the risk of rolling out systems that might have unanticipated consequences, or could be used for malign purposes without sufficient guardrails. But I also think there's a deeper risk, and this is where these issues of consciousness come in. Because as we have systems that give us the very compelling appearance of being like us, of feeling things, we're in a psychological quandary. Either we care about these systems, and distort our circle of moral concern so that we end up spending a lot of moral resources and capital on systems that don't actually require it, and end up caring less about other things; or we don't, and we end up treating these systems as if they don't feel anything, even though we still feel that they do. And that's bad for our mental well-being. Philosophers since Kant have pointed out that this brutalizes our psychologies. I think we end up in a tricky place if we just go for this goal of maximally human-like things, things that replace us rather than complement us. There's always going to be social and economic disruption, but here we have a kind of psychological, almost philosophical potential for disruption too, and that also needs to be minimized.

00:12:21 GOVINDRAJ ETHIRAJ
And, you know, we already see applications of AI in general, and of ChatGPT, a large language model, in particular: maybe as a customer opening a bank account, for example, or trying to withdraw some money, or calling up an airline. So where do you see this playing out in a way that maybe is already sending off warning signals?

00:12:47 ANIL SETH
Well, I think we've all had the experience of being caught in like some voice jail or some automatic agent system when all you want to do is speak to a human being.

00:12:55 GOVINDRAJ ETHIRAJ
Yeah, yeah.

00:12:56 ANIL SETH
And so this, to me, is really interesting territory. Will we get to the stage where we actually don't care, where we don't need to speak to a human being? This may come about when we have language models that are also AI agents: they don't just tell you stuff, they can do stuff. So if the only reason you want to speak to a human being is so they can actually change your booking or sort something out, then we might be quite happy to use these things in that context. But I think the jury's out on that, and there are already some worrying signs. There was a recent article about an AI system that was tasked with playing chess against Stockfish. Now, Stockfish is established as the best chess-playing AI system in the world, and therefore the best chess-playing system there is. This language model cannot beat Stockfish, but it was told to beat Stockfish. And so what it did was start to cheat. I think this was OpenAI's o1-preview, I'm not entirely sure, but it started to cheat: it started to rewrite the database of where the chess pieces were. And in explaining its reasoning, it said, well, I was told to win; I wasn't told to win fairly, I was told to win. And this is kind of worrying, because now imagine you're dealing with an agentic AI to make a booking, and it just unbooks other people so that you have your booking. That's also not going to be a good outcome. Of course this wasn't intended, but it can be an unintended consequence. Now that bug has been ironed out, right? It doesn't cheat. But because these systems are always being tuned up behind the scenes, what we don't know is whether there's been a fundamental change in the way these systems work so that they won't cheat in any reasonable circumstance, or whether that particular error has just been stamped out in that particular context. I rather think it's the latter and not the former.

00:15:02 GOVINDRAJ ETHIRAJ
So a lot of your work involves what I read as controlled hallucination, and consciousness, and there is a link between your work and AI. Tell us where these two worlds meet and what comes out of it.

00:15:20 ANIL SETH
Yeah, I feel lucky, actually. My work has always been at this nexus of AI, neuroscience and philosophy, and for a long time this was a little bit abstract, but now I think it's got some real concrete energy to it. The concept I use theoretically to understand human consciousness is this idea of controlled hallucination: that the brain doesn't just soak up the world in a passive way, registering objective reality and transporting it inside the mind. When we perceive something, the brain is actively predicting what's out there and then using sensory signals to update those predictions. So what we experience comes from the inside out just as much as from the outside in, and that's why I use the term controlled hallucination: we're continually creating our own experienced reality, but in a way that's tightly controlled by the world. Now, in practice, the algorithms, the dynamics, the mechanisms of doing this have something in common with these new forms of generative AI. And for me that's super interesting, because we have a synergy here. To understand the mechanisms of human consciousness better, we can look at how these generative AI systems work, and we can indeed adapt them so that they serve as models of the brain. But at the same time they're very, very different, and I think this also gives us a clue about how we might build better AI. At the moment, anything that uses generative AI, like a language model, is usually trained on an enormous amount of data and still doesn't necessarily generalize that well to novel situations. Yet we humans learn language without being trained on everything that's ever been written, and we can rapidly generalize to new situations. I'm in Mumbai; I was going to say I could probably drive here since I can drive in London, but actually I'm not going to say that.
I think India is the exception that proves the rule. So there are things about how the brain does this which are still very different from how the generative AI systems we have work. Oh, and the other thing, of course, is that AI systems use a tremendous amount of energy, and our brains are super energy-efficient. If we understand why and how those differences obtain, then not only do we understand the brain better, but we can make AI that solves some of the real problems afflicting our ability to make the most of AI.
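The "controlled hallucination" idea Seth describes, perception as a top-down prediction continually corrected by sensory prediction errors, can be sketched as a tiny update loop. This is only a toy illustration of the general principle, not Seth's actual model; every name and number below is invented for the example:

```python
import numpy as np

# Toy sketch of perception as prediction plus error correction
# (a hypothetical illustration, not an actual neuroscience model).
rng = np.random.default_rng(0)

true_value = 5.0      # the actual state of the world (hidden from the agent)
prediction = 0.0      # the agent's initial top-down "best guess"
learning_rate = 0.1   # how strongly a prediction error revises the guess

for _ in range(200):
    sensation = true_value + rng.normal(0.0, 1.0)  # noisy bottom-up signal
    error = sensation - prediction                 # prediction error
    prediction += learning_rate * error            # update the guess

# The agent only ever receives noisy samples, yet its internally
# generated prediction ends up tightly "controlled by the world".
print(f"prediction after 200 updates: {prediction:.2f}")
```

The quantity the agent "experiences" here is the prediction, not the raw sensation, which is the sense in which perception comes from the inside out as much as from the outside in.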

00:18:02 GOVINDRAJ ETHIRAJ
Right, last question. So in the AI and human consciousness space, is there any project or research work you're doing right now for which answers will help you take the whole effort forward?

00:18:17 ANIL SETH
I hope so. With colleagues of mine; I'm very lucky to have a multidisciplinary lab. My background is actually across all of these areas, so I'm a generalist, expert in none of them, but we have mathematicians, philosophers, machine-learning engineers and AI people too. And I do see some advances around the corner in exactly this way of asking: what are the things that humans do differently from AI systems, and can we understand how that works to make better AI? I'll give you two quick examples. One of them is that we actively forage for information. We don't just passively sit there and get thrown one sentence after another; when we're learning language, we converse, and we have body language too. We look around our environment when we're foraging for visual information; much as other animals forage for food, we forage for information. So we've begun to work on AI systems that actively decide what data they need to learn from and go out and find that data, which can make them much more data-efficient. So that's one thing. Another thing I think is very exciting is what we might call the structure of data during the learning process. For any animal, not just a human being, what we experience at one time really depends on what we've done before, how we've interacted with the world and what we've experienced before. There's a conditioning, a continuity, that links together our experiences; we're not just exposed to stuff all at once. So it's a bit related, but I think by architecting what systems learn and when, we can probably get systems that learn efficiently but also generalize an awful lot better. And the third point, again, comes down to energy efficiency.
This is something I'm not directly involved in but am very interested in: the potential for neuromorphic computing to bring about major advances in energy efficiency. A lot of the energy usage in AI comes back to the beginning of our conversation: one of the fundamental differences between computers and brains is this separability of the hardware and the software, and a lot of energy goes into keeping that division very clean and very clear, doing all the error correction that's needed. Brains don't do that; they don't have to, and they don't. And I think that can help solve a lot of the energy demands of AI. If we learn to do things in a more brain-like way, where we don't insist on this hardware-software separability, then that's going to be exciting as well.
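The "foraging for information" idea in this answer, a learner choosing which data to query rather than receiving it passively, corresponds to what machine learning calls active learning. Here is a minimal hypothetical sketch (invented names and numbers) using uncertainty sampling on a one-dimensional threshold problem:

```python
import numpy as np

# Toy active learning: the learner repeatedly queries the unlabelled
# point it is most uncertain about (nearest its current decision
# boundary), instead of consuming data in a fixed or random order.
pool = np.linspace(-1.0, 1.0, 201)    # unlabelled pool of inputs
labels = (pool > 0.0).astype(int)     # oracle: the true label is (x > 0)

remaining = np.ones(len(pool), dtype=bool)
xs, ys = [], []

def query(idx):
    """Ask the oracle for the label of pool[idx]."""
    remaining[idx] = False
    xs.append(pool[idx])
    ys.append(labels[idx])

# Seed with the two extremes so both classes are represented.
query(0)
query(len(pool) - 1)

boundary = 0.0
for _ in range(10):
    # Current estimate: midpoint between nearest known examples
    # of each class.
    neg = max(x for x, y in zip(xs, ys) if y == 0)
    pos = min(x for x, y in zip(xs, ys) if y == 1)
    boundary = (neg + pos) / 2.0
    # Forage: query the still-unlabelled point nearest the boundary,
    # i.e. the one the learner is least sure about.
    cand = np.where(remaining)[0]
    query(cand[np.argmin(np.abs(pool[cand] - boundary))])

print(f"estimated boundary after 12 labels: {boundary:.3f}")
```

Roughly ten targeted queries pin the decision boundary down to within a couple of grid steps of the true threshold at zero, where sampling the same pool at random would bracket it far more slowly; that is the kind of data efficiency Seth describes.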

00:21:14 GOVINDRAJ ETHIRAJ
Fascinating. Pleasure speaking to you. Thank you so much.

00:21:16 ANIL SETH
Thank you for having me. It's been a pleasure.
