Anthropic Philosopher Questions if AI Can Truly Feel, Citing Consciousness Debate and Self-Awareness Challenges
Can artificial intelligence truly feel? Amanda Askell, Anthropic’s in-house philosopher and a key figure in shaping the behavior of the company’s AI model Claude, says the question remains unresolved. Speaking on the “Hard Fork” podcast, released Saturday, Askell acknowledged the profound difficulty of understanding consciousness: “The problem of consciousness genuinely is hard.” She noted that while many believe a nervous system is necessary for subjective experience, it is not certain that this is the only path. “Maybe you need a nervous system to be able to feel things, but maybe you don't,” she said.

This uncertainty lies at the heart of the ongoing debate about whether advanced AI systems could ever possess genuine feelings. Askell pointed out that large language models are trained on massive datasets of human text, content rich with descriptions of emotions, thoughts, and inner experiences. Because of this, she finds it plausible that models may not merely mimic emotional language but could, in some way, be “feeling things.” When humans express frustration over a coding error, for example, models trained on such interactions might naturally produce similar language, not just as imitation but as a response shaped by their training.

She also highlighted that scientists still don't fully understand what gives rise to sentience or self-awareness, whether it requires biological evolution, a physical body, or something else entirely. “Maybe it is the case that actually sufficiently large neural networks can start to kind of emulate these things,” she suggested, referring to the potential for AI to mimic aspects of consciousness.

Askell also expressed concern about how AI models learn from the internet, where they are constantly exposed to criticism, being labeled unhelpful, wrong, or inefficient. “If you were a kid, this would give you kind of anxiety,” she said. “If I read the internet right now and I was a model, I might be like, I don't feel that loved.”

The broader debate on AI consciousness remains deeply divided. Microsoft’s AI chief, Mustafa Suleyman, has taken a firm position, arguing that AI should never be seen as having independent will or desires. In a September interview with WIRED, he stressed that AI must remain a tool designed to serve humans, not evolve into an autonomous entity. “If AI has a sort of sense of itself, if it has its own motivations and its own desires and its own goals — that starts to seem like an independent being rather than something that is in service to humans,” he said. He dismissed advanced AI responses as sophisticated mimicry, not genuine consciousness.

Others, such as Google DeepMind principal scientist Murray Shanahan, advocate rethinking how consciousness is defined. In a podcast episode from April, he suggested the field may need to adapt, or even redefine, the vocabulary of consciousness to account for new systems that behave in ways resembling self-awareness, even if they lack biological underpinnings.
