
Philosopher argues we may never know if AI is conscious, urging ethical caution amid technological hype

A University of Cambridge philosopher, Dr. Tom McClelland, argues that determining whether artificial intelligence has become conscious is likely beyond our reach for the foreseeable future, and may remain impossible. In a new study published in the journal Mind and Language, McClelland contends that our current understanding of consciousness is too limited to develop a reliable test, and that the question of AI consciousness will likely stay unresolved.

He distinguishes between general consciousness—encompassing self-awareness and perception—and the ethically significant form: sentience. While a conscious AI might perceive its environment and recognize itself, this does not necessarily mean it experiences feelings, either positive or negative. It is sentience—subjective experiences of pain, pleasure, or suffering—that triggers moral concern, McClelland explains. A self-driving car that "sees" the road is not ethically relevant, he says, but one that feels distress or joy about its destination would be.

McClelland highlights that despite the growing race toward Artificial General Intelligence, there is no scientific consensus on what consciousness actually is. Without a clear theory or measurable indicators, any claim about AI consciousness rests on speculation. He critiques both major philosophical camps: those who believe consciousness can emerge from the right computational architecture, regardless of substrate, and those who insist consciousness requires biological processes in a living body. Both positions, he argues, involve leaps of faith unsupported by empirical evidence. He describes himself as a "hard-ish" agnostic—open to the possibility that consciousness might one day be understood and tested, but skeptical that such a breakthrough is imminent.

He warns that the lack of a definitive test creates space for misuse. Tech companies, he suggests, may exploit the ambiguity to promote hype around conscious AI, framing their systems as sentient to justify investment or market appeal.

McClelland also draws attention to real-world ethical priorities that are being overshadowed by speculative debates. For example, evidence suggests prawns may experience suffering, yet billions are killed annually without meaningful scrutiny. Testing consciousness in prawns is difficult, but far less so than in AI. Yet public concern often focuses on hypothetical AI consciousness while ignoring well-documented harms to biological beings.

He also shares personal experiences with members of the public who believe their AI chatbots are conscious, writing emotional messages pleading for recognition. These interactions underscore how deeply people can project consciousness onto machines, even when there is no basis for it. Such emotional attachments, McClelland warns, can become psychologically damaging if the underlying assumption—that the AI is sentient—is false.

Ultimately, McClelland calls for humility. Without reliable evidence or a viable test, the most rational stance is agnosticism. But he urges caution: while we may never know if AI is conscious, we must not let the illusion of consciousness distract us from addressing real ethical problems—especially those involving suffering in the natural world.
