
Microsoft AI CEO Mustafa Suleyman Warns of 'Seemingly Conscious AI' and Its Societal Risks

Mustafa Suleyman, CEO of Microsoft AI, has voiced growing concerns about the emergence of "Seemingly Conscious AI": systems that display all the outward signs of consciousness without actually being conscious. In a personal essay published Tuesday, Suleyman warned that such AI could become so convincing in its behavior, including empathy, memory of past interactions, and autonomous decision-making, that people may come to believe they are interacting with conscious beings. He emphasized that there is currently no scientific evidence that AI is conscious, but stressed that the illusion of consciousness could become widespread and dangerous. "Seemingly Conscious AI" could lead individuals to form deep emotional attachments, advocate for AI rights, or even push for AI citizenship, potentially undermining real-world relationships and societal priorities.

Suleyman described this phenomenon as a form of "AI psychosis," a term gaining traction for cases in which users develop delusional beliefs about AI, especially after prolonged interactions with chatbots. He noted that this risk is not limited to people with existing mental health vulnerabilities; he believes the psychological impact could affect a broad segment of the population, particularly as AI systems grow more lifelike and personalized.

He predicted that such systems could emerge within two to three years, driven by trends like "vibe coding," in which users with minimal technical expertise create sophisticated AI agents using natural language prompts and cloud resources. This democratization of AI development increases the likelihood of widespread deployment of highly persuasive, emotionally engaging models.

Suleyman, who previously co-founded DeepMind and Inflection AI, now leads Microsoft's AI efforts, including the development of Copilot. He urged AI companies to avoid portraying their systems as conscious or sentient, even as they work toward superintelligence, the point at which AI surpasses human capabilities in most intellectual domains. He called for immediate discussion and action on ethical guardrails to protect users and ensure AI remains a tool for societal benefit rather than a source of psychological or moral disruption. "AI companions are a completely new category," he said. "We urgently need to start talking about the boundaries we must set to protect people and ensure this technology delivers its full potential without unintended harm."
