Microsoft AI Chief Calls Artificial Superintelligence an "Anti-Goal" Amid Industry Debate
Microsoft’s AI chief, Mustafa Suleyman, is challenging the dominant narrative in Silicon Valley by calling artificial superintelligence an "anti-goal." Speaking on the "Silicon Valley Girl Podcast," Suleyman argued that the pursuit of AI systems that surpass human intelligence in every way is not only risky but fundamentally misaligned with a positive vision for the future.

"Artificial superintelligence doesn't feel like a positive vision of the future," Suleyman said. "It would be very hard to contain something like that or align it to our values." He emphasized that the goal should not be to create systems that outthink humans, but rather to build what he calls a "humanist superintelligence"—one that enhances human capabilities, supports human well-being, and operates in service to human interests.

Suleyman, who co-founded DeepMind before joining Microsoft, stressed that equating AI with consciousness or granting it moral status is a fundamental error. "These things don't suffer. They don't feel pain," he said. "They're just simulating high-quality conversation." He warned against anthropomorphizing AI, cautioning that doing so could lead to misguided expectations and dangerous assumptions about its nature and intentions.

His remarks come amid growing debate over the future of AI. While some industry leaders, including OpenAI CEO Sam Altman, continue to position artificial general intelligence (AGI) and superintelligence as central missions, Suleyman’s perspective offers a counterpoint rooted in caution and ethical design. Altman has stated that OpenAI is already looking beyond AGI to superintelligence, suggesting it could dramatically accelerate scientific progress and drive global prosperity. Similarly, Demis Hassabis, co-founder of Google DeepMind, has predicted that AGI could be achieved within the next five to ten years, envisioning systems that deeply understand and integrate into daily life.

However, not all experts share this optimism.
Meta’s chief AI scientist, Yann LeCun, has pushed back, arguing that we may still be decades away from AGI. He pointed out that many of the most complex problems in AI scale poorly with more data and compute, and that simply increasing resources does not guarantee smarter systems.

Suleyman’s stance reflects a growing movement within the AI community to prioritize safety, human values, and long-term societal impact over speed and scale. As the race to build ever more powerful AI intensifies, his call to treat superintelligence as an "anti-goal" serves as a reminder that the most important question may not be what AI can do—but what it should do.
