HyperAI

Geoffrey Hinton says most tech leaders ignore AI risks — but one stands out

6 days ago

Geoffrey Hinton, often called the “Godfather of AI” for his pioneering work on neural networks, has criticized tech leaders for a lack of candor about the risks of artificial intelligence. In a recent appearance on the One Decision podcast, Hinton said that while most executives at major tech companies recognize AI’s potential dangers, they fail to address them adequately. “Many of the people in big companies, I think, are downplaying the risk publicly,” he said, arguing that their actions do not match their awareness.

Hinton singled out Demis Hassabis, CEO of Google DeepMind, as an exception. “Demis Hassabis really does understand the risks and wants to do something about it,” Hinton remarked. Hassabis, who co-founded DeepMind in 2010 and sold it to Google in 2014 for $650 million, has long advocated for AI safety. As part of the acquisition, Google agreed to establish an AI ethics board, a commitment Hassabis has consistently pressed the company to honor. Despite his leadership role in Google’s AI ambitions, he has also called for global governance frameworks to regulate the technology, warning that agentic systems, meaning AI capable of autonomous decision-making, could become “out of control” in the long term.

Hinton, who spent more than a decade at Google before resigning to speak more freely about AI risks, criticized other industry figures as “oligarchs” who prioritize power and profit over safety. He specifically named Elon Musk and Mark Zuckerberg; neither responded to requests for comment. His remarks reflect growing concern within the AI community about the concentration of influence among a few corporate leaders. Hassabis’s stance has also drawn scrutiny, with recent protests outside DeepMind’s London headquarters demanding greater transparency in AI development.
Hinton, who left Google in 2023 to focus on safety research, has previously said the company encouraged him to remain involved in addressing risks, though he felt his concerns were not prioritized.

The exchange highlights tensions in the AI sector as companies race to build more advanced systems. While DeepMind and other labs have taken steps to address ethical challenges, critics argue that broader industry efforts remain insufficient. Hinton’s comments underscore the need for accountability as the technology’s societal impact grows more complex. His critique extends to the broader culture of tech leadership, where, he suggests, public statements often contradict private actions. By calling out figures like Musk and Zuckerberg, Hinton signals a lack of trust in their willingness to manage AI’s risks responsibly. Hassabis’s approach, which emphasizes collaboration with academics and advocates for regulatory oversight, stands out as a counterpoint.

The discussion comes amid rising public and regulatory pressure to ensure AI development is both innovative and safe. As AI systems grow more powerful, balancing progress against responsibility remains a critical challenge for the industry. Hinton’s remarks serve as a reminder of the stakes involved, urging leaders to confront risks rather than downplay them.