Anthropic CEO Dario Amodei fires back at Nvidia’s Jensen Huang, calling his criticism a 'bad faith distortion'
A public dispute has erupted between two leading figures in the AI industry as Anthropic CEO Dario Amodei pushed back against criticisms from Nvidia CEO Jensen Huang. The exchange highlights growing tensions over the future of AI development, safety, and competition.

In June, Huang stated at the VivaTech conference that he disagreed with “almost everything” Amodei had said, specifically criticizing what he characterized as Amodei’s view that AI is so powerful and dangerous that only a select few—like Anthropic—should be allowed to build it. Huang suggested that Amodei’s stance—that AI’s transformative power would lead to widespread job losses and therefore justified strict control—implied a desire to control the entire AI industry.

Amodei responded forcefully on the “Big Technology” podcast, calling Huang’s characterization a “bad faith distortion” and “the most outrageous lie I’ve ever heard.” He emphasized that he has never claimed Anthropic should be the sole developer of AI. Instead, he described Anthropic’s mission as fostering a “race to the top,” in which the safest, most ethical AI systems set the standard for the entire industry. In contrast, a “race to the bottom” occurs when companies prioritize speed and feature rollout over safety, producing riskier, less reliable AI systems—a scenario Amodei described as one where “everybody loses.” He argued that his company’s approach encourages others to follow suit, creating a more secure and trustworthy AI ecosystem.

Amodei pointed to specific actions by Anthropic as evidence of this philosophy: the company’s responsible scaling policies, which other firms have since adopted, and its open release of interpretability research to promote transparency and public scrutiny.

Nvidia pushed back through a spokesperson, stating that the company supports “safe, responsible, and transparent AI” and that thousands of startups and open-source developers are already contributing to safety efforts.
The spokesperson criticized Anthropic’s call for government-led testing of AI models, warning that efforts to restrict open-source AI could stifle innovation, reduce security, and undermine democratic access to technology.

Amodei’s advocacy for transparency and oversight includes urging the U.S. government to establish standards for evaluating AI models, including those from foreign companies, for national security risks. He has also opposed temporary regulatory moratoriums, arguing that they could slow progress without improving safety.

The clash underscores a deeper divide in the AI world: one vision emphasizes open collaboration and broad access, while another prioritizes caution, control, and risk mitigation. As AI accelerates toward greater influence, the debate over who shapes its future—and how—has become increasingly urgent.