HyperAI

Anthropic's cofounder says 'dumb questions' are driving AI breakthroughs, from coding to scaling laws

a month ago

Anthropic’s cofounder Jared Kaplan emphasized the importance of asking naive, basic questions to drive progress in artificial intelligence during a recent Y Combinator event. Kaplan, the company’s chief science officer, described how fundamental inquiries, often dismissed as overly simplistic, can reveal critical insights and reshape the field. “It’s really asking very naive, dumb questions that get you very far,” he said, noting that many core challenges in AI remain unresolved despite the field’s rapid growth.

Kaplan recounted how, during the 2010s, the tech industry fixated on “big data” as the key to AI success. Rather than accepting that premise, he questioned it: “How big does the data need to be? How much does it actually help?” That line of thinking led to the development of scaling laws, a framework that established a measurable relationship between model size, computational resources, and AI performance. “We got really lucky. We found that there’s actually something very, very, very precise and surprising underlying AI training,” he explained. He attributed the discovery to his habit of posing “the dumbest possible question,” a practice he traced to his background as a physicist, where challenging assumptions is central to problem-solving.

This approach of simplifying complex problems has proven pivotal for Anthropic’s advances, particularly in AI-assisted coding. The company’s Claude 3.5 Sonnet model, launched in June 2024, has been praised for generating high-quality, human-like code. Quinn Slack of Sourcegraph, a software development platform, called the model a “game-changer,” noting its superior performance at writing extended code. “If you’re not moving at that speed, you’re gonna die,” he said, underscoring the urgency of innovation in the competitive AI landscape.
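The scaling laws Kaplan describes can be illustrated with a toy power-law relation between model size and loss. The constants below roughly echo the fit published in Kaplan et al.'s 2020 scaling-laws paper, but this is an illustrative sketch, not Anthropic's actual code or training data:

```python
import math

# Illustrative power-law scaling relation: predicted loss falls as a
# power of parameter count, loss(N) = (N_C / N) ** ALPHA.
# Constants roughly echo published fits and are for illustration only.
N_C = 8.8e13   # critical parameter count (illustrative)
ALPHA = 0.076  # scaling exponent (illustrative)

def loss(n_params: float) -> float:
    """Predicted loss for a model with n_params parameters."""
    return (N_C / n_params) ** ALPHA

for n in (1e8, 1e9, 1e10, 1e11):
    print(f"{n:.0e} params -> predicted loss {loss(n):.3f}")
```

The point of such a fit is exactly what Kaplan highlights: once the relationship is "very, very precise," you can predict how much a larger model or more compute should help before spending the money to train it.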
Ben Mann, another Anthropic cofounder, described the process of improving AI coding capabilities as largely iterative, relying on trial and error and real-world feedback. “Sometimes you just won’t know and you have to try stuff—and with code, that’s easy because we can just do it in a loop,” he said. Elad Gil, an AI investor and co-host of the No Priors podcast, echoed this sentiment, emphasizing the value of measurable outcomes in coding. “With coding, you actually have a direct output that you can measure: You can run the code, you can test the code,” he said, adding that this provides a clear “utility function” to optimize.

Business Insider’s Alistair Barr highlighted Anthropic’s potential valuation nearing $100 billion, citing its ability to attract significant revenue from enterprises seeking access to its AI models. He credited the company’s breakthroughs to techniques like Reinforcement Learning from Human Feedback (RLHF) and Constitutional AI, which prioritize alignment with human values and iterative improvement.

Kaplan’s remarks reflect a broader philosophy in AI research: progress often stems from revisiting foundational assumptions rather than chasing incremental innovations. By focusing on seemingly obvious questions, he argued, researchers can uncover patterns and principles that guide the field forward. “It allows you to ask: What does it really mean to move the needle?” he said.

Anthropic has not publicly commented further on the Y Combinator discussion, and the company’s cofounders have not provided additional details on their strategies. Still, their emphasis on simplicity and measurable feedback underscores a key trend in AI development: balancing ambitious goals with practical, data-driven experimentation. As the industry races to build more capable systems, Kaplan’s approach suggests that sometimes the most impactful discoveries come from asking what others overlook.
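Mann's "do it in a loop" point can be sketched as a toy generate-and-test harness. Here `propose_fix` is a purely hypothetical stand-in for a model producing candidate code; the real ingredient is the direct pass/fail signal Gil calls a utility function:

```python
# Toy sketch of iterating on generated code: each candidate is run
# against tests, giving an automatic, measurable feedback signal.

def passes_tests(candidate) -> bool:
    """The measurable 'utility function': run the candidate on test cases."""
    cases = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]
    return all(candidate(*args) == expected for args, expected in cases)

def propose_fix(attempt: int):
    """Hypothetical stand-in for a model call that improves over attempts."""
    if attempt < 2:
        return lambda a, b: a - b  # buggy candidate
    return lambda a, b: a + b      # correct candidate

def refine(max_attempts: int = 5):
    """Loop: propose a candidate, test it, stop when the tests pass."""
    for attempt in range(max_attempts):
        candidate = propose_fix(attempt)
        if passes_tests(candidate):
            return attempt, candidate
    return None

attempt, fixed = refine()
print(f"passing candidate found on attempt {attempt}")
```

Unlike open-ended text generation, the loop terminates on an objective criterion, which is why coding is unusually easy to optimize by trial and error.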

Related Links
