Anthropic Researchers Argue for AI Skills Over More Agents to Boost Real-World Performance
Anthropic researchers are challenging the prevailing trend in AI development, arguing that the industry doesn't need more AI agents—instead, it needs "skills" that give agents real expertise and reusable workflows. At the AI Engineering Code Summit last month, researchers Barry Zhang and Mahesh Murag presented a new vision for how AI can become more effective in practical, real-world settings.

Zhang noted that while the industry has long assumed that agents for different domains—like finance, healthcare, or legal work—would need to be built from the ground up, the underlying agent architecture is far more universal than previously thought. Rather than creating a new agent for every task, he proposed a single, general-purpose agent powered by a library of specialized skills. These "skills" are essentially organized collections of files that package procedural knowledge, making it easy for agents to perform tasks consistently and efficiently. In simple terms, they function like digital playbooks—folders containing the steps, rules, and context an agent needs to complete a job correctly.

Zhang emphasized that current AI agents, despite their advanced capabilities, still lack true expertise and often miss critical context. Skills help bridge that gap by embedding domain-specific knowledge and best practices. Murag shared that since the launch of the skills feature, thousands of them have been created by users across various fields, including accounting, legal, and recruiting—many by people without technical backgrounds. He noted that Fortune 100 companies are already using these skills to teach AI agents about their internal processes, turning them into tools that reflect organizational standards and workflows.

The push for AI agents has been a major focus in tech, with leaders like OpenAI's Sam Altman suggesting that agents are already taking on tasks typically done by junior employees.
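To make the "folder as playbook" idea concrete, here is a minimal sketch of how a general-purpose agent might load such a skill. The folder layout (a `SKILL.md` instruction file plus supporting reference files) and the `load_skill` helper are illustrative assumptions, not Anthropic's exact format or API:

```python
from pathlib import Path
import tempfile

def load_skill(skill_dir: Path) -> dict:
    """Hypothetical loader: read a skill folder into a dict an agent can consult.

    Assumes the folder holds a SKILL.md with step-by-step instructions,
    alongside any supporting files (policies, templates, scripts).
    """
    instructions = (skill_dir / "SKILL.md").read_text()
    resources = {
        p.name: p.read_text()
        for p in skill_dir.iterdir()
        if p.name != "SKILL.md"
    }
    return {
        "name": skill_dir.name,
        "instructions": instructions,
        "resources": resources,
    }

# Build a toy "expense-report" skill on disk, then load it.
with tempfile.TemporaryDirectory() as tmp:
    skill_dir = Path(tmp) / "expense-report"
    skill_dir.mkdir()
    (skill_dir / "SKILL.md").write_text(
        "Steps: 1) validate receipts, 2) apply policy caps, 3) file the report."
    )
    (skill_dir / "policy.md").write_text("Meals are capped at $75/day.")
    loaded = load_skill(skill_dir)
    print(loaded["name"], sorted(loaded["resources"]))
```

The point of the sketch: the agent itself stays generic, while the domain expertise lives in swappable folders that non-engineers can author and update.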
At the Snowflake Summit 2025, Altman described a future where humans act as supervisors, assigning work to agents, reviewing outputs, and refining results—much like managing a team of junior staff. He also predicted that agents could one day help discover new knowledge or solve complex business problems. Microsoft's Asha Sharma echoed this, suggesting that AI agents could eventually flatten corporate hierarchies, reducing the need for multiple layers of management. "The whole kind of organizational construct might start to look different in a few years," she said on Lenny's Podcast in August.

However, not all industry voices are convinced. Guido Appenzeller, a partner at a16z, has warned that the term "agent" is being overused and misapplied. In a recent podcast, he criticized some startups for simply adding a chat interface to a language model and rebranding it as an agent to justify higher prices. "There's a marketing angle to agents," he said, suggesting that the hype may be outpacing real progress.
