Claude’s Rise: Can Anthropic Sustain Its AI Coding Momentum With Opus 4.6 Amid Growing Competition?
Claude has surged into the spotlight, capturing the attention of developers, executives, and tech enthusiasts alike. Since the holidays, Anthropic's Claude Code has become a go-to tool for building everything from medical imaging dashboards to AI-powered T-shirt design contests, fueling a wave of excitement across industries. The momentum began with the release of Opus 4.5, which many describe as a turning point: it enabled the AI to handle complex, long-horizon tasks with minimal supervision. That shift transformed Claude from a helpful assistant into a capable agent that can autonomously execute multi-step workflows, from coding to data analysis.

The numbers reflect the surge. Data from Caliber shows a 13-point spike in Anthropic's word-of-mouth exposure between late December and mid-January, while OpenAI's dipped slightly. By November 2025, Claude Code had crossed $1 billion in revenue, and the company is now reportedly in talks to raise up to $20 billion at a $350 billion valuation, reflecting strong investor confidence.

Now, with the launch of Opus 4.6, Anthropic aims to solidify its lead. The new model is described as a direct upgrade, offering faster performance and enhanced precision for agentic tasks. Dianne Na Penn, Anthropic's head of research product management, says Opus 4.6 can "think" longer and deeper on complex problems, moving beyond simple task execution to true autonomous reasoning.

The impact has been profound. Boris Cherny, creator of Claude Code, used the tool to build itself while on a houseboat in Copenhagen, deploying dozens of AI agents to generate over 300 code pull requests in a single month, his most productive period yet. At Anthropic, 70% to 90% of code is now written with Claude Code, and 90% of that code is generated by the tool itself. The shift wasn't sudden: over the past year, Anthropic expanded its enterprise customer base eightfold, with many clients generating over $1 million in annual revenue.
The company also outperformed rivals in Scale AI's "Model of the Year" awards, winning in categories like "best agentic model." Part of the success can be traced to a holiday promotion that doubled rate limits, drawing in casual users who were impressed by the results. But the real catalyst was Opus 4.5, which delivered a noticeable leap in capability. Engineers like Josh Albrecht of Imbue and Allie K. Miller of Open Machine described the experience as transformative: going from hand-holding the AI to simply stating a goal and watching it execute.

Still, maintaining this momentum is no guarantee. OpenAI has responded with its standalone Codex app for Apple devices, and open-source alternatives like OpenHands and OpenCode are emerging, especially among cost-conscious teams. Security concerns also linger: Sonar's research found Opus 4.5 produced more subtle vulnerabilities than OpenAI's GPT-5.2, though fewer than Gemini 3 Pro. Anthropic says it's addressing these in Opus 4.6 through improved monitoring and evaluation.

Beyond performance, brand perception plays a role. While OpenAI draws scrutiny over political donations and polarizing products like Sora, and xAI's Grok courts controversy of its own, Anthropic has cultivated a reputation as a more stable, professional player. Users appreciate its focus on productivity and reliability, without the noise or brand risks.

As the AI race intensifies, Anthropic's challenge is clear: keep delivering on the promise of autonomous agents while managing trust, security, and competition. For now, many users have found a workflow that works, and they're not eager to leave.
