Anthropic Launches Claude for Chrome: AI Agent Now Browses and Acts on Your Behalf
Anthropic has launched a research preview of a new browser-based AI agent powered by its Claude models, introducing Claude for Chrome. The feature is currently available to a select group of 1,000 subscribers on Anthropic’s Max plan, which ranges from $100 to $200 per month. The company has also opened a waitlist for other users interested in testing the tool. By installing a Chrome extension, users can now interact with Claude in a sidecar window that retains context across their browsing session. The agent can also perform actions within the browser—such as filling out forms, navigating pages, or saving information—when granted permission by the user. The browser is emerging as a key front in the AI arms race, with companies racing to embed AI agents directly into the user experience. Perplexity recently launched Comet, a browser with an AI agent designed to handle tasks autonomously. OpenAI is reportedly close to unveiling its own AI-powered browser, with capabilities similar to Comet. Google has also been integrating its Gemini AI into Chrome, further intensifying competition. This strategic push comes amid growing scrutiny of Google’s dominance in the browser market. A federal judge is expected to issue a decision soon in a major antitrust case that could force Google to divest Chrome. In response, Perplexity made an unsolicited $34.5 billion offer for the browser, while OpenAI CEO Sam Altman has publicly suggested his company would be willing to acquire it. Anthropic acknowledged in its announcement that AI agents with browser access introduce new safety challenges. Last week, Brave’s security team identified a vulnerability in Comet’s agent that could allow indirect prompt-injection attacks—where malicious code on a webpage could trick the AI into executing harmful actions. Perplexity confirmed the issue has since been resolved. To address these risks, Anthropic has implemented multiple safeguards in its research preview. The company reports that its defenses have reduced the success rate of prompt injection attacks from 23.6% to 11.2%. Users can restrict the agent from accessing specific websites through settings, and by default, Claude is blocked from visiting sites related to financial services, adult content, and pirated material. The agent will also request explicit user approval before performing high-risk actions such as making purchases, publishing content, or sharing personal data. This isn’t Anthropic’s first attempt at building an AI agent that controls a user’s digital environment. In October 2024, the company released an agent capable of managing a PC, but early testing revealed it was slow and unreliable. Since then, the performance of agentic AI has improved significantly. TechCrunch has found that current browser-based agents like Comet and ChatGPT Agent are now fairly dependable for handling straightforward tasks. However, they still face challenges when dealing with more complex or multi-step problems. As AI agents grow more capable, the balance between utility and safety will remain a critical focus for developers and regulators alike.