Report: Unauthorized group accesses Anthropic's Mythos tool
Reports indicate that an unauthorized group has gained access to Mythos, the enterprise cybersecurity AI tool recently unveiled by Anthropic. The breach allegedly occurred through a third-party vendor environment, raising concerns about the security of the limited-release system.

According to a Bloomberg report, a private Discord group, whose members have not been publicly identified, obtained access to the tool. The group reportedly included an individual employed by a third-party contractor working with Anthropic. Members of the channel, known for seeking information on unreleased AI models, have reportedly used Mythos regularly since gaining entry on the day it was publicly announced, and they provided screenshots and live demonstrations to journalists to prove their access. The group claims its intent is to explore the model's capabilities rather than cause harm.

An Anthropic spokesperson confirmed to TechCrunch that the company is investigating reports of unauthorized access to the Claude Mythos Preview. However, Anthropic stated that its investigation has so far found no evidence that these activities have affected the company's own systems.

The unauthorized users reportedly accessed the model by making an educated guess about its online location, based on formatting patterns used in previous Anthropic releases. Mythos was initially released to a select group of vendors, including major technology firms such as Apple, as part of an initiative known as Project Glasswing. Anthropic designed this limited-release strategy to keep the tool out of the hands of malicious actors. The company has emphasized that while Mythos is designed to bolster enterprise security, it could be weaponized against corporate networks if misused. The incident highlights the risks of releasing powerful AI tools to a restricted audience ahead of broader deployment.
If the reports of unauthorized use prove accurate, the incident could undermine the security assurances Anthropic provided to its enterprise partners. The company had hoped that a controlled rollout would mitigate the risk of the technology being used for hacking. The episode is a reminder of how difficult it is to secure next-generation AI systems even when access is tightly controlled through third-party vendors.
