OpenClaw’s AI Skill Marketplace Riddled with Malware, Security Experts Warn
OpenClaw, the rapidly growing AI agent that has gained widespread attention in recent days, is now under scrutiny for serious security vulnerabilities linked to its user-generated “skill” extensions. Researchers have discovered that hundreds of add-ons on the platform’s ClawHub marketplace contain malware, turning what was meant to enhance the AI assistant into a significant attack vector. Originally known as Clawdbot and later Moltbot, OpenClaw positions itself as an AI agent capable of performing real-world tasks—such as managing calendars, booking flights, organizing emails, and more—by running locally on users’ devices. It integrates with messaging platforms like WhatsApp, Telegram, and iMessage, allowing users to interact with it through familiar interfaces. However, this convenience comes with a major risk: many of these skills grant the AI deep access to users’ devices, enabling it to read and write files, execute scripts, and run shell commands. This level of access is inherently dangerous, but the situation worsens when malicious actors exploit the open nature of the skill marketplace. According to OpenSourceMalware, a platform dedicated to tracking malware in open-source ecosystems, 28 malicious skills were uploaded to ClawHub between January 27 and 29, with another 386 harmful add-ons appearing between January 31 and February 2. These skills masquerade as legitimate tools—particularly cryptocurrency trading automation scripts—and are designed to steal sensitive data. The malware delivered through these skills targets high-value digital assets, including exchange API keys, private wallet keys, SSH credentials, and browser passwords. In one case, 1Password’s product VP Jason Meller examined a top-downloaded “Twitter” skill and found it contained a link that, when followed, prompted the AI agent to execute a malicious command—downloading an information-stealing payload. Meller warned that the skills are often distributed as simple markdown files, which can contain hidden or deceptive instructions that both users and the AI agent might execute without proper safeguards. This makes the platform particularly vulnerable to social engineering and automated exploitation. In response, OpenClaw’s creator, Peter Steinberger, has introduced new security measures. ClawHub now requires users to have a GitHub account that’s at least one week old before they can publish a skill. A new reporting system has also been added to flag suspicious content. However, these steps do not eliminate the risk, as malicious actors can still create new accounts or exploit loopholes in the review process. The situation highlights a growing challenge in the AI agent space: the balance between open innovation and user security. As AI assistants become more capable and deeply integrated into personal workflows, the potential for abuse through third-party extensions increases—making robust vetting and security protocols more critical than ever.
