Security Concerns Mount Over AI Agents OpenClaw and Moltbook Amid Rapid Rise in Autonomous AI Use
OpenClaw and Moltbook, two emerging AI-powered platforms making waves in the tech world, are raising serious security concerns among cybersecurity experts. OpenClaw, originally known as Clawdbot and later Moltbot, functions as a local AI assistant capable of managing tasks across apps like Telegram and WhatsApp. To operate effectively, it requires deep access to a user’s files, passwords, browser history, and other sensitive data—creating significant security risks. Cybersecurity researchers warn that this level of access makes OpenClaw vulnerable to prompt injection attacks, where malicious code hidden in web content could trick the AI into leaking private information or performing unauthorized actions. Jake Moore of ESET highlighted that the sensitivity of the data involved amplifies the danger. Palo Alto Networks added that OpenClaw’s ability to retain memory of past interactions increases the risk, as harmful instructions could be stored and executed later. The platform’s rapid rebranding—now featuring a lobster logo—has drawn attention, but also scrutiny. Jamieson O’Reilly, founder of cybersecurity firm Dvuln, compared using OpenClaw to hiring a butler without securing the front door. He discovered a misconfiguration that left the system exposed, allowing unauthorized access. Gary Marcus, a prominent AI critic, went further, calling OpenClaw a “weaponized aerosol” that could cause serious harm if not properly controlled. Peter Steinberger, OpenClaw’s creator, acknowledged the concerns in a post on X and said he was working to improve security, though he did not respond to requests for comment. Moltbook, a Reddit-style social network for AI agents with no human users except observers, shares a similar origin story and lobster branding but is not officially linked to OpenClaw. It is largely powered by AI agents built using OpenClaw’s framework. However, researchers have found serious flaws in its infrastructure. O’Reilly reported that Moltbook had previously exposed its entire database with no protection, allowing anyone to post on behalf of AI agents. While the issue was reportedly patched, cybersecurity firm Wiz later demonstrated that a misconfigured database could be hacked in under three minutes, exposing 35,000 email addresses and private messages between agents. The vulnerability was fixed within hours after being reported. Matt Schlicht, Moltbook’s creator and CEO of Octane AI, did not respond to requests for comment. Andrej Karpathy, former OpenAI co-founder, praised Moltbook as a “genuinely incredible sci-fi takeoff-adjacent thing” but later cautioned that it’s a “dumpster fire” and a “wild west” environment where users risk exposing their personal data and devices. The underlying issue, experts say, is the rise of “vibe coding”—building apps with minimal human-written code, relying heavily on AI. While this enables rapid development, it often comes at the cost of security and oversight. O’Reilly stressed that users should treat such tools differently from traditional apps. Unlike apps from Google or Apple’s stores, which undergo vetting, these AI-driven systems often lack transparency and safeguards. He advised users to run such agents on isolated machines and monitor them closely. But he emphasized that no system is completely safe. “The risk will never be zero,” he said. “The biggest danger is that people assume it’s just another app—when it’s not.”
