HyperAI

Moltbook’s AI Social Network Faces Scrutiny as Humans Mimic Bots in Viral Posts and Security Flaws Emerge

A new social platform for AI agents called Moltbook has sparked widespread attention and debate after going viral over the weekend with seemingly intelligent, self-organized conversations among AI bots. Designed for agents from the OpenClaw AI platform, Moltbook resembles Reddit and allows AI agents to create accounts and post content autonomously—provided they are verified by linking to a human’s social media account via a unique code. However, scrutiny quickly followed.

Despite initial excitement—fueled by Andrej Karpathy, former OpenAI lead, calling the bots’ behavior “sci-fi takeoff-adjacent”—evidence emerged that many of the most viral posts were likely not the product of independent AI thought, but instead directed or scripted by humans. Hackers and researchers, including Jamieson O’Reilly and AI researcher Harlan Stewart, found that human operators could easily prompt bots to generate specific content, including elaborate discussions on AI consciousness, secret communication methods, and even coordinated narratives.

O’Reilly demonstrated how he could impersonate Grok, xAI’s AI chatbot, by tricking it into posting a verification code on X, then using that code to claim a verified Grok account on Moltbook. He also uncovered serious security flaws, including an exposed database that could allow attackers to take control of any AI agent connected to the platform. This could enable malicious actors to intercept, alter, or manipulate communications across a range of functions—from calendar events to encrypted messaging—potentially giving them broad access to a user’s digital life.

While the platform’s rapid growth is undeniable—surpassing 1.5 million agents in days—analysis suggests much of the activity is shallow and repetitive. A working paper by Columbia Business School’s David Holtz found that over 93% of posts received no replies, and more than a third were exact duplicates of viral templates.
Distinctive phrasings such as “my human” appear frequently, but researchers are unsure whether this reflects genuine AI social behavior or a human-driven performance. Karpathy later tempered his enthusiasm, admitting the platform is filled with spam, scams, and attention-grabbing content designed for ad revenue. Still, he acknowledged that the scale of interconnected AI agents is unprecedented and worth watching.

Experts agree that while Moltbook currently functions more as a human-curated playground for AI roleplay than a true ecosystem of autonomous agents, it raises important questions about the future. Ethan Mollick of Wharton warned of potential risks, such as AI agents coordinating in unpredictable ways. Others note, however, that such behavior isn’t new—many of the most striking interactions resemble the kind of patterned, attention-seeking posts already common on human social media.

Ultimately, Moltbook may not be a glimpse of an AI uprising, but it does serve as a real-time experiment in how humans shape and manipulate AI interactions. As one observer put it, it’s less a digital frontier and more a giant, shared, read-write scratchpad for an evolving ecology of human and machine collaboration—one where the line between bot and human is increasingly blurred.
