Users flock to open-source Moltbot for always-on AI, but its sweeping access to personal data raises alarms
An open-source AI assistant called Moltbot has quickly gained popularity among users seeking a constant, always-on AI companion, despite significant security and privacy risks. Modeled after the fictional AI Jarvis from Iron Man, Moltbot operates through WhatsApp, allowing users to interact with an AI assistant directly in their messaging app. What makes Moltbot appealing is its ability to run continuously, offering real-time responses and automation features without requiring users to manually activate it each time. This persistent presence mimics the functionality of a personal AI assistant, enabling users to set reminders, answer questions, and even control smart home devices—all through simple text commands in WhatsApp.

However, this convenience comes with serious trade-offs. To function, Moltbot requires full access to a user's WhatsApp account, including messages, media, contacts, and even the ability to send messages on the user's behalf. It also requests access to the user's device files and, in some configurations, cloud storage accounts. This level of access raises major concerns about data privacy and potential misuse. Security experts warn that such broad permissions could expose users to risks like data theft, unauthorized message sending, and account hijacking.

Because Moltbot is open source, its code is publicly available—but that also means malicious actors could modify it or embed hidden vulnerabilities. There's no official oversight or verification process to ensure the code remains safe after modifications.

Despite these red flags, many users are drawn to Moltbot's free, customizable nature and the novelty of having an AI that never sleeps. The tool has seen rapid adoption, particularly among tech-savvy individuals and developers interested in experimenting with AI automation. Experts caution that while open-source projects can foster innovation and transparency, they also demand a high level of user vigilance.
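One way to reason about the "ability to send messages on the user's behalf" risk is to note that such bots typically expose a single send function with no built-in restriction on recipients. The sketch below shows a generic mitigation pattern: wrapping a bot's send function in a contact allowlist. This is a hypothetical illustration, not Moltbot's actual API; `AllowlistedSender`, `send`, and the chat identifiers are invented for the example.

```python
# Hypothetical sketch: a thin allowlist wrapper around a bot's send function.
# None of these names come from Moltbot; they are illustrative only.

class BlockedChatError(Exception):
    """Raised when the bot tries to message a chat outside the allowlist."""

class AllowlistedSender:
    def __init__(self, send_fn, allowed_chats):
        self._send = send_fn              # the underlying send function
        self._allowed = set(allowed_chats)

    def send(self, chat_id, text):
        # Refuse to deliver anything addressed outside the allowlist.
        if chat_id not in self._allowed:
            raise BlockedChatError(f"blocked: {chat_id} not in allowlist")
        return self._send(chat_id, text)

# Usage: wrap whatever send function the bot exposes.
outbox = []
sender = AllowlistedSender(lambda chat, text: outbox.append((chat, text)),
                           allowed_chats={"family-group"})
sender.send("family-group", "Reminder: dentist at 3pm")
try:
    sender.send("unknown-contact", "exfiltrated data")
except BlockedChatError as err:
    print(err)  # blocked: unknown-contact not in allowlist
```

The design point is that the wrapper, not the AI, holds the final say on who can be messaged, so a compromised or misbehaving model cannot silently contact arbitrary people.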
Users are urged to thoroughly review the code, understand the permissions they’re granting, and avoid using such tools with sensitive personal or professional data. In the broader context of AI’s growing integration into daily life, Moltbot exemplifies both the potential and the pitfalls of unregulated, user-driven AI tools. As demand for always-on assistants rises, the balance between functionality and security remains a critical challenge.
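The code-review advice above can be partly automated for a first pass. A minimal sketch, assuming the project under review contains Python source: it parses a file's syntax tree and flags calls that commonly warrant scrutiny. The pattern list is illustrative, and a scan like this supplements rather than replaces a careful human audit.

```python
import ast

# Call names worth flagging during a first-pass review (illustrative list).
RISKY_CALLS = {"eval", "exec", "__import__", "compile"}

def flag_risky_calls(source: str):
    """Return (line_number, call_name) pairs for calls to RISKY_CALLS names."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                hits.append((node.lineno, node.func.id))
    return hits

# Usage: run against any Python source you are about to trust.
sample = "x = eval(input())\nprint(x)\n"
print(flag_risky_calls(sample))  # [(1, 'eval')]
```

A hit is a prompt to read the surrounding code, not proof of malice; plenty of legitimate code uses these calls.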
