AI-Powered Ransomware 3.0 Can Autonomously Execute Full Attacks, NYU Study Reveals
New research from the NYU Tandon School of Engineering shows that large language models (LLMs) can now autonomously execute full ransomware attacks without human intervention. The study, posted on the arXiv preprint server, introduces a prototype system dubbed "Ransomware 3.0," also known as "PromptLock," which automates every stage of a ransomware campaign: system reconnaissance, identification of valuable data, encryption or exfiltration of files, and generation of personalized ransom notes.

The system was developed as a proof of concept to demonstrate the potential risks of AI in cybercrime. It works by embedding instructions in its code; when triggered, the malware connects to open-source LLMs and generates custom Lua scripts tailored to each victim's environment. Unlike traditional malware with fixed code, this approach produces unique attack scripts on every run, making detection extremely difficult for conventional security tools that rely on known signatures or behavioral patterns.

The prototype was tested on personal computers, enterprise servers, and industrial control systems, including Raspberry Pi devices. It successfully mapped systems and identified sensitive files with accuracy rates between 63% and 96%, depending on the environment. The AI-generated scripts were cross-platform, running unmodified on Windows, Linux, and embedded systems.

The research team uploaded the prototype to VirusTotal, a platform security researchers use to evaluate suspicious files. The files were flagged as malicious and appeared to function as real ransomware, prompting cybersecurity firm ESET to initially believe it had discovered a live AI-powered attack in the wild. That reaction underscored how convincing and dangerous such systems could be, even when they are only experimental.

Lead author Md Raz, a doctoral candidate in electrical and computer engineering, emphasized the seriousness of the findings. Although the system is not functional outside a controlled lab setting, its ability to fool experts highlights the growing sophistication of AI-driven threats. The use of open-source LLMs, which lack the safety filters of commercial AI services, lets the malware bypass ethical constraints and generate unpredictable, malicious code.

Economically, the system is highly efficient. Each full attack consumes about 23,000 AI tokens, costing roughly $0.70 through commercial APIs; with open-source models, the cost drops to near zero. This drastically lowers the barrier to entry, allowing less skilled attackers to launch complex ransomware campaigns that once required elite technical teams and significant infrastructure.

The study also demonstrates the potential for psychological manipulation. By analyzing discovered files, the AI can craft personalized ransom notes that reference specific data, increasing the pressure on victims to pay.

The researchers conducted the work under strict ethical guidelines and published their findings to help the cybersecurity community prepare. They recommend monitoring unusual file-access patterns, restricting outbound connections to AI services (a rough sketch of that idea appears below), and developing new detection methods capable of identifying AI-generated attack behaviors.

While Ransomware 3.0 remains a research prototype, the study serves as a critical warning: the era of autonomous, AI-driven cyberattacks is beginning, and defenses must evolve quickly.
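Of the recommendations above, restricting outbound connections to hosted AI services is the easiest to picture in code. The sketch below is not taken from the study: the hostname list, the polling approach, and the use of the psutil library are assumptions made purely for illustration, and a production control would live in a firewall, proxy, or egress allowlist rather than a script. It simply flags established TCP connections from local processes to a few well-known LLM API hostnames.

```python
# Minimal sketch, assuming psutil is installed and that the hostnames below
# stand in for whatever LLM API endpoints an organization's policy covers.
# Illustrative only; not the detection method proposed in the NYU study.
import socket
import psutil

# Hypothetical egress policy: hosted LLM APIs that ordinary workstations
# in this environment have no business contacting directly.
SUSPECT_HOSTS = [
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
]

def resolve(hosts):
    """Resolve each hostname to its current set of IPv4 addresses."""
    ips = set()
    for host in hosts:
        try:
            for info in socket.getaddrinfo(host, 443, socket.AF_INET):
                ips.add(info[4][0])
        except socket.gaierror:
            pass  # skip hostnames that do not resolve
    return ips

def check_egress():
    """Print an alert for each established TCP connection to a suspect IP."""
    suspect_ips = resolve(SUSPECT_HOSTS)
    # Note: enumerating all connections may require elevated privileges.
    for conn in psutil.net_connections(kind="tcp"):
        if conn.status != psutil.CONN_ESTABLISHED or not conn.raddr:
            continue
        if conn.raddr.ip in suspect_ips:
            try:
                name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
            except psutil.NoSuchProcess:
                name = "exited"
            print(f"ALERT: {name} (pid={conn.pid}) -> {conn.raddr.ip}:{conn.raddr.port}")

if __name__ == "__main__":
    check_egress()
```

Run periodically, a check like this would surface a workstation process quietly talking to an LLM API, one of the telltale behaviors the researchers suggest defenders watch for; the same idea is usually enforced more robustly at the network edge.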
