AI Security Gap: Experts Warn Traditional Cyber Teams Are Unprepared for AI's Unique Risks
An AI security researcher warns that most companies are ill-equipped to handle the unique security challenges posed by artificial intelligence, despite having traditional cybersecurity teams in place. Sander Schulhoff, a pioneer in prompt engineering and an expert on AI system vulnerabilities, said on a recent episode of "Lenny's Podcast" that the current cybersecurity workforce is not trained to anticipate or respond to how AI systems actually fail. Schulhoff, who runs a prompt engineering platform and organizes AI red-teaming hackathons, emphasized a fundamental disconnect between how traditional cybersecurity teams operate and how AI systems behave. Unlike conventional software, which can be patched by fixing specific bugs, AI models don’t fail in predictable, code-level ways. "You can patch a bug, but you can't patch a brain," he said, highlighting the challenge of addressing emergent behaviors in large language models. He explained that while cybersecurity professionals are trained to identify and fix known vulnerabilities, they often overlook the risk of adversarial manipulation—such as tricking an AI into generating harmful code or providing false information through carefully crafted inputs. "There's this disconnect about how AI works compared to classical cybersecurity," Schulhoff noted. "A team might review an AI system for technical flaws but never ask, 'What if someone manipulates it with a clever prompt?'" The real danger, he said, lies in the fact that AI systems can be influenced not through code, but through language. A malicious user could exploit this by using subtle, indirect instructions to bypass safeguards. Schulhoff stressed that the ideal security professional of the future would have dual expertise in both AI and cybersecurity—someone who knows how to contain a model’s output, for instance, by running it in a secure container to prevent it from affecting the broader system. He also criticized the growing number of AI security startups that claim to offer comprehensive protection through automated guardrails and red-teaming tools. "That's a complete lie," he said, arguing that the complexity and variability of AI manipulation make it impossible to build a one-size-fits-all solution. He predicted a market correction in which many of these companies will see their revenue dry up as organizations realize the tools don’t deliver on their promises. The AI security market has seen a surge in investment, with major players like Google making high-profile moves. In March, Google acquired cybersecurity firm Wiz for $32 billion, a deal driven by the need to strengthen cloud security amid the rise of AI and increasingly complex multi-cloud environments. Google CEO Sundar Pichai noted that AI is introducing "new risks" in a landscape where organizations are increasingly relying on distributed, hybrid systems. As concerns grow, a wave of startups has emerged, offering tools to monitor, test, and secure AI systems. But Schulhoff’s warning underscores a critical gap: the need for deeper expertise, not just more tools. The future of AI security, he believes, lies in people who understand both the technology and the human element of risk.
