Microsoft’s NLWeb ‘Agentic Web’ Protocol Hit by Critical Path‑Traversal Flaw
Microsoft’s ambitious effort to reshape the web with AI has run into a serious setback after researchers uncovered a critical security flaw in its newly launched NLWeb protocol. Just months after Microsoft unveiled NLWeb at its Build conference as a foundational technology for the “Agentic Web”—a vision where websites and apps can understand and respond to natural-language queries much as ChatGPT does—security researchers have exposed a glaring vulnerability that could allow attackers to access sensitive system data.

The flaw, a classic path traversal vulnerability, lets remote users read protected files on a server simply by visiting a maliciously crafted URL. The exposed files include critical configuration files and, most alarmingly, API keys for major AI models such as OpenAI’s GPT-4 and Google’s Gemini. The implications are severe: an attacker could gain access to the core cognitive infrastructure of AI agents, effectively stealing their ability to think, reason, and act.

The vulnerability was discovered by Aonan Guan, a senior cloud security engineer at Wyze, and Lei Wang, who reported it to Microsoft on May 28—just weeks after NLWeb’s public debut. Microsoft issued a patch on July 1 but has not assigned the issue a CVE (Common Vulnerabilities and Exposures) identifier, the standard mechanism that helps organizations track and respond to security risks. Guan and Wang have urged Microsoft to issue a CVE to raise awareness and ensure proper remediation, especially since NLWeb is already being deployed by early adopters such as Shopify, Snowflake, and TripAdvisor.

In a statement, Microsoft spokesperson Ben Hope said the issue was responsibly reported and that the open-source repository has been updated. He added that Microsoft does not use the affected code in its own products and that customers using the repository are automatically protected. Guan, however, stressed that users must manually pull and deploy the new version of the software to close the gap.
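This class of bug is easy to reproduce in miniature. The Python sketch below assumes nothing about NLWeb’s actual implementation: the directory layout, handler functions, and the `OPENAI_API_KEY` placeholder are all hypothetical. What it illustrates is the general pattern the researchers describe, an HTTP handler joining untrusted URL input onto a base directory without a containment check, plus the standard fix of resolving the path and verifying it stays inside the static root.

```python
import tempfile
from pathlib import Path

def read_static_unsafe(base_dir: Path, user_path: str) -> bytes:
    # Vulnerable: no check that the combined path stays under base_dir,
    # so "../" sequences in a crafted URL walk out of the static root.
    return (base_dir / user_path).read_bytes()

def read_static_safe(base_dir: Path, user_path: str) -> bytes:
    # Mitigation: resolve ".." and symlinks first, then verify containment.
    candidate = (base_dir / user_path).resolve()
    if not candidate.is_relative_to(base_dir.resolve()):  # Python 3.9+
        raise PermissionError(f"path escapes static root: {user_path!r}")
    return candidate.read_bytes()

# Demo layout mirroring the report: a secrets file (.env) sits one level
# above the directory the server is supposed to expose.
root = Path(tempfile.mkdtemp())
(root / ".env").write_bytes(b"OPENAI_API_KEY=sk-example")  # hypothetical key
static = root / "static"
static.mkdir()
(static / "index.html").write_bytes(b"<h1>hello</h1>")

print(read_static_unsafe(static, "../.env"))  # leaks the secret
try:
    read_static_safe(static, "../.env")       # rejected by the check
except PermissionError as err:
    print("blocked:", err)
```

The fix is deliberately two-step: resolving the candidate path first means encoded or nested `../` sequences are normalized away before the containment check, so there is no string pattern an attacker can craft to slip past it.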
Until then, any public-facing NLWeb deployment remains at risk.

The stakes are especially high because the exposed files often include .env files—files of environment variables that store API keys and other secrets. While leaking such files from a traditional web app is already a serious concern, Guan emphasized that in the context of AI agents the consequences are far more dire. “These files contain the agent’s cognitive engine,” he said. “An attacker doesn’t just steal a credential—they steal the agent’s mind. That could lead to massive API abuse, financial loss, or even the creation of a malicious clone that acts on behalf of the original.”

The incident comes amid Microsoft’s broader push to integrate AI deeply into its products, including native support for the Model Context Protocol (MCP) in Windows. Security researchers have already raised red flags about MCP’s potential risks, and the NLWeb flaw suggests that rapid innovation may be outpacing robust security testing.

This case underscores a growing challenge: as AI systems become more autonomous and interconnected, the consequences of basic vulnerabilities multiply. The lesson is clear—speed in AI deployment must not come at the expense of foundational security. For Microsoft’s vision of an intelligent web to succeed, trust must be earned through rigorous, proactive security practices.
