HyperAIHyperAI
Back to Headlines

Google Launches AI Bug Bounty Program Offering Up to $30,000 for Critical Security Flaws in AI Systems

6 days ago

Google has launched a new bug bounty program focused exclusively on identifying security vulnerabilities in its AI-powered products. The initiative rewards researchers for uncovering flaws that enable malicious or unintended behavior in systems powered by large language models and generative AI. The program targets what Google defines as “rogue actions”—security exploits that use AI to cause real-world harm or bypass safeguards. Examples include crafting a prompt that tricks Google Home into unlocking a door, or a prompt injection attack that extracts and sends a user’s entire email history to an attacker’s account. Another previously disclosed flaw allowed a malicious Google Calendar event to trigger smart home devices, such as opening shutters or turning off lights. Google emphasizes that the program does not reward issues related to AI-generated content, such as hallucinations, hate speech, or copyright violations. These types of problems should be reported through the product’s built-in feedback channels so Google’s AI safety teams can analyze the model’s behavior and implement broader safety improvements. The highest reward—$20,000—is available for discovering critical vulnerabilities in Google’s flagship AI products, including Search, Gemini Apps, and core Workspace tools like Gmail and Drive. Additional bonuses for report quality and novelty can increase the payout to a maximum of $30,000. Lower-tier issues, such as attempts to steal model parameters or exploits on secondary products like Jules or NotebookLM, receive smaller payouts. In addition to the bounty program, Google introduced CodeMender, an AI agent designed to automatically identify and patch security flaws in open-source code. According to the company, CodeMender has already helped resolve 72 verified security fixes across open-source projects, with each recommendation reviewed by a human researcher before implementation. The launch underscores Google’s growing focus on securing AI systems as they become more deeply embedded in everyday applications. By incentivizing expert researchers to find and report dangerous exploits, Google aims to strengthen the safety and integrity of its AI infrastructure ahead of increasing competition in the space.

Related Links