
Google API Keys Now Expose Gemini Data: Security Flaw Lets Attackers Access Sensitive Info via Public Keys

Google has long advised developers that API keys—such as those used for Google Maps, Firebase, and other public services—are not secrets and can safely be embedded in client-side code. This guidance rested on the assumption that these keys served only as project identifiers for billing and usage tracking, and could be restricted through mechanisms like HTTP referrer allow-lists.

With the introduction of the Gemini API, that assumption no longer holds. When the Generative Language API (Gemini) is enabled in a Google Cloud project, every existing API key in that project—regardless of its original purpose—silently gains access to sensitive Gemini endpoints. This includes the ability to read uploaded files, access cached data, and consume AI compute resources, all without any warning, confirmation, or notification to the key's owner. The result is a dangerous scenario known as retroactive privilege escalation.

A developer might have deployed a Maps API key on a public website years ago, following Google's own instructions. Later, when a team member enables the Gemini API in the same project, that same public key becomes a credential capable of accessing private AI data and services—without any change to the key itself or any alert to the developer.

The root issue lies in Google Cloud's use of a single key format (AIza...) for both public identifiers and sensitive authentication. By default, new API keys are unrestricted and can access any enabled API in the project, including Gemini. This insecure default (CWE-1188), combined with improper privilege management (CWE-269), means that keys designed for benign, public use can now unlock powerful AI capabilities.

Truffle Security discovered 2,863 live Google API keys in public web pages—originally intended for services like Maps—that now authenticate to Gemini. These included keys from major financial institutions, security firms, recruiting platforms, and even Google itself.
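The escalation path can be illustrated with a short sketch. Google API keys share the recognizable `AIza...` format, so a simple regex can flag candidates embedded in client-side code; probing the Generative Language API's public `models` endpoint with a flagged key would then reveal whether it authenticates to Gemini. The page source and key below are fabricated, and the probe URL is only constructed, not requested:

```python
import re

# Google API keys commonly take the form "AIza" followed by 35 URL-safe characters.
GOOGLE_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

# If the key's project has the Generative Language API enabled, a GET against
# this endpoint succeeds. The probe is only sketched here, not performed.
GEMINI_PROBE = "https://generativelanguage.googleapis.com/v1beta/models?key={key}"

def find_candidate_keys(page_source: str) -> list[str]:
    """Extract AIza-format keys embedded in client-side code."""
    return sorted(set(GOOGLE_KEY_RE.findall(page_source)))

# Hypothetical page source with an embedded (fake) Maps key.
html = ('<script src="https://maps.googleapis.com/maps/api/js'
        '?key=AIzaFAKEFAKEFAKEFAKEFAKEFAKEFAKEFAKEFAK"></script>')
for key in find_candidate_keys(html):
    print(GEMINI_PROBE.format(key=key))
```

This is essentially the pattern Truffle Security describes: keys published for one service, tested against another that was never meant to accept them.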
In one case, a key embedded in a Google product's public website, deployed since 2023, was used to successfully access the Gemini API, despite never being intended for AI use.

The vulnerability was reported to Google on November 21, 2025. Initially dismissed as "intended behavior," the report gained traction after Truffle Security provided evidence from Google's own infrastructure. Google reclassified the issue as a critical bug, confirmed it was actively working on a fix, and began restricting exposed keys from accessing Gemini. However, a permanent architectural fix has not yet been publicly released.

For developers and organizations using Google Cloud, immediate action is required. First, check all projects for the enabled Generative Language API. Then, audit every API key in those projects to ensure none have unintended access to Gemini. If any key with Gemini access is publicly exposed—embedded in JavaScript, in public repositories, or visible in web source code—rotate it immediately. Tools like TruffleHog can help scan codebases and web assets to detect live keys with Gemini access.

The broader lesson is clear: as AI systems are layered onto legacy infrastructure, old assumptions about security no longer apply. Public identifiers can become secret credentials overnight. Organizations must treat all API keys with greater scrutiny, especially in environments where new AI services are enabled.
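The audit step above can be sketched as a filter over key metadata. Assuming JSON in the shape returned by `gcloud services api-keys list --format=json` (the field names `restrictions.apiTargets` and `displayName` follow the API Keys API, but treat the exact shape as an assumption), any key that is not pinned to specific services, or that explicitly targets the Generative Language API, deserves review:

```python
import json

def keys_needing_review(api_keys_json: str) -> list[str]:
    """Flag keys whose restrictions do not pin them to specific APIs.

    A key with no apiTargets restriction can call every API enabled in its
    project -- including generativelanguage.googleapis.com if that service
    was ever switched on.
    """
    flagged = []
    for key in json.loads(api_keys_json):
        targets = key.get("restrictions", {}).get("apiTargets", [])
        services = {t.get("service") for t in targets}
        if not targets or "generativelanguage.googleapis.com" in services:
            flagged.append(key["displayName"])
    return flagged

# Hypothetical key inventory: one unrestricted key, one pinned to Maps only.
sample = json.dumps([
    {"displayName": "maps-web-key", "restrictions": {}},
    {"displayName": "pinned-key",
     "restrictions": {"apiTargets": [{"service": "maps-backend.googleapis.com"}]}},
])
print(keys_needing_review(sample))  # → ['maps-web-key']
```

Flagged keys should be given explicit API restrictions, and rotated if they were ever exposed publicly.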
