Google Gemini Now Verifies AI-Generated Videos Using SynthID Watermark
Google has expanded the AI verification capabilities of its Gemini app to include videos created or edited with its own AI tools. Users can now upload a video and ask Gemini, “Was this generated using Google AI?” The app analyzes both the visuals and the audio for Google’s proprietary SynthID watermark, which is embedded during AI generation or editing. Rather than a simple yes-or-no answer, Gemini provides detailed feedback, identifying the specific timestamps where the SynthID watermark appears in the video or audio.

The feature builds on a similar tool Google introduced in November for images, which likewise works only on content made or modified with Google’s AI. Google describes SynthID as an “imperceptible” watermark: it is designed to be invisible to the human eye and inaudible to the human ear while remaining detectable by systems like Gemini. The effectiveness of such watermarks remains uncertain, however, since they can sometimes be removed or altered. Tracing AI-generated video was a key issue for OpenAI when it launched its Sora video model, whose output was difficult to verify in the absence of robust, widely adopted detection mechanisms.

While Google’s Nano Banana AI image generation model includes C2PA metadata, part of a broader industry effort to tag AI content, there is still no universal standard for identifying AI-generated media across social platforms. This lack of coordination allows deepfakes and manipulated content to circulate undetected.

Gemini’s video verification tool supports files up to 100 MB and 90 seconds in length. The feature is available in all languages and regions where the Gemini app is accessible, making it a widely deployable tool for users concerned about the authenticity of digital content.
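For developers building the upload step into a workflow, the two published limits above (100 MB and 90 seconds) can be pre-checked before sending a file. The sketch below is a minimal, hypothetical helper, not part of any Google API; the function name and the separately supplied duration are assumptions.

```python
import os

# Limits stated for Gemini's video verification feature:
# files up to 100 MB and up to 90 seconds in length.
MAX_BYTES = 100 * 1024 * 1024  # 100 MB
MAX_DURATION_S = 90            # 90 seconds

def within_gemini_limits(path: str, duration_s: float) -> bool:
    """Return True if a video file falls within the stated upload limits.

    The duration must be measured separately (e.g. with ffprobe); this
    helper only checks the two published constraints.
    """
    return os.path.getsize(path) <= MAX_BYTES and duration_s <= MAX_DURATION_S
```

A caller would measure the clip's duration with a media tool, then gate the upload on this check to avoid a rejected request.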
