Google Unveils Disco Browsing, Enhanced Gemini Audio, Deep Research Agent, and Smarter Virtual Try-On for Shoppers
In December, Google unveiled several new AI advancements designed to enhance everyday digital experiences. One of the key introductions was Disco, a new browsing experiment from Google Labs aimed at simplifying complex online tasks. Users frustrated by managing dozens of open tabs while researching or planning can now turn their scattered browser sessions into focused, interactive tools. At the heart of Disco is GenTabs, an experimental feature that proactively synthesizes open tabs and chat history to create custom, dynamic web applications—streamlining workflows and making it easier to get things done.

Google also upgraded its Gemini audio models, launching the new Gemini 2.5 Flash Native Audio. This version is built to handle complex, natural conversations with improved accuracy, responsiveness, and support for multi-step instructions. The model is now available in AI Studio, Vertex AI, Gemini Live, and, for the first time, Search Live, enabling more fluid and intelligent voice interactions.

A new live speech translation beta in the Google Translate app is also now available, offering real-time translation across 70+ languages. The feature preserves the original speaker's intonation, rhythm, and pacing, allowing for more natural, context-aware communication—making global conversations more authentic and accessible.

Another major update was the release of a more powerful version of the Gemini Deep Research agent, now accessible to developers via the Interactions API. This allows developers to integrate advanced research capabilities—such as navigating complex topics, gathering insights from multiple sources, and synthesizing findings—directly into their own applications using a Gemini API key from Google AI Studio. To support transparency and benchmarking, Google open-sourced the DeepSearchQA benchmark, a new standard for evaluating how well research agents perform on real-world web tasks.
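As a rough illustration of what calling such an agent from your own application might look like, the sketch below assembles an HTTP request using an AI Studio API key. The endpoint path, agent name, and request fields are illustrative assumptions, not the documented Interactions API schema; only the base host and the `x-goog-api-key` header follow the general Gemini API convention.

```python
import json

# Hypothetical sketch: the endpoint path and body fields below are
# assumptions for illustration, not the official Interactions API schema.
API_BASE = "https://generativelanguage.googleapis.com"

def build_research_request(api_key: str, question: str) -> dict:
    """Assemble a request for a long-running deep-research interaction.

    Returns a dict of URL, headers, and JSON body so the caller can send
    it with any HTTP client, e.g. `requests.post(**req)`.
    """
    return {
        "url": f"{API_BASE}/v1beta/interactions",  # assumed path
        "headers": {
            "x-goog-api-key": api_key,             # key from Google AI Studio
            "Content-Type": "application/json",
        },
        "json": {
            "agent": "deep-research",              # assumed agent identifier
            "input": question,                     # the research question
        },
    }

req = build_research_request(
    "YOUR_AI_STUDIO_KEY",
    "Summarize recent approaches to grid-scale battery recycling.",
)
print(json.dumps(req["json"], indent=2))
```

Because deep research runs can take minutes, a real integration would likely poll or stream the interaction's status rather than block on a single response; consult the official Gemini API documentation for the actual endpoint and field names.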
The company also highlighted real-world use cases, including mobile-first AI tools for the visually impaired and assistive technologies that support greater independence for people with cognitive disabilities.

Finally, Google introduced an updated virtual try-on experience for shoppers in the U.S. The new version allows users to try on clothing with just a single selfie, powered by Google's Nano Banana image model. The system generates a realistic, full-body digital avatar, which can be customized with preferred studio-style images and clothing sizes. Once set, shoppers can instantly see how they'd look in billions of products from Google's Shopping Graph—making online fashion shopping more personalized, efficient, and engaging.
