Adobe Unveils AI Assistant for Design Editing and Expands Google Cloud Partnership
Adobe has unveiled a major expansion of its generative AI capabilities at its annual MAX conference, introducing a conversational AI assistant for its cloud-based design platform, Adobe Express. The feature, now in public beta, functions as a chatbot-style creative agent that lets users of all skill levels make design changes through natural-language prompts. Activated via a toggle in the top-left corner of the Express web app, the assistant replaces the standard interface with a text input box, so users can create or edit visual content simply by describing what they want, such as “a fall-themed wedding invitation” or “a retro poster for a school science fair.” The system uses Adobe’s Firefly AI models and draws on Adobe’s libraries of fonts, stock images, and generative assets to fulfill requests, handling complex edits like changing backgrounds, fonts, or individual design elements while preserving the rest of the layout.

The assistant is hybrid by design: users can switch between AI-driven automation and manual editing at any time, retaining full creative control. It can also perform multi-step tasks, such as resizing designs, converting static images into animations, or reformatting content for different platforms. Adobe’s chief technology officer, Ely Greenfield, described the tool as a “capable teammate” that handles time-consuming, distracting tasks so users can focus on their core creative vision.

This launch is part of Adobe’s broader strategy to integrate AI assistants across its Creative Cloud ecosystem. A similar AI-powered editor is already in private beta for Photoshop, and Adobe plans to extend the functionality to other apps such as Premiere Pro and Lightroom. The company aims to eventually let these assistants work together seamlessly across platforms, learn from user behavior, and adapt to individual creative styles over time.
The new AI features are supported by an expanded partnership with Google Cloud, announced during MAX. Through this collaboration, Adobe customers will gain access to the latest third-party AI models, including Google’s Gemini 2.5 Flash and Black Forest Labs’ Flux.1 Kontext, within Adobe apps such as Firefly, Express, Photoshop, and Premiere Pro. Enterprise users will also be able to use these models through Adobe GenStudio and Firefly Foundry, which lets them customize AI models with their own data to generate on-brand content at scale, with strong data-privacy guarantees.

In addition to the Express AI assistant, Adobe is rolling out new generative AI tools for image and video editing. Photoshop’s Generative Fill now supports third-party models, giving users more control and variety in image editing. Premiere Pro and Lightroom are also getting AI tools that automate complex video- and photo-editing tasks, such as object removal, content generation, and style transfer. Adobe is also showcasing experimental projects during its “Sneaks” preview, building on past innovations such as Project Perfect Blend, which inspired the current Harmonize feature in Photoshop. The company is working to integrate its AI tools with third-party platforms as well, for example by making the Express AI assistant available via ChatGPT.

The announcements underscore Adobe’s commitment to making AI a collaborative creative force, empowering users to work faster, more intuitively, and with greater flexibility. As the company continues to merge human creativity with AI, its focus remains on maintaining user control, protecting intellectual property, and pushing the boundaries of what’s possible in digital design.
