Get Started with Visual Generative AI on NVIDIA RTX PCs Using ComfyUI and Models Like FLUX.2 and LTX-2
Getting started with visual generative AI on NVIDIA RTX PCs offers creators powerful, local control over AI workflows without relying on cloud services. With tools like ComfyUI and models such as FLUX.2 and LTX-2, users can generate high-quality images and videos directly on their machines, reducing latency and improving privacy.

Begin by visiting comfy.org to download and install ComfyUI for Windows. Launch the application and use the starter template to generate your first image. Experiment with prompts to explore creative possibilities. For example, try: “Cinematic closeup of a vintage race car in the rain, neon reflections on wet asphalt, high contrast, 35mm photography.” Keep prompts clear and concise, focusing on subject, setting, style, and mood, rather than long, complex narratives.

To enhance image quality, use the FLUX.2-Dev Text to Image template from ComfyUI’s “All Templates” section. This workflow includes the necessary nodes for generating images. The model weights, which are large files (over 30GB for FLUX.2), will be downloaded automatically from Hugging Face. Once downloaded, save the workflow using the hamburger menu (top-left) and choose “Save.” This preserves your setup for future use.

For video generation, try the LTX-2 Image to Video template. This model creates controllable, storyboard-style videos from a single image and a text prompt. Use an image generated in FLUX.2-Dev as input, then add a detailed, present-tense prompt describing the scene, action, camera movement, lighting, and audio. For best results, structure your prompt with shot types (wide, medium, close-up), camera motions (dolly in, pan, tilt), and atmospheric details like fog, rain, or golden hour lighting.

When working with LTX-2, be mindful of VRAM usage. Higher resolutions, frame rates, and longer video lengths all increase memory demands.
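The structured prompting advice above (subject, setting, style, and mood for images; shot type, camera motion, lighting, and audio for video) can be sketched as a small helper. This is purely illustrative: the field names are a convenience for assembling text, not an official FLUX.2 or LTX-2 schema.

```python
def build_video_prompt(shot, subject, action, camera, lighting, audio=None):
    """Assemble a present-tense, storyboard-style video prompt from
    structured parts. The fields are illustrative, not a model schema."""
    parts = [
        f"{shot} shot:",
        f"{subject} {action}.",
        f"Camera: {camera}.",
        f"Lighting: {lighting}.",
    ]
    if audio:
        parts.append(f"Audio: {audio}.")
    return " ".join(parts)


prompt = build_video_prompt(
    shot="Medium",
    subject="a vintage race car",
    action="speeds through rain-slick streets",
    camera="slow dolly in",
    lighting="neon reflections, high contrast",
    audio="engine roar over rainfall",
)
print(prompt)
# prints: Medium shot: a vintage race car speeds through rain-slick streets.
#         Camera: slow dolly in. Lighting: neon reflections, high contrast.
#         Audio: engine roar over rainfall.
```

Keeping each element in its own slot makes it easy to vary one aspect at a time (say, swapping the camera motion) while holding the rest of the shot constant.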
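As a rough illustration of why those settings matter, the working memory for video generation grows with the total pixel volume of the clip (width × height × frame count). The numbers below are arbitrary examples, not LTX-2 presets, and real usage also depends on the model and its latent compression.

```python
def relative_video_memory(width, height, fps, seconds):
    """Rough rule of thumb: activation memory scales with total pixel
    volume (width x height x frame count). Illustrative only; actual
    VRAM usage depends on the model and latent compression."""
    frames = fps * seconds
    return width * height * frames


base = relative_video_memory(1280, 704, 24, 5)    # a moderate clip
big = relative_video_memory(1920, 1056, 24, 10)   # 1.5x each dimension, 2x length
print(f"{big / base:.1f}x the baseline footprint")
# prints: 4.5x the baseline footprint
```

Doubling the clip length doubles the footprint, while scaling both dimensions by 1.5x multiplies it by 2.25x, which is why resolution increases hit VRAM hardest.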
NVIDIA and ComfyUI have optimized weight streaming, allowing parts of the model to offload to system memory when GPU VRAM is full, though this may slow performance. Adjust settings to balance quality and speed based on your GPU.

To combine workflows, create a custom setup that uses FLUX.2-Dev to generate an image, then feed it into the LTX-2 template for video creation. Save this new workflow under a new name with a combined prompt for both image and video.

For advanced users, explore 3D-guided generative AI using NVIDIA’s Blueprint. This enables more precise control by integrating 3D scenes and assets into image and video pipelines, ideal for production-level work.

Stay updated on the latest advancements. At CES 2026, NVIDIA unveiled 4K AI video generation acceleration on RTX PCs, enhanced ComfyUI support, and performance boosts across LTX-2, Llama.cpp, Ollama, and other tools. FLUX.2 [klein] models now offer faster, more efficient generation using NVFP4 and NVFP8 precision, enabling high performance across a wide range of RTX GPUs.

Project G-Assist has also improved with a new “Reasoning Mode” that enhances accuracy and allows multi-command execution. It now controls G-SYNC monitors, CORSAIR peripherals, and PC components via iCUE, with upcoming support for Elgato Stream Decks. Developers can use a new Cursor-based plug-in builder for faster integration.

Join the community on the Stable Diffusion subreddit and ComfyUI Discord for help, inspiration, and shared workflows. Follow NVIDIA on social media and subscribe to the RTX AI PC newsletter for ongoing updates.
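Finally, for users comfortable with scripting, saved workflows like the combined FLUX.2-to-LTX-2 setup above can also be queued programmatically. As a hedged sketch: ComfyUI exposes a local HTTP API (port 8188 by default) that accepts workflows exported via its “Save (API Format)” option; the file name and node ID below are hypothetical placeholders from such an export, so substitute the ones from your own workflow.

```python
import json
import urllib.request

# Default address of a locally running ComfyUI server (an assumption;
# adjust the host/port if your installation differs).
COMFY_URL = "http://127.0.0.1:8188/prompt"


def load_workflow(path):
    """Load a workflow exported from ComfyUI via 'Save (API Format)'."""
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)


def set_text_prompt(workflow, node_id, text):
    """Overwrite the text input of a prompt node. The node ID comes from
    your own export; '6' in the usage below is a hypothetical example."""
    workflow[node_id]["inputs"]["text"] = text
    return workflow


def queue_prompt(workflow):
    """POST the workflow to the local server and return its JSON reply."""
    data = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        COMFY_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Example usage (requires a running ComfyUI instance and a real export):
# wf = load_workflow("flux2_to_ltx2_api.json")  # hypothetical file name
# wf = set_text_prompt(wf, "6", "Cinematic closeup of a vintage race car")
# print(queue_prompt(wf))
```

Scripting the queue this way lets you re-run a saved image-to-video pipeline over a batch of prompts without clicking through the graph each time.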
