
Heeding the Inner Voice: Aligning ControlNet Training via Intermediate Features Feedback

Nina Konovalova, Maxim Nikolaev, Andrey Kuznetsov, Aibek Alanov
Abstract

Despite significant progress in text-to-image diffusion models, achieving precise spatial control over generated outputs remains challenging. ControlNet addresses this by introducing an auxiliary conditioning module, while ControlNet++ further refines alignment through a cycle consistency loss applied only to the final denoising steps. However, this approach neglects intermediate generation stages, limiting its effectiveness. We propose InnerControl, a training strategy that enforces spatial consistency across all diffusion steps. Our method trains lightweight convolutional probes to reconstruct input control signals (e.g., edges, depth) from intermediate UNet features at every denoising step. These probes efficiently extract signals even from highly noisy latents, enabling pseudo ground truth controls for training. By minimizing the discrepancy between predicted and target conditions throughout the entire diffusion process, our alignment loss improves both control fidelity and generation quality. Combined with established techniques like ControlNet++, InnerControl achieves state-of-the-art performance across diverse conditioning methods (e.g., edges, depth).
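To make the core idea concrete, below is a minimal sketch of the probe-and-alignment mechanism described in the abstract: a small convolutional probe predicts the control map (e.g., depth) from intermediate UNet features, and an alignment loss penalizes the discrepancy to the target condition across several denoising steps. The names (`ConvProbe`, `alignment_loss`), the channel sizes, and the use of an MSE discrepancy are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvProbe(nn.Module):
    """Lightweight convolutional probe mapping intermediate UNet features
    to a spatial control map (e.g., an edge or depth map)."""

    def __init__(self, in_channels, out_channels=1, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, hidden, kernel_size=3, padding=1),
            nn.SiLU(),
            nn.Conv2d(hidden, out_channels, kernel_size=3, padding=1),
        )

    def forward(self, feats, target_size):
        # Upsample the predicted control map to the resolution of the
        # ground-truth condition before computing the alignment loss.
        pred = self.net(feats)
        return F.interpolate(pred, size=target_size, mode="bilinear",
                             align_corners=False)


def alignment_loss(probe, intermediate_feats, control_target):
    """Average discrepancy between the probe's reconstruction of the control
    signal and the target condition, over features from several denoising
    steps (MSE used here as an illustrative choice)."""
    losses = []
    for feats in intermediate_feats:  # one feature tensor per denoising step
        pred = probe(feats, control_target.shape[-2:])
        losses.append(F.mse_loss(pred, control_target))
    return torch.stack(losses).mean()


# Toy usage: random tensors stand in for UNet mid-block features collected
# at three denoising steps and a depth-map condition.
probe = ConvProbe(in_channels=320)
feats_per_step = [torch.randn(2, 320, 32, 32) for _ in range(3)]
depth_target = torch.rand(2, 1, 256, 256)
loss = alignment_loss(probe, feats_per_step, depth_target)
loss.backward()
```

In a full training setup this alignment term would be added to the standard ControlNet objective (and, per the abstract, can be combined with the ControlNet++ cycle consistency loss), so the gradient also flows into the conditioning module rather than only into the probe.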
