HyperAI
Sample By Step, Optimize By Chunk: Chunk-Level GRPO For Text-to-Image Generation

Yifu Luo, Penghui Du, Bo Li, Sinan Du, Tiantian Zhang, Yongzhe Chang, Kai Wu, Kun Gai, Xueqian Wang


Abstract

Group Relative Policy Optimization (GRPO) has shown strong potential for flow-matching-based text-to-image (T2I) generation, but it faces two key limitations: inaccurate advantage attribution and neglect of the temporal dynamics of generation. In this work, we argue that shifting the optimization paradigm from the step level to the chunk level can effectively alleviate these issues. Building on this idea, we propose Chunk-GRPO, the first chunk-level GRPO-based approach for T2I generation. The insight is to group consecutive steps into coherent "chunks" that capture the intrinsic temporal dynamics of flow matching, and to optimize policies at the chunk level. In addition, we introduce an optional weighted sampling strategy to further enhance performance. Extensive experiments show that Chunk-GRPO achieves superior results in both preference alignment and image quality, highlighting the promise of chunk-level optimization for GRPO-based methods.
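To make the abstract's core idea concrete, here is a minimal sketch of the contrast it draws: GRPO normalizes rewards within a group of samples for the same prompt, and a chunk-level variant then attributes that advantage to aggregated log-probabilities over chunks of consecutive denoising steps rather than to individual steps. All function names, the fixed chunk sizes, and the aggregation scheme below are illustrative assumptions, not the authors' implementation; in particular, the paper derives chunk boundaries from the temporal dynamics of flow matching rather than from a fixed partition.

```python
import numpy as np

def group_relative_advantages(rewards):
    # GRPO-style advantage: normalize rewards across a group of images
    # generated for the same prompt (zero mean, unit variance).
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

def chunk_steps(num_steps, chunk_sizes):
    # Partition denoising step indices [0, num_steps) into consecutive
    # chunks. chunk_sizes is a hypothetical fixed partition; Chunk-GRPO
    # instead groups steps by the sampler's intrinsic temporal dynamics.
    assert sum(chunk_sizes) == num_steps
    chunks, start = [], 0
    for size in chunk_sizes:
        chunks.append(list(range(start, start + size)))
        start += size
    return chunks

def chunk_level_objective(step_logprobs, advantage, chunks):
    # Chunk-level credit assignment: sum per-step log-probs within each
    # chunk, then weight each chunk's total by the sample-level advantage.
    # Step-level GRPO would instead weight every step independently.
    total = 0.0
    for chunk in chunks:
        chunk_logprob = sum(step_logprobs[t] for t in chunk)
        total += advantage * chunk_logprob
    return total
```

For example, with 6 denoising steps split into three chunks of two, a sample whose reward sits above its group mean receives a positive advantage, and that single advantage scales the joint log-probability of each two-step chunk instead of each step in isolation.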
