Alchemist: Unlocking Efficiency in Text-to-Image Model Training through Meta-Gradient Data Selection

Kaixin Ding, Yang Zhou, Xi Chen, Miao Yang, Jiarong Ou, Rui Chen, Xin Tao, Hengshuang Zhao

Abstract

Recent advances in text-to-image (T2I) generation models such as Imagen, Stable Diffusion, and FLUX have brought remarkable improvements in visual quality. Yet the performance of these models remains fundamentally bounded by the quality of their training data. Web-crawled or synthetically generated image datasets often contain low-quality or redundant samples, which degrade visual fidelity, destabilize training, and waste computation. Effective data selection has therefore become critical for improving data efficiency. Existing approaches to T2I data curation rely on costly manual filtering or on heuristic scoring along single-dimensional attributes. Although meta-learning-based methods have been explored for large language models (LLMs), they have not yet been adapted to visual generation. We therefore propose Alchemist, a meta-gradient-based framework for selecting a well-suited subset from large-scale text-image pairs. Our method automatically learns to estimate each sample's influence by iteratively optimizing the training model from a data-centric perspective. Alchemist consists of two main stages: data rating and data pruning. We train a lightweight rater to estimate each sample's influence from gradient information, enhanced with multi-granularity perception, and then apply a Shift-Gsample strategy to select informative subsets for efficient model training. Alchemist is the first automatic, scalable, meta-gradient-based data selection framework for T2I training. Experiments on synthetic and web-crawled datasets show that Alchemist consistently improves visual quality and downstream performance, and that training on only 50% of Alchemist-selected data can outperform training on the full dataset.

One-sentence Summary

Researchers from The University of Hong Kong, South China University of Technology, and Kuaishou Technology's Kling Team propose Alchemist, a meta-gradient-based framework for efficient Text-to-Image training that automatically selects high-impact data subsets. Unlike prior heuristic or manual methods, it employs a gradient-informed rater with multi-granularity perception and optimized sampling to identify informative samples, enabling models trained on just 50% of Alchemist-selected data to surpass full-dataset performance in visual fidelity and efficiency.

Key Contributions

  • Text-to-Image models like Stable Diffusion face performance bottlenecks due to low-quality or redundant samples in web-crawled training data, which degrade visual fidelity and cause unstable training; existing data selection methods rely on costly manual curation or single-dimensional heuristics that fail to optimize for downstream model performance.
  • Alchemist introduces a meta-gradient-based framework that automatically rates data samples using gradient-informed multi-granularity perception and employs a shift-Gaussian sampling strategy to prioritize mid-to-late scored samples, which exhibit more informative gradient dynamics and avoid overfitting from top-ranked plain samples.
  • Validated on synthetic and web-crawled datasets, Alchemist-selected subsets (e.g., 50% of data) consistently outperform full-dataset training in visual quality and model performance, with empirical evidence showing optimal data lies in mid-to-late score ranges that balance learnability and diversity.

Introduction

The authors address data selection for text-to-image (T2I) model training, where efficiently identifying high-quality text-image pairs from large datasets is critical for reducing computational costs and improving model performance. Prior approaches typically use Top-K pruning—retaining only the highest-rated samples—but this often causes rapid overfitting due to uninformative, low-gradient samples in the top tier, while ignoring more dynamically valuable mid-to-late range data. The authors demonstrate that top-ranked samples exhibit minimal gradient changes during training, contributing little to learning, whereas mid-to-late range samples drive effective model updates but are discarded by conventional methods. Their key contribution is the pruning-based shift-Gaussian sampling (Shift-Gsample) strategy: it first discards the top n% of samples to avoid overfitting, then applies Gaussian sampling centered in the mid-to-late percentile range to balance data informativeness and diversity. This approach selectively retains detailed yet learnable samples, filters out plain or chaotic data, and achieves superior performance by aligning with human intuition for robust T2I training.

Method

The authors leverage a meta-gradient-based framework called Alchemist to enable data-efficient training of Text-to-Image (T2I) models by automatically selecting high-value subsets from large-scale text-image pairs. The overall pipeline consists of two principal stages: data rating and data pruning, which together form a scalable, model-aware data curation system. Refer to the framework diagram for a high-level overview of the workflow.

In the data rating stage, a lightweight rater network parameterized by $\mu$ is trained to assign a continuous weight $W_{x_i}(\mu) \in [0,1]$ to each training sample $x_i$. This weight reflects the sample's influence on the downstream model's validation performance. The rater is optimized via a bilevel formulation: the inner loop updates the proxy T2I model $\theta$ using a weighted loss over the training set, while the outer loop adjusts $\mu$ to minimize the validation loss. To avoid the computational burden of full inner-loop optimization, the authors adopt a meta-gradient approximation. During training, a reference proxy model $\hat{\theta}$ is warmed up using standard training data, while the primary model $\theta$ is updated using a combination of validation gradients and weighted training gradients:

$$\theta_{k+1} = \theta_k - \beta_k \left( g_{\mathrm{val}}(\theta_k) + g_{\mathrm{train}}(\theta_k, \mu_k) \right)$$

where $g_{\mathrm{train}}(\theta_k, \mu_k) = \sum_{x_i \in \mathcal{D}_{\mathrm{train}}} W_{x_i}(\mu_k)\, \nabla_\theta \mathcal{L}(\theta_k; x_i)$. The rater's parameters are then updated using an approximate gradient derived from the difference in loss between the primary and reference models:

$$\mu_{k+1} = \mu_k - \alpha_k\, \mathcal{L}(\theta_k; x_i)\, \nabla_\mu W_{x_i}(\mu_k)$$
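
A minimal PyTorch-style sketch of these two updates follows, under stated assumptions: `per_sample_loss` and `rater` are hypothetical helpers, the optimizer is plain SGD for clarity, and the warmed-up reference model $\hat{\theta}$ (which enters the full approximation through a loss difference) is omitted. This is not the authors' implementation.

```python
import torch

def meta_step(model, rater, train_batch, val_batch, per_sample_loss,
              beta=1e-4, alpha=1e-3):
    """One meta-gradient step: the theta and mu updates shown above.

    per_sample_loss(model, batch) -> tensor [B] with L(theta; x_i)
    rater(batch)                  -> tensor [B] with W_{x_i}(mu)
    """
    # Detached training losses L(theta_k; x_i), kept to scale the rater
    # gradient after theta has been updated.
    with torch.no_grad():
        scale = per_sample_loss(model, train_batch)

    # --- theta update: g_val plus weighted g_train ----------------------
    with torch.no_grad():
        w = rater(train_batch)                       # W_{x_i}(mu_k), frozen here
    loss = per_sample_loss(model, val_batch).mean() \
         + (w * per_sample_loss(model, train_batch)).sum()
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p -= beta * p.grad                   # theta_{k+1}

    # --- mu update: alpha_k * L(theta_k; x_i) * grad_mu W_{x_i}(mu_k) ---
    w = rater(train_batch)                           # recomputed with grad enabled
    rater.zero_grad()
    (scale * w).sum().backward()
    with torch.no_grad():
        for p in rater.parameters():
            if p.grad is not None:
                p -= alpha * p.grad                  # mu_{k+1}
```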

To stabilize training, weights are normalized per batch via softmax:

$$W_{x_i} = \frac{\exp(\hat{W}_{x_i})}{\sum_j \exp(\hat{W}_{x_j})}$$

To account for batch-level variability and enhance robustness, the rater incorporates multi-granularity perception. It includes two parallel MLP modules: an Instance MLP that processes individual sample features and a Group MLP that computes a batch-level weight from pooled statistics (mean and variance) of the batch. The final weight for each sample is the product of its instance weight and batch weight, enabling the rater to capture both local distinctiveness and global context.
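
A plausible sketch of such a rater is given below: an Instance MLP scores per-sample features, a Group MLP scores pooled batch statistics (mean and variance), the two weights are multiplied, and the batch-level softmax above normalizes the result. The feature extractor, layer widths, and activations are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class MultiGranularityRater(nn.Module):
    """Rater with instance-level and group-level branches (illustrative).

    feat_dim: dimensionality of per-sample features (e.g. pooled
              text-image embeddings); the feature extractor is assumed.
    """
    def __init__(self, feat_dim: int, hidden: int = 128):
        super().__init__()
        self.instance_mlp = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        # Group branch sees the batch mean and variance, concatenated.
        self.group_mlp = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: [B, feat_dim] per-sample features for one batch.
        w_inst = self.instance_mlp(feats).squeeze(-1)           # [B]
        stats = torch.cat([feats.mean(0), feats.var(0)], dim=-1)
        w_group = self.group_mlp(stats)                         # [1]
        w_raw = w_inst * w_group                                # product of weights
        return torch.softmax(w_raw, dim=0)                      # per-batch normalization
```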

In the data pruning stage, the authors introduce the Shift-Gsample strategy to select a subset of the rated data. This strategy prioritizes samples from the middle-to-late region of the rating distribution: those that are neither too easy (low gradient impact) nor too hard (outliers or noise), yet remain informative and learnable. In the authors' comparisons, this approach outperforms random sampling, top-K selection, and block-based methods in downstream FID across sample budgets; a minimal sketch of the sampling scheme follows.
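
The sketch below illustrates the Shift-Gsample idea under stated assumptions: drop the top-ranked fraction of rated samples, then draw the remaining budget with Gaussian weights centered in the mid-to-late percentile range. The cutoff, center, and spread are illustrative hyperparameters, not values from the paper.

```python
import numpy as np

def shift_gsample(scores: np.ndarray, keep_ratio: float = 0.5,
                  drop_top: float = 0.05, center: float = 0.4,
                  sigma: float = 0.15, seed: int = 0) -> np.ndarray:
    """Select indices via shift-Gaussian sampling over rating ranks.

    scores:   per-sample rater scores, higher = more valuable.
    drop_top: fraction of top-ranked samples discarded up front to avoid
              overfitting to plain, low-gradient data.
    center:   Gaussian center as a rank quantile (0 = highest score),
              placed in the mid-to-late region of the distribution.
    """
    rng = np.random.default_rng(seed)
    order = np.argsort(-scores)                  # rank 0 = highest score
    n = len(scores)
    kept = order[int(drop_top * n):]             # shift: discard the top n%

    # Gaussian weight over normalized rank positions of the survivors.
    q = np.linspace(drop_top, 1.0, num=len(kept))
    w = np.exp(-0.5 * ((q - center) / sigma) ** 2)
    w /= w.sum()

    k = int(keep_ratio * n)
    return rng.choice(kept, size=min(k, len(kept)), replace=False, p=w)
```

Discarding the top ranks before sampling is what separates this from plain Gaussian-weighted sampling: it removes the plain, low-gradient samples that drive early overfitting under Top-K pruning.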

The selected dataset is then used to train the target T2I model, achieving comparable or superior performance with significantly fewer training samples—often as little as 50% of the original corpus—while accelerating convergence and improving visual fidelity.

Experiment

  • Alchemist data selection: 50% subset matched full dataset performance on MJHQ-30K and GenEval benchmarks, surpassing random sampling
  • 20% Alchemist-selected data matched 50% random data performance, demonstrating significant data efficiency gains
  • Achieved 2.33× faster training at 20% retention and 5× faster at 50% retention while matching random sampling results
  • Consistently outperformed baselines across STAR (from-scratch) and FLUX-mini (LoRA fine-tuning) models
  • Generalized to HPDv3-2M and Flux-reason-6M datasets, surpassing random selection at 20% and 50% retention rates

The authors use a Shift-Gsample pruning strategy with a Group-MLP to select informative data, achieving the lowest FID and highest CLIP-Score among compared methods on 6M image-text pairs. Results show that incorporating group-level information further improves performance over sample-level selection alone.

The authors use Alchemist to select subsets of HPDv3-2M and Flux-reason-6M datasets, achieving lower FID and higher CLIP-Score than random sampling at both 20% and 50% retention. Results show that even with half the data, Alchemist-selected subsets outperform randomly sampled ones, confirming its effectiveness across diverse data domains.

The authors use Alchemist to select a 50% subset of the LAION dataset, achieving better FID and CLIP-Score than training on the full dataset while matching its training time. Results show that even a smaller 20% subset (Ours-small) trained in less than half the time still outperforms several heuristic-based selection methods on GenEval. Alchemist’s selected data consistently improves efficiency and performance compared to random sampling and other image quality metrics.

The authors use Alchemist to select training data for STAR and FLUX-mini models, showing consistent performance gains over random sampling across model scales and data sizes. Results show that using 6M Alchemist-selected images improves FID and CLIP-Score compared to both smaller and larger random subsets, and similar gains hold for FLUX-mini with 3B parameters. The method demonstrates scalability, as larger models and different architectures benefit from the same selected data without additional rater training.

