
On the Non-Decoupling of Supervised Fine-Tuning and Reinforcement Learning in Post-Training

Xueyan Niu, Bo Bai, Wei Han, Weixi Zhang

Abstract

Post-training of large language models typically alternates between supervised fine-tuning (SFT) and reinforcement learning (RL). The two methods pursue distinct objectives: SFT minimizes the cross-entropy loss between model outputs and expert responses, whereas RL maximizes reward signals derived from human preferences or rule-based verifiers. Modern reasoning models have widely adopted this practice of alternating SFT and RL. However, no theoretical justification currently exists for whether the two stages can be decoupled. We prove that such decoupling is impossible, regardless of the order: (1) in the SFT-then-RL case, RL increases the SFT loss under the assumption of SFT optimality; (2) in the RL-then-SFT case, SFT reduces the reward attained by RL. Experiments on Qwen3-0.6B confirm the predicted degradation, validating that SFT and RL cannot be separated during post-training without losing previously attained performance.

One-sentence Summary

The authors, from Huawei's Central Research Institute, prove that supervised fine-tuning (SFT) and reinforcement learning (RL) in the post-training of large language models cannot be decoupled without performance degradation: SFT after RL undermines the RL reward, and RL after SFT worsens the SFT loss, a fundamental trade-off in alternating training pipelines confirmed by experiments on Qwen3-0.6B.

Key Contributions

  • Post-training of large language models typically alternates supervised fine-tuning (SFT) and reinforcement learning (RL), but this work proves theoretically that these stages cannot be decoupled: SFT minimizes cross-entropy loss on expert responses, while RL maximizes reward signals, leading to conflicting objectives that prevent independent optimization.

  • Theoretical analysis shows that in the SFT-then-RL pipeline, RL training increases the SFT loss even when SFT is optimal, and in the RL-then-SFT pipeline, subsequent SFT reduces the reward achieved by the previously optimized RL model, demonstrating a fundamental incompatibility between the two stages.

  • Experiments on Qwen3-0.6B confirm both theoretical predictions: RL degrades SFT performance (increased cross-entropy loss), and SFT following RL leads to reward degradation, validating that SFT and RL must be treated as an integrated optimization process rather than separate steps.

Introduction

The authors investigate the interplay between supervised fine-tuning (SFT) and reinforcement learning (RL) in the post-training of large language models, a common practice in modern reasoning models such as DeepSeek-R1 and Qwen3. While SFT aligns model outputs with expert responses by minimizing cross-entropy loss, RL optimizes for human preferences or rule-based rewards, and the two objectives often conflict. Prior work has reported inconsistent empirical results: some studies observe performance gains from alternating SFT and RL, while others report catastrophic forgetting or limited synergy. The key limitation is the lack of theoretical understanding of whether these stages can be decoupled without performance degradation. The authors prove that decoupling is fundamentally impossible: performing RL after SFT increases the SFT loss, and performing SFT after RL reduces the reward achieved. Experiments on Qwen3-0.6B validate these findings, showing that SFT and RL must be treated jointly to preserve prior performance and highlighting a critical constraint in current LLM post-training pipelines.

Method

The authors investigate the interaction between supervised fine-tuning (SFT) and reinforcement learning (RL) in post-training pipelines for language models, focusing on the non-decoupling of these stages. The overall framework begins with a pretrained base model, which undergoes either SFT or RL as a post-training step, followed by the other, before reaching test-time computation. The two primary sequential strategies are SFT-then-RL and RL-then-SFT, which are analyzed to determine if the stages can be treated as independent optimizations.

In the SFT stage, the pretrained model $p_{\theta}$ is adapted to task-specific knowledge using labeled data $\mathcal{D}_{\text{SFT}}$. The objective is to minimize the negative log-likelihood, which is equivalent to the cross-entropy loss for next-token prediction:

$$\mathcal{L}_{\text{SFT}}(p_{\theta}) = -\sum_{(\pmb{x}, \pmb{y}) \in \mathcal{D}_{\text{SFT}}} \sum_{j=1}^{|\pmb{y}|} \log p_{\theta}(\pmb{y}_j \mid \pmb{x}, \pmb{y}_{<j}).$$

The resulting model $p_{\theta_{\text{SFT}}}$ is optimized to generate outputs $\pmb{y}$ given a prompt $\pmb{x}$, and this process is effective for in-distribution tasks.
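As a concrete illustration of this objective, the sketch below computes the summed token-level cross-entropy over response tokens only. It is a minimal PyTorch sketch under assumed conventions (a causal LM returning `.logits`, labels with prompt positions masked to -100), not the authors' implementation.

```python
# Minimal sketch of the SFT objective: summed negative log-likelihood (token-level
# cross-entropy) of expert responses given their prompts. Assumes a causal LM whose
# forward pass returns `.logits`, and batches in which `labels` equals `input_ids`
# with prompt positions masked to -100. Not the authors' implementation.
import torch
import torch.nn.functional as F

def sft_loss(model, batch):
    logits = model(batch["input_ids"]).logits      # shape (B, T, V)
    # Shift so that token j is predicted from tokens < j, matching the sum over y_j.
    shift_logits = logits[:, :-1, :]
    shift_labels = batch["labels"][:, 1:]
    return F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        shift_labels.reshape(-1),
        ignore_index=-100,                         # prompt positions do not contribute
        reduction="sum",                           # matches the summed form of L_SFT
    )
```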

The RL stage, typically used to align models with human preferences, treats the language model as a policy $p_{\theta}$ and aims to maximize the expected reward $r_G(\pmb{x}, \pmb{y})$ over the output distribution. When the ground-truth reward is not available, a proxy reward model $r(\cdot, \cdot)$ is trained on preference data $\mathcal{D}_{\text{RL}}$, which consists of prompts paired with positive and negative responses. The policy is updated with a policy-gradient method such as PPO, which maximizes the expected reward while regularizing against drift from a reference model $\pi_{\text{ref}}$:

$$\mathcal{I}_{\text{RL}}(\theta) = \mathbb{E}_{\pmb{x} \sim p_{\mathcal{D}_{\text{RL}}},\, \pmb{y} \sim p_{\theta}(\cdot \mid \pmb{x})}\left[ r(\pmb{x}, \pmb{y}) \right] - \beta\, \mathbb{E}_{\pmb{x} \sim p_{\mathcal{D}_{\text{RL}}}}\left[ D_{\text{KL}}\big( p_{\theta}(\cdot \mid \pmb{x}) \,\Vert\, \pi_{\text{ref}}(\cdot \mid \pmb{x}) \big) \right].$$

The closed-form solution for the updated policy is

$$p_{\theta_{\text{RL}}^{(2)}}(\pmb{y} \mid \pmb{x}) = \frac{1}{Z_{\beta}(\pmb{x})}\, \pi_{\text{ref}}(\pmb{y} \mid \pmb{x}) \exp\!\big( r(\pmb{x}, \pmb{y}) / \beta \big),$$

where $Z_{\beta}(\pmb{x})$ is the normalizing partition function.
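The KL-regularized objective and its reward-tilted maximizer can be checked numerically on a toy discrete example. In the sketch below, the reference distribution, reward values, and $\beta$ are arbitrary illustrations, not values from the paper.

```python
# Toy numerical check of the KL-regularized RL objective and its closed-form
# maximizer p*(y|x) ∝ pi_ref(y|x) * exp(r(x,y)/beta), on a single prompt with a
# four-response "vocabulary". All values are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
beta = 0.1
pi_ref = np.array([0.4, 0.3, 0.2, 0.1])     # reference policy over 4 responses
reward = np.array([1.0, 0.2, 0.5, -0.3])    # proxy reward r(x, y)

def objective(p):
    """Expected reward minus beta * KL(p || pi_ref)."""
    return np.dot(p, reward) - beta * np.sum(p * np.log(p / pi_ref))

# Closed-form maximizer: reward-tilted reference, normalized by Z_beta(x).
p_star = pi_ref * np.exp(reward / beta)
p_star /= p_star.sum()

# Any other policy achieves a lower objective value.
for _ in range(1000):
    p = rng.dirichlet(np.ones(4))
    assert objective(p) <= objective(p_star) + 1e-9

print("optimal tilted policy:", np.round(p_star, 3))
```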

The analysis reveals that the two stages are fundamentally coupled. In the SFT-then-RL pipeline, even if the SFT stage has converged, the subsequent RL phase inevitably degrades the SFT loss: the RL update, which maximizes reward, shifts the model's output distribution away from the SFT-optimized distribution, producing a non-trivial increase in the SFT loss. Conversely, in the RL-then-SFT pipeline, the SFT stage, which refits the model to the SFT data, creates a persistent performance gap in the reward achieved by the RL stage: any SFT update applied to an RL policy can increase the expected reward by at most a constant controlled by the distribution shift, and under stronger assumptions it incurs a measurable reward deficit. The authors therefore conclude that SFT and RL cannot be decoupled and should be treated as a single joint optimization problem.
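The direction of this coupling can be made concrete in the same toy setting: starting from a distribution that matches the expert data exactly (the SFT optimum), a reward-tilted RL update raises the cross-entropy on that data whenever the reward disagrees with the expert distribution. The sketch below is an illustrative construction, not the paper's proof.

```python
# Toy illustration of the coupling: starting from a distribution that matches the
# expert (SFT) data exactly, the KL-regularized RL update tilts probability mass
# toward high-reward responses and thereby increases the SFT cross-entropy.
# Numbers are illustrative only.
import numpy as np

beta = 0.5
p_expert = np.array([0.7, 0.2, 0.1])   # empirical distribution of expert responses
p_sft = p_expert.copy()                # SFT optimum: cross-entropy is minimized here
reward = np.array([0.0, 1.0, 0.0])     # reward favors a response the experts rarely give

def cross_entropy(p_data, p_model):
    return -np.sum(p_data * np.log(p_model))

# RL update with the SFT model as reference: p_RL ∝ p_SFT * exp(r / beta).
p_rl = p_sft * np.exp(reward / beta)
p_rl /= p_rl.sum()

print("SFT loss at SFT optimum :", round(cross_entropy(p_expert, p_sft), 4))
print("SFT loss after RL update:", round(cross_entropy(p_expert, p_rl), 4))
# The second value is strictly larger whenever the reward is not constant
# on the support of the expert distribution.
```

By Gibbs' inequality, the cross-entropy against the expert distribution is minimized exactly when the model matches it, so any reward-induced shift away from the SFT optimum increases the SFT loss; the symmetric argument underlies why refitting the SFT data after RL must move mass away from high-reward responses.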

Experiment

  • SFT-then-RL experiment: Fine-tuning the Qwen3-0.6B model on a CoLA-style SFT dataset followed by GRPO-based RL leads to a sharp increase in cross-entropy loss, exceeding the base model's loss and validating Theorem 3.1 on non-decoupling (a minimal sketch of this loss measurement follows the list below).
  • RL-then-SFT experiment: Applying SFT after RL on the same dataset causes a significant drop in mean@1 reward from 0.385 (≈69.5% accuracy) to 0.343 (≈67.2% accuracy) under robust evaluation, confirming Theorem 4.1 and demonstrating performance degradation due to objective mismatch.
  • Both pipelines show performance deterioration in the second stage, empirically validating the inherent coupling between SFT and RL, with results consistent across both orders of training.
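As referenced in the first bullet, a minimal sketch of the cross-entropy measurement is shown below. The checkpoint paths and the held-out example are placeholders, and the averaging details are assumptions rather than the authors' evaluation code.

```python
# Sketch of the cross-entropy comparison in the SFT-then-RL check: score the same
# held-out SFT examples under the base, SFT, and SFT-then-RL checkpoints.
# Checkpoint paths and example data are placeholders, not the authors' artifacts.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def mean_nll(model, tokenizer, pairs):
    """Average per-token negative log-likelihood of responses given prompts."""
    model.eval()
    total_nll, total_tokens = 0.0, 0
    for prompt, response in pairs:
        prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
        full_ids = tokenizer(prompt + response, return_tensors="pt").input_ids
        labels = full_ids.clone()
        labels[:, :prompt_len] = -100                    # score only the response tokens
        with torch.no_grad():
            loss = model(full_ids, labels=labels).loss   # mean NLL over scored tokens
        n_scored = (labels != -100).sum().item()
        total_nll += loss.item() * n_scored
        total_tokens += n_scored
    return total_nll / total_tokens

# Placeholder held-out pair in the style of an acceptability-judgment (CoLA-like) task.
held_out = [("Is the following sentence grammatical? 'The cat sat on the mat.'\n", "Yes.")]

for name in ["Qwen/Qwen3-0.6B", "./ckpt-sft", "./ckpt-sft-then-rl"]:   # placeholder paths
    tok = AutoTokenizer.from_pretrained(name)
    lm = AutoModelForCausalLM.from_pretrained(name)
    print(name, round(mean_nll(lm, tok, held_out), 4))
```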
