iFSQ: Improving FSQ for Image Generation with One Line of Code
Abstract
Image generation is currently split between autoregressive (AR) models that operate on discrete tokens and diffusion models that operate in continuous latent spaces. This divide, which stems from the difference between VQ-VAE and VAE, hinders unified modeling and fair benchmarking. Finite Scalar Quantization (FSQ) could in principle bridge this gap, but vanilla FSQ has a critical flaw: its equal-interval quantization causes activation collapse. This mismatch forces a trade-off between reconstruction fidelity and information efficiency. In this work, we resolve the dilemma by simply replacing the activation function in the original FSQ with a distribution-matching mapping; we call the result iFSQ. This simple strategy takes only one line of code, yet guarantees both mathematically optimal bin utilization and reconstruction accuracy. Using iFSQ as a controlled benchmark, we obtain two key findings: (1) the optimal balance between discrete and continuous representations lies around 4 bits per dimension; (2) under identical reconstruction constraints, AR models converge rapidly early in training, while diffusion models reach a higher performance ceiling, suggesting that strict sequential ordering may cap generation quality. We further adapt Representation Alignment (REPA) to AR models, yielding LlamaGen-REPA. Code is available at https://github.com/Tencent-Hunyuan/iFSQ.
One-sentence Summary
Researchers from Peking University and Tencent Hunyuan propose iFSQ, a refined scalar quantization that fixes activation collapse via distribution-matching, enabling fair AR-diffusion benchmarking; they reveal 4 bits/dim as optimal and show diffusion models surpass AR in ceiling performance, extending insights to LlamaGen-REPA.
Key Contributions
- We introduce iFSQ, a one-line-code enhancement to Finite Scalar Quantization that replaces the tanh activation with a distribution-matching function, ensuring uniform bin utilization and optimal reconstruction fidelity by mapping Gaussian latents to a uniform prior.
- Using iFSQ as a unified tokenizer, we establish a controlled benchmark revealing that 4 bits per dimension strikes the optimal balance between discrete and continuous representations, and that diffusion models outperform autoregressive models in final quality despite slower convergence.
- We adapt Representation Alignment (REPA) to autoregressive image generation, creating LlamaGen-REPA with stronger regularization (λ=2.0), demonstrating improved performance while confirming the benefit of feature alignment across generative paradigms.
Introduction
The authors leverage Finite Scalar Quantization (FSQ) to unify autoregressive and diffusion image generation under a single tokenizer, addressing the long-standing fragmentation caused by VQ-VAEs and VAEs. Vanilla FSQ suffers from activation collapse because its equal-interval quantization mismatches the Gaussian-like distribution of neural activations, forcing a trade-off between reconstruction fidelity and bin utilization. Their main contribution is iFSQ, a one-line code modification replacing tanh with a distribution-matching activation, which enforces a uniform prior while preserving equal quantization intervals, achieving optimal fidelity and efficiency simultaneously. This enables fair benchmarking, revealing that 4 bits per dimension is the sweet spot and that diffusion models outperform AR models in peak quality despite slower convergence. They further extend this by adapting Representation Alignment to AR models, creating LlamaGen-REPA.

Method
The authors leverage a quantization-based framework for visual tokenization that bridges the gap between continuous and discrete representation paradigms in generative models. The core of this approach is built upon Finite Scalar Quantization (FSQ), which enables a discrete tokenization process without the need for an explicit, learnable codebook. This design allows for efficient and stable tokenization suitable for both autoregressive and diffusion-based generation tasks. The tokenizer architecture consists of an encoder that compresses input images $x \in \mathbb{R}^{H \times W \times 3}$ into a lower-resolution latent representation $z \in \mathbb{R}^{h \times w \times d}$, typically through an 8× or 16× downsampling process. The decoder then reconstructs the image from the latent space, forming a complete compression-decompression pipeline.
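As a shape-level sketch of this pipeline, the toy PyTorch module below assumes a 16× downsampling factor and d = 16 latent channels; both choices and the single-convolution encoder/decoder are illustrative stand-ins, not the paper's actual architecture.

```python
# Toy tokenizer showing only the tensor shapes of the encode/decode pipeline.
# A real tokenizer uses deep residual encoders/decoders; one strided conv pair
# is enough to illustrate the 16x spatial compression described above.
import torch
import torch.nn as nn

class ToyTokenizer(nn.Module):
    def __init__(self, d: int = 16, factor: int = 16):
        super().__init__()
        self.encoder = nn.Conv2d(3, d, kernel_size=factor, stride=factor)
        self.decoder = nn.ConvTranspose2d(d, 3, kernel_size=factor, stride=factor)

    def forward(self, x):                # x: (B, 3, H, W)
        z = self.encoder(x)              # z: (B, d, H/factor, W/factor)
        return self.decoder(z), z        # reconstruction (B, 3, H, W) and latent

x = torch.randn(1, 3, 256, 256)
x_rec, z = ToyTokenizer()(x)
print(z.shape, x_rec.shape)              # (1, 16, 16, 16) and (1, 3, 256, 256)
```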
For diffusion models, the quantized latent $z_{\text{quant}}$ is used directly as input to the diffusion process. The quantization is performed by first applying a bounding function $f:\mathbb{R}\to[-1,1]$, commonly the hyperbolic tangent, to constrain the latent values. The quantization resolution is defined by $L = 2K + 1$ levels per channel, where $K$ determines the number of quantization levels. The continuous latent $z$ is mapped to discrete integer indices $q \in \{0,\dots,L-1\}^d$ via element-wise rounding, as defined by the equation:
$$q_j = \mathrm{round}\!\left(\frac{L-1}{2}\cdot\bigl(f(z_j)+1\bigr)\right)$$

This operation maps the range $[-1,1]$ to the integer set $\{0,\dots,L-1\}$. The quantized indices are then mapped back to the continuous space for diffusion models using:

$$z_{\text{quant},j} = \left(q_j - \frac{L-1}{2}\right)\cdot\frac{2}{L-1}$$

This step introduces a lossy compression, where $z_{\text{quant}} \approx z$, preserving the structural properties of the continuous latent space while enabling discrete tokenization.
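To make these two mappings concrete, the sketch below implements the vanilla FSQ quantize/dequantize round trip in PyTorch; the function names and the choice of L = 16 levels are illustrative, not taken from the released implementation.

```python
# Vanilla FSQ round trip: continuous latent -> integer indices -> bin centers in [-1, 1].
import torch

def fsq_quantize(z: torch.Tensor, L: int) -> torch.Tensor:
    """q_j = round((L-1)/2 * (f(z_j) + 1)) with f = tanh, giving indices in {0, ..., L-1}."""
    f_z = torch.tanh(z)                               # bounding function f: R -> [-1, 1]
    return torch.round((L - 1) / 2 * (f_z + 1)).long()

def fsq_dequantize(q: torch.Tensor, L: int) -> torch.Tensor:
    """z_quant,j = (q_j - (L-1)/2) * 2/(L-1), mapping indices back to [-1, 1]."""
    return (q.float() - (L - 1) / 2) * (2 / (L - 1))

# Round trip on a 16x16 latent with d = 16 channels and L = 16 levels.
z = torch.randn(1, 16, 16, 16)
q = fsq_quantize(z, L=16)
z_quant = fsq_dequantize(q, L=16)
# Lossy but bounded: each value moves by at most half a bin width, 1/(L-1).
assert (z_quant - torch.tanh(z)).abs().max() <= 1 / 15 + 1e-6
```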
For autoregressive models, the quantized indices $q$ are converted into a single scalar token index $I$ through a bijective base-$L$ expansion:

$$I = \sum_{j=1}^{d} q_j \cdot L^{\,d-j}$$

This transformation ensures a unique mapping from the $d$-dimensional quantized vector to a scalar index, enabling the use of autoregressive models that predict tokens sequentially. The implicit codebook size is $|\mathcal{C}| = L^d$, which grows exponentially with the dimensionality $d$, but avoids the memory and stability issues associated with learnable codebooks in VQ-VAE.
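The base-$L$ expansion and its inverse take only a few lines; the helpers below are hypothetical illustrations of the mapping, with d = 4 dimensions and L = 16 levels chosen solely for the example (implicit codebook size 16^4 = 65,536).

```python
# Pack a d-dimensional FSQ index vector into one AR token via base-L expansion, and invert it.
import torch

def pack_indices(q: torch.Tensor, L: int) -> torch.Tensor:
    """I = sum_j q_j * L^(d-j): map (..., d) index vectors to scalar token indices."""
    d = q.shape[-1]
    weights = L ** torch.arange(d - 1, -1, -1, device=q.device)  # [L^(d-1), ..., L^0]
    return (q * weights).sum(dim=-1)

def unpack_indices(I: torch.Tensor, L: int, d: int) -> torch.Tensor:
    """Invert the base-L expansion, recovering each q_j in {0, ..., L-1}."""
    digits = []
    for _ in range(d):
        digits.append(I % L)   # least-significant digit first
        I = I // L
    return torch.stack(digits[::-1], dim=-1)

q = torch.randint(0, 16, (8, 4))          # batch of 8 index vectors, d = 4, L = 16
tokens = pack_indices(q, L=16)            # scalar tokens in {0, ..., 16^4 - 1}
assert torch.equal(unpack_indices(tokens, L=16, d=4), q)
```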
The authors introduce a modification to the standard FSQ pipeline by replacing the tanh activation with a scaled sigmoid function to achieve a more uniform distribution of quantized values. Specifically, the bounding function is replaced as follows:
$$z \leftarrow 2\cdot\mathrm{sigmoid}(1.6\cdot z) - 1$$

This change improves the uniformity of the transformed distribution, which is critical for maintaining the quality of the quantized representation. The quantization process is further refined using a straight-through estimator to handle gradient flow during training. The rounded latent $z_{\text{rounded}}$ is computed as:
$$z_{\text{rounded}} = \mathrm{round}(z_{\text{scaled}})$$

where $z_{\text{scaled}} = z \cdot \text{halfWidth}$ and $\text{halfWidth} = (L-1)/2$. The estimator is then applied as:
$$\hat{z} = z_{\text{rounded}} - \mathrm{sg}(z_{\text{scaled}}) + z_{\text{scaled}}$$

where $\mathrm{sg}(\cdot)$ denotes the stop-gradient (detach) operation. This allows gradients to pass through the rounding operation during backpropagation. Finally, for diffusion models, the quantized latent is normalized by dividing by the half-width:
$$z_q = \hat{z} / \text{halfWidth}$$

This normalization ensures that the quantized latent remains within the $[-1,1]$ range, consistent with the input distribution.
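Putting these steps together, a minimal PyTorch sketch of the iFSQ quantization step could look as follows; variable names are illustrative, and the official implementation lives in the repository linked above. Only the first line inside the function differs from vanilla FSQ, which would use torch.tanh(z).

```python
# Minimal sketch of the iFSQ quantization step (illustrative, not the released code).
import torch

def ifsq_quantize(z: torch.Tensor, L: int, alpha: float = 1.6) -> torch.Tensor:
    """Quantize a continuous latent to L levels per channel, returning values in [-1, 1]."""
    z = 2 * torch.sigmoid(alpha * z) - 1               # the one-line change (vanilla FSQ: torch.tanh(z))
    half_width = (L - 1) / 2
    z_scaled = z * half_width
    z_rounded = torch.round(z_scaled)
    z_hat = z_rounded - z_scaled.detach() + z_scaled   # straight-through estimator
    return z_hat / half_width                          # normalize back to [-1, 1] for diffusion

# Example with L = 17 levels per channel (L = 2K + 1 with K = 8, roughly 4 bits per dimension).
z = torch.randn(2, 16, 16, 16, requires_grad=True)
z_q = ifsq_quantize(z, L=17)
z_q.sum().backward()                                   # gradients reach z despite the rounding
assert z.grad is not None and z_q.min() >= -1 and z_q.max() <= 1
```

Note that α = 2.0 recovers the original tanh bounding exactly, since tanh(z) = 2·sigmoid(2z) - 1, which is why the experiments below can report vanilla FSQ as the α = 2.0 setting.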
Experiment
- Optimized iFSQ via α=1.6 in sigmoid-based activation achieves near-uniform output distribution, minimizing RMSE and KS metrics vs. tanh (α=2.0), improving reconstruction fidelity and entropy utilization.
- On ImageNet, iFSQ (α=1.6) outperforms FSQ in PSNR, SSIM, and LPIPS; trends replicate on COCO, confirming scalability.
- For diffusion generation (DiT), iFSQ at 4 bits achieves gFID 12.76 (vs. AE’s 13.78) at 4× the compression ratio (96× vs. 24×); performance plateaus beyond 4 bits.
- For autoregressive generation (LlamaGen), iFSQ outperforms VQ at same latent dimension and lower bit rate; 4-bit iFSQ matches AE, with performance peaking at 4 bits.
- iFSQ enables fair comparison of diffusion vs. AR models: diffusion converges more slowly but surpasses AR in FID at higher compute; AR models appear limited by their strict sequential-ordering constraint.
- iFSQ scales favorably: at 2 bits, double latent dim surpasses AE; at 7–8 bits, matches or exceeds AE; outperforms VQ across quantization levels and dimensions.
- REPA alignment at 1/3 network depth (e.g., layer 8/24) optimizes semantic acquisition in LlamaGen; λ=2.0 yields best FID, differing from DiT’s optimal λ.
- Compression ratio scaling (fig. 10) shows linear performance trend with optimal knee at ~48× compression (4 bits); VQ aligns closely with iFSQ trend, validating its hybrid discrete-continuous nature.
The authors compare various tokenizers, including VQ and AE variants, and find that AE-f8 achieves the highest PSNR and SSIM while also yielding the lowest LPIPS and rFID scores, indicating superior reconstruction quality. Among the VQ-based tokenizers, VQ-f8 outperforms VQ-f16 on all metrics, suggesting that a smaller downsampling factor (and thus a higher-resolution latent grid) improves reconstruction quality.

Results show that iFSQ at 2 bits achieves the lowest gFID of 26.02, outperforming VQ-VAE and FSQ under the same settings. Performance degrades as the bit rate increases beyond 2 bits, with iFSQ at 4 bits reaching a gFID of 28.07, indicating that fewer bits per dimension are optimal in this configuration.

Results show that iFSQ with α = 1.6 achieves the best performance across PSNR, SSIM, and LPIPS, outperforming the original FSQ (α = 2.0), while also minimizing RMSE and KS statistics, indicating a near-uniform distribution. The optimal setting at α = 1.6 balances fidelity and distributional alignment, leading to superior image reconstruction quality compared to both continuous and discrete baselines.

The authors use iFSQ to optimize the distribution of latent features in image generation models, showing that setting the activation parameter α to 1.6 achieves a near-uniform distribution, which improves reconstruction quality and generation performance. Results show that iFSQ at 4 bits matches or exceeds the performance of continuous AE and VQ-VAE across metrics like PSNR, SSIM, and FID, while maintaining a higher compression ratio and better training efficiency.

Results show that iFSQ with 4 bits achieves the best balance between reconstruction and generation performance, outperforming both AE and VQ-VAE across PSNR, SSIM, and LPIPS metrics while maintaining a significantly higher compression rate. The optimal performance at 4 bits aligns with the theoretical analysis, where the distribution of iFSQ's activations most closely approximates a uniform distribution, maximizing information entropy and minimizing activation collapse.
