IndexCache: Accelerating Sparse Attention via Cross-Layer Index Reuse
Yushi Bai, Qian Dong, Ting Jiang, Xin Lv, Zhengxiao Du, Aohan Zeng, Jie Tang, Juanzi Li
Abstract
Long-context agentic workflows have become a defining use case for large language models, making attention efficiency critical for both inference speed and serving cost. Sparse attention addresses this challenge effectively, and DeepSeek Sparse Attention (DSA) is a representative production-grade solution: a lightweight, ultra-fast indexer selects the top-k most relevant tokens per query, reducing core attention complexity from O(L²) to O(Lk). However, the indexer itself retains O(L²) complexity and must run independently at every layer, even though the resulting top-k selections are highly similar across consecutive layers. We present IndexCache, which exploits this cross-layer redundancy by partitioning layers into a small set of "Full" layers that run their own indexers and a majority of "Shared" layers that simply reuse the top-k indices from the nearest Full layer. We propose two complementary approaches to determine and optimize this configuration. Training-free IndexCache applies a greedy search algorithm that selects which layers retain an indexer by directly minimizing language-modeling loss on a calibration set, with no weight updates. Training-aware IndexCache introduces a multi-layer distillation loss that trains each retained indexer against the averaged attention distributions of all the layers it serves, allowing even simple interleaved patterns to match full-indexer accuracy.
Experimental results on a 30B-parameter DSA model show that IndexCache can eliminate 75% of indexer computations with negligible quality degradation, achieving up to a 1.82x prefill speedup and a 1.48x decode speedup over standard DSA. These positive results are further confirmed by our preliminary experiments on the production-scale GLM-5 model (Figure 1).
One-sentence Summary
Researchers from Tsinghua University and Z.ai introduce IndexCache, a technique that optimizes DeepSeek Sparse Attention by exploiting cross-layer redundancy to share token indices. This approach eliminates up to 75% of indexer computations in long-context workflows, delivering significant inference speedups without requiring model retraining or degrading output quality.
Key Contributions
- Long-context agentic workflows rely on DeepSeek Sparse Attention to reduce core attention complexity, yet the required lightning indexer still incurs quadratic O(L²) cost at every layer, creating a significant bottleneck for inference speed and serving costs.
- IndexCache addresses this redundancy by partitioning layers into Full layers that compute indices and Shared layers that reuse the nearest Full layer's top-k selections, utilizing either a training-free greedy search or a training-aware multi-layer distillation loss to optimize the configuration.
- Experiments on a 30B DSA model demonstrate that IndexCache removes 75% of indexer computations with negligible quality degradation, achieving up to 1.82x prefill and 1.48x decode speedups while maintaining performance across nine long-context and reasoning benchmarks.
Introduction
Large language models face a critical bottleneck in long-context inference due to the quadratic complexity of self-attention, which sparse mechanisms like DeepSeek Sparse Attention (DSA) address by selecting only the most relevant tokens. While DSA reduces core attention costs, its reliance on a lightweight indexer at every layer still incurs quadratic overhead that dominates latency during the prefill stage. The authors leverage the observation that token selection patterns remain highly stable across consecutive layers to introduce IndexCache, a method that eliminates up to 75% of indexer computations by reusing indices from a small subset of retained layers. They propose both a training-free approach using greedy layer selection and a training-aware strategy with multi-layer distillation to maintain model quality while achieving significant speedups in long-context scenarios.

Method
The authors leverage the observation that sparse attention indexers exhibit significant redundancy across consecutive layers to reduce computational overhead. In standard DeepSeek Sparse Attention, a lightweight lightning indexer scores all preceding tokens at every layer to select the top-k positions. While this reduces core attention complexity from O(L²) to O(Lk), the indexer itself retains O(L²) complexity. IndexCache addresses this by partitioning the N transformer layers into two categories: Full layers and Shared layers. Full layers retain their indexers to compute fresh top-k sets, while Shared layers skip the indexer forward pass and reuse the index set from the nearest preceding Full layer. This design allows the system to eliminate a large fraction of the total indexer cost with minimal architectural changes.
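The Full/Shared partition can be sketched as follows. This is a minimal illustration of the control flow, not the paper's implementation: the dot-product indexer and the function names are assumptions, and real DSA would run sparse attention over the selected positions.

```python
import numpy as np

def indexer_scores(q, keys):
    # Hypothetical lightweight indexer: dot-product scores over all
    # preceding tokens. This is the O(L^2) step IndexCache amortizes.
    return keys @ q

def forward_with_index_cache(layer_types, queries, keys_per_layer, k):
    """Sketch of IndexCache's layer partition.

    layer_types[i] is 'full' or 'shared'. Full layers run their own
    indexer; Shared layers skip it and reuse the top-k indices from
    the nearest preceding Full layer.
    """
    cached_topk = None
    selected = []
    for layer, (q, keys) in enumerate(zip(queries, keys_per_layer)):
        if layer_types[layer] == 'full' or cached_topk is None:
            scores = indexer_scores(q, keys)
            # top-k token positions by indexer score
            cached_topk = np.argsort(scores)[-k:]
        # Shared layers reuse cached_topk without recomputing scores.
        selected.append(cached_topk)
        # ...core sparse attention would attend only over keys[cached_topk]
    return selected
```

Note the guard `cached_topk is None`: the first layer must effectively be Full, since a Shared layer has no preceding index set to reuse.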
To determine the optimal configuration of Full and Shared layers without retraining, the authors propose a training-free greedy search algorithm. The process begins with all layers designated as Full. The algorithm iteratively evaluates the language modeling loss on a calibration set for each candidate layer conversion. At each step, the layer whose conversion to Shared status results in the lowest loss increase is selected. This data-driven approach identifies which indexers are expendable based on their intrinsic importance to the model's performance rather than relying on uniform interleaving patterns.
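The greedy procedure above can be sketched in a few lines. The `eval_loss` callback is hypothetical: it stands in for running the model on the calibration set with a given subset of layers converted to Shared, and the caller is assumed to pass only convertible layers (the first layer stays Full).

```python
def greedy_share_search(layers, num_shared, eval_loss):
    """Training-free greedy search (sketch).

    Starting from all-Full, repeatedly convert to Shared the layer
    whose conversion increases the calibration language-modeling
    loss the least. eval_loss(shared_set) is a hypothetical callback
    returning the calibration LM loss with those layers Shared.
    """
    shared = set()
    for _ in range(num_shared):
        candidates = [l for l in layers if l not in shared]
        # Pick the candidate whose conversion yields the lowest loss.
        best = min(candidates, key=lambda l: eval_loss(shared | {l}))
        shared.add(best)
    return shared
```

Each round costs one calibration evaluation per remaining candidate, so the search is data-driven but cheap relative to retraining.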
For models trained from scratch or via continued pre-training, a training-aware approach further optimizes the indexer parameters for cross-layer sharing. Standard training distills the indexer against the attention distribution of its own layer. IndexCache generalizes this by introducing a multi-layer distillation loss. This objective encourages the retained indexer to predict a top-k set that is jointly useful for itself and all subsequent Shared layers it serves. The loss function is defined as:
$$\mathcal{L}^{I}_{\text{multi}} \;=\; \frac{1}{m+1}\sum_{j=0}^{m}\sum_{t} D_{\mathrm{KL}}\!\left(p_t^{(\ell+j)} \,\big\|\, q_t^{(\ell)}\right),$$

where $p_t^{(\ell+j)}$ represents the aggregated attention distribution at layer $\ell+j$ and $q_t^{(\ell)}$ is the indexer's output distribution at layer $\ell$. Theoretical analysis shows that this multi-layer loss produces gradients equivalent to distilling against the averaged attention distribution of all served layers. This ensures the indexer learns a consensus top-k selection that covers important tokens across the entire group of layers.
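A minimal numerical sketch of this objective (for a single token position, with hypothetical function names) makes the averaging argument concrete: because the cross-entropy term of each KL is linear in the teacher distribution, summing KLs over the served layers and distilling against their mean give the same gradient with respect to the indexer.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    # KL(p || q) for discrete distributions over token positions.
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)))

def multi_layer_distill_loss(attn_dists, indexer_dist):
    """Sketch of the multi-layer distillation objective.

    attn_dists: the (m+1) attention distributions p^(l+j), one per
    layer in the group (the Full layer plus the Shared layers it serves).
    indexer_dist: the retained indexer's output distribution q^(l).
    """
    return sum(kl(p, indexer_dist) for p in attn_dists) / len(attn_dists)
```

Only the cross-entropy part of each KL depends on `indexer_dist`, so averaging the per-layer losses matches distilling against the averaged teacher distribution, as the gradient-equivalence analysis states.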
Experimental evaluations on a 30B-parameter model demonstrate the efficiency gains achieved by removing indexer computations. The method eliminates up to 75% of indexer cost while maintaining comparable quality, delivering up to 1.82x prefill and 1.48x decode speedups without degrading model capabilities.
Experiment
- End-to-end inference experiments demonstrate that IndexCache significantly accelerates both prefill latency and decode throughput for long-context scenarios, with speedups increasing as context length grows, while maintaining comparable performance on general reasoning tasks.
- Training-free IndexCache evaluations reveal that greedy-searched sharing patterns are essential for preserving long-context accuracy at aggressive retention ratios, whereas uniform interleaving causes substantial degradation; however, general reasoning capabilities remain robust across most configurations.
- Training-aware IndexCache results show that retraining the model to adapt to index sharing eliminates the sensitivity to specific patterns, allowing simple uniform interleaving to match full-indexer performance and confirming the effectiveness of cross-layer distillation.
- Scaling experiments on a 744B-parameter model validate that the trends observed in smaller models hold true, with searched patterns providing stable quality recovery even at high sparsity levels.
- Analysis of cross-layer index overlap confirms high redundancy between adjacent layers but reveals that local similarity metrics fail to identify optimal sharing patterns, necessitating end-to-end loss-based search to prevent cascading errors in deep networks.