Flash-KMeans: Fast and Memory-Efficient Exact K-Means
Abstract
The k-means algorithm has historically been positioned primarily as an offline processing primitive, typically used to organize datasets or as a preprocessing step for embeddings, rather than as a first-class component of online systems. In this work, we revisit this classical algorithm through the lens of modern AI system design and enable k-means as an online primitive. We show that existing GPU implementations of k-means remain fundamentally limited by low-level system constraints rather than by theoretical algorithmic complexity. Specifically, the assignment phase suffers from a severe I/O bottleneck caused by the massive explicit materialization of the N×K distance matrix in high-bandwidth memory (HBM), while the centroid update phase is significantly degraded by hardware-level atomic write conflicts arising from irregular, scatter-style token aggregations. To close this performance gap, we present Flash-KMeans, an I/O-aware and contention-free k-means implementation for modern GPU workloads. Flash-KMeans introduces two core kernel-level innovations: (1) FlashAssign, which fuses distance computation with an online argmin operation, entirely bypassing intermediate materialization in memory; and (2) Sort-Inverse Update, which explicitly constructs an inverse mapping to transform high-contention atomic scatter operations into bandwidth-optimized, segment-wise localized reductions. In addition, we integrate algorithm-system co-designs, including chunked stream overlap and cache-aware compilation heuristics, to ensure practical deployability.
Comprehensive evaluations on NVIDIA H200 GPUs show that Flash-KMeans achieves an end-to-end speedup of up to 17.9× over the best baseline methods, outperforming industry-standard libraries such as cuML and FAISS by factors of 33 and over 200, respectively.
One-sentence Summary
Researchers from UC Berkeley, MIT, and UT Austin propose Flash-KMeans, an IO-aware GPU implementation that eliminates distance-matrix materialization and atomic contention via FlashAssign and Sort-Inverse Update, delivering up to a 17.9× speedup for scalable online clustering in modern AI workloads.
Key Contributions
- Existing GPU implementations of k-means are hindered by severe IO bottlenecks from materializing massive distance matrices and hardware-level atomic contention during centroid updates, preventing their use as efficient online primitives.
- Flash-KMeans addresses these issues with two core kernel innovations: FlashAssign, which fuses distance computation with online argmin to bypass intermediate memory storage, and sort-inverse update, which transforms high-contention atomic scatters into localized reductions.
- Evaluations on NVIDIA H200 GPUs show up to a 17.9x end-to-end speedup over best baselines and over 200x improvement compared to FAISS, while enabling seamless out-of-core execution on up to one billion points.
Introduction
K-means clustering is evolving from an offline data processing tool into a critical online primitive for modern AI systems, including vector quantization, sparse routing in large language models, and generative video pipelines. Despite this shift, existing GPU implementations fail to meet latency requirements because they remain bottlenecked by hardware constraints rather than algorithmic complexity. Prior approaches waste memory bandwidth through the explicit materialization of massive distance matrices and suffer hardware-level serialization caused by atomic write contention during centroid updates. To address these issues, the authors introduce Flash-KMeans, an exact and IO-aware implementation that fuses distance computation with online argmin to bypass intermediate memory storage and replaces irregular atomic scatters with a sort-inverse update strategy for efficient aggregation. This system-level redesign eliminates key bottlenecks, delivering up to a 17.9× end-to-end speedup over baselines while enabling scalable execution on datasets exceeding one billion points.
Method
The authors introduce flash-kmeans, a highly optimized implementation designed to overcome the severe memory and synchronization bottlenecks inherent in standard GPU-based k-means clustering. The methodology focuses on restructuring the execution dataflow to eliminate IO overheads and resolve write-side contention without altering the underlying mathematical objective.
FlashAssign: Materialization-Free Assignment
To address the memory wall caused by materializing the massive distance matrix D∈RN×K, the authors propose FlashAssign. This module fuses the distance computation and row-wise reduction into a single streaming procedure. Instead of writing the full distance matrix to High Bandwidth Memory (HBM) and reading it back, FlashAssign maintains running states for the minimum distance and corresponding centroid index directly in registers.
The process utilizes an online argmin update. For each data point, the kernel scans centroids in tiles. It computes local distances on-chip, identifies the local minimum within the tile, and compares it with the running minimum to update the global assignment. This approach ensures that the N×K distance matrix is never explicitly constructed in memory.

As illustrated in the framework diagram above, the algorithm loops over centroid tiles. For each point Xi, it computes distances against a centroid block Cj, finds the local minimum, and updates the global minimum index. By employing two-dimensional tiling and asynchronous prefetching, the kernel hides memory latency while ensuring that the IO complexity is reduced from O(NK) to O(Nd+Kd).
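The tiled online-argmin idea can be illustrated with a minimal NumPy sketch. This is not the authors' kernel (the real FlashAssign fuses everything into a single GPU kernel that keeps the running minimum in registers); it only shows the dataflow: distances are computed one centroid tile at a time, so only an N×tile block ever exists, never the full N×K matrix. The function name and tile size are illustrative choices, not part of the paper's API.

```python
import numpy as np

def flashassign_sketch(X, C, tile=128):
    """Tiled online-argmin assignment (CPU sketch of the FlashAssign dataflow).

    X: (N, d) data points; C: (K, d) centroids.
    Returns the index of the nearest centroid for each point without
    ever materializing the full N x K distance matrix.
    """
    N, _ = X.shape
    K = C.shape[0]
    best_dist = np.full(N, np.inf)           # running minimum distance
    best_idx = np.zeros(N, dtype=np.int64)   # running argmin index
    x_sq = (X ** 2).sum(axis=1)              # ||x||^2, reused across tiles
    for j0 in range(0, K, tile):             # scan centroids tile by tile
        Cj = C[j0:j0 + tile]
        # squared distances for this tile only: shape (N, tile), not (N, K)
        d_tile = x_sq[:, None] - 2.0 * X @ Cj.T + (Cj ** 2).sum(axis=1)[None, :]
        local_idx = d_tile.argmin(axis=1)
        local_min = d_tile[np.arange(N), local_idx]
        improved = local_min < best_dist     # online argmin update
        best_dist[improved] = local_min[improved]
        best_idx[improved] = local_idx[improved] + j0
    return best_idx
```

On a GPU, each tile's distance block lives in shared memory or registers, which is what reduces the IO complexity from O(NK) to O(Nd + Kd).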
Sort-Inverse Update: Low-Contention Aggregation
In the centroid update stage, standard implementations suffer from severe atomic write contention because multiple threads frequently attempt to update the same centroid simultaneously using scatter-style atomic additions. To resolve this, the authors propose the sort-inverse update strategy.
The core idea is to transform the token-to-cluster update into a cluster-to-token gather operation. The system first applies an argsort operation to the assignment vector a to obtain a permutation index. This reorders the tokens such that identical cluster IDs are grouped into contiguous segments.

The figure below contrasts the standard scatter-style update with the proposed sort-inverse approach. In the standard method (a), tokens are scattered irregularly, causing conflicts across multiple blocks. In the sort-inverse method (b), tokens are sorted by cluster ID, creating contiguous segments. This allows each Cooperative Thread Array (CTA) to process a chunk of the sorted sequence, gathering features from the original matrix and accumulating partial sums entirely in fast on-chip memory. Global atomic operations are only issued at segment boundaries.
This reorganization drastically reduces the number of atomic operations from O(Nd) to O((K+⌈N/BN⌉)d). As shown in the execution timeline (c), this eliminates the frequent stalls caused by atomic lock contention, enabling contention-free memory writes and significantly accelerating the reduction phase.
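A CPU sketch, assuming NumPy, can make the sort-inverse transformation concrete. The real kernel performs segmented on-chip reductions per CTA; this sketch only mirrors the logical steps: argsort the assignment vector, find segment boundaries where the cluster ID changes, then gather and reduce each contiguous segment with a single write per cluster instead of one atomic add per token. The helper name is illustrative.

```python
import numpy as np

def sort_inverse_update_sketch(X, assign, K):
    """Gather-style centroid update (sketch of the sort-inverse idea).

    X: (N, d) data points; assign: (N,) cluster IDs in [0, K).
    Returns per-cluster means and counts using one reduction per
    contiguous segment rather than per-token scatter writes.
    """
    N, d = X.shape
    order = np.argsort(assign, kind="stable")  # inverse map: groups clusters
    sorted_ids = assign[order]
    # segment boundaries: positions where the sorted cluster ID changes
    cuts = np.flatnonzero(np.diff(sorted_ids)) + 1
    starts = np.concatenate(([0], cuts))
    ends = np.concatenate((cuts, [N]))
    sums = np.zeros((K, d))
    counts = np.zeros(K, dtype=np.int64)
    for s, e in zip(starts, ends):
        k = sorted_ids[s]
        seg = X[order[s:e]]            # gather features via the permutation
        sums[k] = seg.sum(axis=0)      # one write per segment, no contention
        counts[k] = e - s
    centroids = np.zeros((K, d))
    nonempty = counts > 0
    centroids[nonempty] = sums[nonempty] / counts[nonempty, None]
    return centroids, counts
```

In the GPU version, each CTA reduces its chunk of the sorted sequence in shared memory and only touches global memory at segment boundaries, which is where the O(Nd) to O((K+⌈N/BN⌉)d) reduction in atomic operations comes from.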
Algorithm-System Co-design
To ensure deployability in real systems, flash-kmeans incorporates several system-level optimizations. For large-scale data that exceeds GPU memory, the authors implement a chunked stream overlap design. This partitions data into chunks and uses CUDA streams to coordinate asynchronous host-to-device transfers with computation, following a double-buffer streaming pattern. Additionally, a cache-aware compile heuristic is employed to select high-quality kernel configurations based on hardware characteristics and problem shape, minimizing the time-to-first-run overhead typically associated with exhaustive tuning.
Experiment
- Efficiency evaluations demonstrate that flash-kmeans consistently outperforms optimized baselines across diverse workloads, achieving up to 17.9× speedup in compute-intensive scenarios and 15.3× in highly batched settings while preventing out-of-memory failures in memory-intensive regimes.
- Kernel-level analysis confirms that custom FlashAssign and Sort-Inverse Update modules effectively eliminate distance matrix materialization and atomic contention bottlenecks, delivering up to 21.2× and 6.3× speedups respectively.
- Large-scale out-of-core experiments validate that the system successfully processes datasets up to one billion points by bounding peak memory usage, resulting in 6.3× to 10.5× faster iteration times compared to the most robust existing baseline.
- Algorithm-system co-design tests show that a cache-aware compile heuristic reduces configuration search time by up to 175× compared to exhaustive tuning while maintaining near-optimal runtime performance with negligible degradation.