CASS: Nvidia to AMD Transpilation with Data, Models, and Benchmark

Ahmed Heakl, Sarim Hashmi, Gustavo Bertolo Stahl, Seung Hun Eddie Han, Salman Khan, Abdulrahman Mahmoud
Release Date: 6/5/2025
Abstract

We introduce CASS, the first large-scale dataset and model suite for cross-architecture GPU code transpilation, targeting both source-level (CUDA ↔ HIP) and assembly-level (Nvidia SASS ↔ AMD RDNA3) translation. The dataset comprises 70k verified code pairs across host and device, addressing a critical gap in low-level GPU code portability. Leveraging this resource, we train the CASS family of domain-specific language models, achieving 95% source translation accuracy and 37.5% assembly translation accuracy, substantially outperforming commercial baselines such as GPT-4o, Claude, and Hipify. Our generated code matches native performance in over 85% of test cases, preserving runtime and memory behavior. To support rigorous evaluation, we introduce CASS-Bench, a curated benchmark spanning 16 GPU domains with ground-truth execution. All data, models, and evaluation tools are released as open source to foster progress in GPU compiler tooling, binary compatibility, and LLM-guided hardware translation. The dataset and benchmark are available on HuggingFace (https://huggingface.co/datasets/MBZUAI/cass), with code on GitHub (https://github.com/GustavoStahl/CASS).
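
To make the source-level task concrete, below is a minimal, hypothetical CUDA↔HIP pair; it is illustrative only and is not drawn from the CASS dataset. In such pairs the device kernel is typically syntactically identical on both sides, while host-side runtime calls (cudaMallocManaged, cudaDeviceSynchronize, the <<<...>>> launch) map to their HIP counterparts (hipMallocManaged, hipDeviceSynchronize, hipLaunchKernelGGL).

    // CUDA source (Nvidia side of a source-level pair) -- illustrative sketch
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void add(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];   // element-wise vector add
    }

    int main() {
        const int n = 1024;
        float *a, *b, *c;
        cudaMallocManaged((void**)&a, n * sizeof(float));
        cudaMallocManaged((void**)&b, n * sizeof(float));
        cudaMallocManaged((void**)&c, n * sizeof(float));
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }
        add<<<(n + 255) / 256, 256>>>(a, b, c, n);   // CUDA triple-chevron launch
        cudaDeviceSynchronize();
        printf("c[0] = %f\n", c[0]);
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }

    // HIP translation (AMD side) -- device code unchanged, host API renamed
    #include <hip/hip_runtime.h>
    #include <cstdio>

    __global__ void add(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1024;
        float *a, *b, *c;
        hipMallocManaged((void**)&a, n * sizeof(float));
        hipMallocManaged((void**)&b, n * sizeof(float));
        hipMallocManaged((void**)&c, n * sizeof(float));
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }
        hipLaunchKernelGGL(add, dim3((n + 255) / 256), dim3(256), 0, 0, a, b, c, n);
        hipDeviceSynchronize();
        printf("c[0] = %f\n", c[0]);
        hipFree(a); hipFree(b); hipFree(c);
        return 0;
    }

Assembly-level translation (SASS ↔ RDNA3) operates below this layer, where no such near one-to-one mapping exists, which is why the paper reports much lower accuracy for that setting.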