HyperAI

Deploying NVIDIA-Nemotron-Nano-9B-v2 with vLLM + Open WebUI

1. Tutorial Introduction

NVIDIA-Nemotron-Nano-9B-v2 is a lightweight large language model released by NVIDIA on August 19, 2025. A hybrid-architecture member of the Nemotron series, it combines Mamba's efficient long-sequence processing with the Transformer's strong semantic modeling, supporting a 128K ultra-long context with only 9 billion (9B) parameters. Its inference efficiency and task performance on edge-class hardware (such as RTX 4090-class GPUs) are comparable to cutting-edge models of the same parameter scale, marking a notable advance in lightweight deployment and long-text understanding for large language models. The associated paper is "NVIDIA Nemotron Nano 2: An Accurate and Efficient Hybrid Mamba-Transformer Reasoning Model".

This tutorial uses a single RTX A6000 GPU as the compute resource.
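For reference, the vLLM server backing this tutorial is typically launched with a command along the following lines. This is a sketch, not the exact command used by the container image: the model path, context length, and port are assumptions you should match to your own environment.

```shell
# Launch an OpenAI-compatible vLLM server for the model (flags are assumptions;
# adjust the served model path, context length, and port to your setup).
vllm serve nvidia/NVIDIA-Nemotron-Nano-9B-v2 \
    --max-model-len 131072 \
    --trust-remote-code \
    --port 8000
```

Open WebUI can then be pointed at `http://localhost:8000/v1` as its OpenAI-compatible backend.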

2. Project Examples

3. Operation Steps

1. After starting the container, click the API address to enter the Web interface

2. After entering the webpage, you can start a conversation with the model

If nothing appears under "Model", the model is still initializing. Because the model is large, wait about 2-3 minutes and then refresh the page.
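Besides chatting through the web interface, you can query the server directly over its OpenAI-compatible API. The sketch below assumes the server is reachable at `http://localhost:8000/v1` and that the served model name is `nvidia/NVIDIA-Nemotron-Nano-9B-v2`; both are assumptions to adjust for your deployment.

```python
# Minimal sketch of querying the vLLM OpenAI-compatible endpoint.
# The base URL and model name are assumptions; match them to your deployment.
import json
import urllib.request


def build_chat_request(prompt: str,
                       model: str = "nvidia/NVIDIA-Nemotron-Nano-9B-v2") -> dict:
    """Build an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }


def chat(prompt: str, base_url: str = "http://localhost:8000/v1") -> str:
    """POST the payload to the /chat/completions route and return the reply text."""
    payload = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


# Example usage (requires the server to be running):
#   print(chat("Summarize the Mamba architecture in one sentence."))
```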


4. Discussion

🖌️ If you come across a high-quality project, please leave us a message to recommend it! We have also set up a tutorial exchange group: scan the QR code and note [SD Tutorial] to join, discuss technical issues, and share results↓

Citation Information

The citation information for this project is as follows:

@misc{nvidia2025nvidianemotronnano2,
      title={NVIDIA Nemotron Nano 2: An Accurate and Efficient Hybrid Mamba-Transformer Reasoning Model},
      author={NVIDIA},
      year={2025},
      eprint={2508.14444},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2508.14444},
}