vLLM + Open WebUI Deployment of gemma-3-270m-it
1. Tutorial Introduction
gemma-3-270m-it is a lightweight instruction-tuned model in Google's Gemma 3 series, released in 2025. Built with 270M (270 million) parameters, it targets efficient conversational interaction and lightweight deployment. The model is small and efficient, running in a little over 1 GB of VRAM on a single card, which makes it suitable for edge devices and other low-resource scenarios. It supports multi-turn conversations and is fine-tuned specifically for everyday Q&A and simple task instructions. It focuses on text generation and understanding (it does not support multimodal input such as images) and offers a 32K-token context window, so it can handle long conversations. The associated paper is the "Gemma 3 Technical Report".
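For reference, serving the model yourself with vLLM's OpenAI-compatible server can be sketched as below. This is a minimal example, assuming vLLM is installed and the model weights are pulled from the Hugging Face Hub under the id `google/gemma-3-270m-it`; the port and context-length flags are illustrative defaults, not values mandated by this tutorial's container image.

```shell
# Install vLLM (assumes a CUDA-capable environment)
pip install vllm

# Launch the OpenAI-compatible server on port 8000.
# --max-model-len caps the context at the model's 32K-token window.
vllm serve google/gemma-3-270m-it \
    --port 8000 \
    --max-model-len 32768
```

Once the server is up, the web interface (or any OpenAI-compatible client) can talk to it at `http://localhost:8000/v1`.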
This tutorial uses a single RTX 4090 GPU.
2. Project Examples

3. Operation steps
1. After starting the container, click the API address to open the web interface

2. Once the page loads, you can start a conversation with the model
If nothing appears under "Model", the model is still initializing. Wait about 2-3 minutes while it loads, then refresh the page.
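Besides the web interface, the same API address can be queried programmatically. Below is a minimal sketch using only the Python standard library, assuming vLLM's OpenAI-compatible endpoint is reachable at `http://localhost:8000` (substitute your container's actual API address) and that the served model id is `google/gemma-3-270m-it`.

```python
import json
import urllib.request

# Assumed values: adjust API_URL to your container's API address.
API_URL = "http://localhost:8000/v1/chat/completions"
MODEL = "google/gemma-3-270m-it"

def build_payload(prompt: str, max_tokens: int = 256) -> dict:
    """Build a chat-completions request body for the OpenAI-compatible API."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def ask(prompt: str) -> str:
    """POST the prompt to the server and return the model's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (requires the server to be running):
# print(ask("Summarize the Gemma 3 model family in one sentence."))
```

The actual network call is left commented out; run `ask(...)` only once the model has finished initializing.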

4. Discussion
🖌️ If you come across a high-quality project, please leave us a message to recommend it! We have also set up a tutorial exchange group: scan the QR code and note [SD Tutorial] to join, discuss technical issues, and share results↓

Citation Information
The citation information for this project is as follows:
@article{gemma_2025,
  title={Gemma 3},
  url={https://arxiv.org/abs/2503.19786},
  publisher={Google DeepMind},
  author={Gemma Team},
  year={2025}
}