HyperAI

One-click Deployment of MedGemma-4b-it Multimodal Medical AI Model

1. Tutorial Introduction


MedGemma-4b-it is a multimodal medical AI model released by Google on May 21, 2025. It is the instruction-tuned variant of the MedGemma suite, designed for joint analysis of medical images and text. Its image encoder is a SigLIP model specially pre-trained on de-identified medical images, including chest X-rays, dermatology images, ophthalmology images, and histopathology slides. Its large language model component is trained on a variety of medical data spanning radiology images, histopathology patches, ophthalmology and dermatology images, and medical text.
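As a sketch of how such an image-plus-text query is typically structured for a Gemma-style multimodal model served through the Hugging Face `transformers` chat template (the field names below are an assumption based on that convention, not an API documented by this tutorial):

```python
# Sketch of a multimodal chat message pairing one medical image with a
# text question. The schema follows the Gemma-style chat format used by
# Hugging Face `transformers` processors; treat the field names as an
# assumption, not the official MedGemma API.

def build_medical_query(image_path: str, question: str) -> list[dict]:
    """Build a chat `messages` list with one image and one text part."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image_path},   # path or URL to the image
                {"type": "text", "text": question},       # the clinical question
            ],
        }
    ]

# Hypothetical example inputs:
messages = build_medical_query("chest_xray.png", "Describe any abnormal findings.")
```

Such a `messages` list would then be passed to the processor's `apply_chat_template` before generation.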

This tutorial uses a single RTX 4090 card.

2. Project Examples

3. Operation Steps

1. After starting the container, click the API address to enter the Web interface

If "Model" is not displayed, the model is still initializing. Because the model is large, this takes about 3-4 minutes; please wait and then refresh the page.
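The wait-and-refresh step can be automated with a small readiness poll. This is a generic sketch; the probe that checks the container's API address is left as a callable, since the actual health-check endpoint is an assumption about the deployment, not something the tutorial documents:

```python
import time

def wait_until_ready(probe, timeout_s: float = 300.0, interval_s: float = 5.0) -> bool:
    """Poll `probe()` until it returns True or `timeout_s` elapses.

    `probe` would typically issue an HTTP GET against the container's API
    address and return True once the model list is populated (hypothetical
    check; adapt it to the actual deployment).
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if probe():
            return True
        time.sleep(interval_s)
    return False

# Demonstration with a stub probe that succeeds on the third attempt:
attempts = {"n": 0}

def stub_probe() -> bool:
    attempts["n"] += 1
    return attempts["n"] >= 3

ready = wait_until_ready(stub_probe, timeout_s=10.0, interval_s=0.01)
```

In practice `probe` would wrap a request to the deployed web interface and return True only once the model name appears.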

2. After entering the webpage, you can start a conversation with the model


4. Discussion

🖌️ If you come across a high-quality project, please leave us a message to recommend it! We have also set up a tutorial exchange group; scan the QR code and include the note [SD Tutorial] to join, discuss technical issues, and share results ↓

Citation Information

The citation information for this project is as follows:

@misc{medgemma-hf,
    author = {Google},
    title = {MedGemma Hugging Face},
    howpublished = {\url{https://huggingface.co/collections/google/medgemma-release-680aade845f90bec6a3f60c4}},
    year = {2025},
    note = {Accessed: [Insert Date Accessed, e.g., 2025-05-20]}
}