
TransPixeler: Text to RGBA Video

1. Tutorial Introduction


TransPixeler is a text-to-video generation method published by the Chinese University of Hong Kong, the Hong Kong University of Science and Technology, and Adobe Research in 2025. The method retains the strengths of the original RGB model while achieving strong alignment between the RGB and alpha channels with limited training data, so it can generate diverse and consistent RGBA videos, opening up new possibilities for visual effects and interactive content creation. The related paper, "TransPixeler: Advancing Text-to-Video Generation with Transparency", has been accepted to CVPR 2025.
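
To see why the alpha channel matters for visual effects, the sketch below composites a hypothetical RGBA frame (as the model would produce) over an arbitrary background with standard alpha blending. The array shapes and file names are illustrative assumptions, not part of the TransPixeler API.

```python
import numpy as np
from PIL import Image

def composite_over(rgba_frame: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Alpha-blend one RGBA frame (H, W, 4, uint8) over an RGB background (H, W, 3, uint8)."""
    rgb = rgba_frame[..., :3].astype(np.float32)
    alpha = rgba_frame[..., 3:4].astype(np.float32) / 255.0
    out = rgb * alpha + background.astype(np.float32) * (1.0 - alpha)
    return out.astype(np.uint8)

# Illustrative usage with a synthetic frame and background (replace with real model output).
frame = np.zeros((480, 720, 4), dtype=np.uint8)
frame[100:200, 100:300] = [255, 0, 0, 128]                 # semi-transparent red patch
background = np.full((480, 720, 3), 255, dtype=np.uint8)   # plain white background
Image.fromarray(composite_over(frame, background)).save("composited_frame.png")
```

Because every frame carries its own alpha matte, the same generated clip can be layered over any background or other footage without chroma keying.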

This tutorial uses a single-card A6000 as the compute resource. Text prompts currently support English only.

2. Project Examples

3. Operation Steps

1. After starting the container, click the API address to enter the Web interface

If "Bad Gateway" is displayed, it means the model is initializing. Since the model is large, please wait about 1-2 minutes and refresh the page.

2. After entering the web page, you can start using the model to generate videos

Parameter Description:

  • Seed: the random seed controls the randomness of the generation process. The same seed value produces the same result (provided all other parameters are unchanged), which is important for reproducing results; see the sketch after this list.
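
To illustrate why a fixed Seed reproduces the same output, the sketch below draws the initial latent noise with a fixed seed, assuming a torch-based sampler. The tensor shape is a made-up placeholder, not the model's real latent size.

```python
import torch

def initial_noise(seed: int, shape=(1, 13, 16, 60, 90)) -> torch.Tensor:
    # With a fixed seed the starting noise is deterministic, so the whole
    # denoising trajectory (and therefore the generated video) repeats
    # as long as every other parameter stays the same.
    generator = torch.Generator().manual_seed(seed)
    return torch.randn(shape, generator=generator)

print(torch.equal(initial_noise(42), initial_noise(42)))    # True  -> reproducible
print(torch.equal(initial_noise(42), initial_noise(123)))   # False -> different result
```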


4. Discussion

🖌️ If you come across a high-quality project, please leave a message in the background to recommend it! We have also set up a tutorial exchange group; scan the QR code and add the note [SD Tutorial] to join the group, discuss technical issues, and share application results ↓

Citation Information

Thanks to GitHub user xxxjjjyyy1 for deploying this tutorial. The citation information for this project is as follows:

@misc{wang2025transpixeler,
      title={TransPixeler: Advancing Text-to-Video Generation with Transparency}, 
      author={Luozhou Wang and Yijun Li and Zhifei Chen and Jui-Hsien Wang and Zhifei Zhang and He Zhang and Zhe Lin and Ying-Cong Chen},
      year={2025},
      eprint={2501.03006},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2501.03006}, 
}