HyperAI

Online Tutorial | Qingdao Native Jiao Enjun's Soul Travels Into "Black Myth: Wukong"? MuseV + MuseTalk Create High-Quality Digital Humans


With traditional digital human training pipelines, generating a high-quality digital human often takes a great deal of time and computing resources, and it also places high demands on the training footage. Achieving good lip-sync consistency usually takes several hours or even longer.

The emergence of MuseV and MuseTalk has brought a new breakthrough to the digital human field. MuseV first generates the digital human video, then MuseTalk synchronizes the lip movements with the audio, so a complete digital human can be produced in just a few minutes.
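For readers who prefer code, here is a minimal conceptual sketch of that two-stage pipeline. Both helper functions are hypothetical placeholders, not the actual MuseV or MuseTalk APIs; in practice each stage is driven by the respective project's inference scripts, or by the Demos described below.

```python
# Conceptual two-stage pipeline. generate_video() and sync_lips() are
# hypothetical placeholders, NOT real MuseV/MuseTalk APIs -- substitute
# the inference entry points from each project's repository.

def generate_video(image_path: str, prompt: str) -> str:
    """Stage 1 (MuseV): animate a reference image into a short video clip."""
    # placeholder: invoke MuseV inference here and return the output path
    return "musev_output.mp4"

def sync_lips(video_path: str, audio_path: str) -> str:
    """Stage 2 (MuseTalk): re-render the mouth region to match the audio."""
    # placeholder: invoke MuseTalk inference here and return the output path
    return "musetalk_output.mp4"

if __name__ == "__main__":
    clip = generate_video("portrait.png", "(masterpiece, best quality:1), (1boy, solo:1)")
    final = sync_lips(clip, "speech.wav")
    print("final video:", final)
```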

"MuseV Unlimited Duration Virtual Human Video Generation Demo" and "MuseTalk High-Quality Lip Synchronization Model Demo" have been uploaded to the public tutorial module of OpenBayes.The environment has been built for everyone. You don’t need to enter any commands. You can start it immediately by cloning it with one click!

Tutorial address:

* MuseV:

https://go.hyper.ai/K8qz8

* MuseTalk:

https://go.hyper.ai/Ui0UA

To make everything easier to follow, the Bilibili uploader "Naonao Bunao nowsmon" has recorded a detailed tutorial video. Feel free to give it a like, coin, and favorite~

https://www.bilibili.com/video/BV1fCWVeWEic/?vd_source=5e54209e1f8c68b7f1dc3df8aabf856c

Many readers have probably already tried the recently popular "Black Myth: Wukong". Erlang Shen is a character players love to hate, and many complained that his face isn't handsome enough, so they swapped it for Jiao Enjun, "the nation's Yang Jian".

Demo Run

Generate virtual human videos using MuseV

1. Log in to hyper.ai, search for "MuseV Unlimited Duration Virtual Human Video Generation Demo" on the "Tutorial" page, and click "Run this tutorial online".

2. After the page jumps, click "Clone" in the upper right corner to clone the tutorial into your own container.

3. Click "Next: Select Hashrate" in the lower right corner.

4. After the page jumps, select the "NVIDIA RTX 4090" and "PyTorch" image, and click "Next: Review". New users can register with the invitation link below to get 4 hours of free RTX 4090 time plus 5 hours of free CPU time!

HyperAI exclusive invitation link (copy and open in a browser): https://openbayes.com/console/signup?r=6bJ0ljLFsFh_Vvej

5. After confirming, click "Continue" and wait for resources to be allocated. The first clone takes about 2 minutes. When the status changes to "Running", click the jump arrow next to "API Address" to open the Demo page. Please note that users must complete real-name verification before they can use the API Address feature.

If "Bad Gateway" is displayed when opening the API address, it means that the model has not been loaded yet. Wait 1-2 minutes and then open the API address again.

6. After the Demo opens, upload a picture and enter a prompt. The prompt follows the format quality words + character subject + action words, for example: (masterpiece, best quality, highres:1), (1boy, solo:1), (eye blinks:1.6), (hair wave:1.3). Then click "Generate" and wait a moment for the video to be generated.
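As a quick illustration of that format, the sketch below assembles such a prompt in Python. The weight after each colon (e.g. :1.6) strengthens a tag, in the style of Stable Diffusion prompting:

```python
# Minimal sketch of the "quality words + subject + action words" pattern
# from step 6. Tag weights (the number after the colon) scale a tag's
# influence, Stable Diffusion-style.
quality = "(masterpiece, best quality, highres:1)"
subject = "(1boy, solo:1)"
actions = "(eye blinks:1.6), (hair wave:1.3)"

prompt = ", ".join([quality, subject, actions])
print(prompt)
# (masterpiece, best quality, highres:1), (1boy, solo:1), (eye blinks:1.6), (hair wave:1.3)
```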

Sync lip movements and audio with MuseTalk

1. Return to the Tutorial page, open "MuseTalk High-Quality Lip-Sync Model Demo", and click "Run this tutorial online".

2. After the page jumps, click "Clone" in the upper right corner to clone the tutorial into your own container.

3. Click "Next: Select Hashrate" in the lower right corner.

4. After the page jumps, again select the "NVIDIA RTX 4090" and "PyTorch" image, and click "Next: Review".

5. After confirming, click "Continue" and wait for resources to be allocated. The first clone takes about 2 minutes. Once the status shows "Running", click the jump arrow next to "API Address" to open the Demo.

6. On the MuseTalk Demo page, upload the video we just generated, then upload an audio clip and click "Generate". After a short wait, you can see that the character's lip movements in the newly generated video match the audio.
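If the Demo page happens to be a Gradio app, it can also be driven from code via gradio_client. Everything below the import is an assumption: the endpoint name "/predict" and the argument order are hypothetical, so check the Demo page's "Use via API" panel (if present) for the real signature:

```python
# Hedged sketch: drive the lip-sync Demo programmatically with gradio_client.
# The URL, endpoint name, and argument order are ASSUMPTIONS -- verify them
# against the Demo's "Use via API" panel. Requires gradio_client >= 1.0.
from gradio_client import Client, handle_file

client = Client("https://<your-api-address>")  # the container's API Address

result = client.predict(
    handle_file("musev_output.mp4"),  # video from the MuseV step
    handle_file("speech.wav"),        # audio clip to lip-sync
    api_name="/predict",              # hypothetical endpoint name
)
print(result)  # typically a path to the lip-synced video
```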

We have set up a "Stable Diffusion Tutorial Exchange Group". Everyone is welcome to join the group to discuss technical issues and share their results~