Action-conditioned On-demand Motion Generation

We propose a novel framework, On-Demand MOtion Generation (ODMO), for generating realistic and diverse long-term 3D human motion sequences conditioned only on action types, with the additional capability of customization. ODMO improves over SOTA approaches on all traditional motion evaluation metrics across three public datasets (HumanAct12, UESTC, and MoCap). Furthermore, we provide both qualitative evaluations and quantitative metrics demonstrating several first-known customization capabilities afforded by our framework, including mode discovery, interpolation, and trajectory customization. These capabilities significantly widen the spectrum of potential applications of such motion generation models. The novel on-demand generative capabilities are enabled by innovations in both the encoder and decoder architectures: (i) Encoder: using contrastive learning in a low-dimensional latent space to create a hierarchical embedding of motion sequences, where not only do the codes of different action types form distinct groups, but, within an action type, codes of similar inherent patterns (motion styles) cluster together, making them readily discoverable; (ii) Decoder: using a hierarchical decoding strategy in which the motion trajectory is reconstructed first and then used to reconstruct the whole motion sequence. This architecture enables effective trajectory control. Our code is released on the GitHub page: https://github.com/roychowdhuryresearch/ODMO
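The two-stage decoder described above (trajectory first, then the full sequence conditioned on it) can be sketched minimally as follows. This is an illustrative placeholder, not the paper's implementation: the linear maps, dimensions, and the `decode` helper are assumptions standing in for the actual learned networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: latent code, per-frame root trajectory (x, y, z),
# per-frame pose vector, and sequence length.
LATENT_DIM, TRAJ_DIM, POSE_DIM, SEQ_LEN = 16, 3, 72, 60

# Stage 1: map the latent code to a root trajectory (one 3D point per frame).
W_traj = rng.standard_normal((LATENT_DIM, SEQ_LEN * TRAJ_DIM)) * 0.01

# Stage 2: map the latent code plus the trajectory to the full motion sequence.
W_pose = rng.standard_normal((LATENT_DIM + SEQ_LEN * TRAJ_DIM,
                              SEQ_LEN * POSE_DIM)) * 0.01

def decode(z):
    """Hierarchical decode: reconstruct the trajectory, then condition on it."""
    traj_flat = z @ W_traj
    pose_flat = np.concatenate([z, traj_flat]) @ W_pose
    return (traj_flat.reshape(SEQ_LEN, TRAJ_DIM),
            pose_flat.reshape(SEQ_LEN, POSE_DIM))

z = rng.standard_normal(LATENT_DIM)
trajectory, motion = decode(z)
```

Because the trajectory is an explicit intermediate output, trajectory customization amounts to substituting a user-specified trajectory for `traj_flat` before the second stage runs.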