
Motion Anything: Any to Motion Generation

Zeyu Zhang, Yiran Wang, Wei Mao, Danning Li, Rui Zhao, Biao Wu, Zirui Song, Bohan Zhuang, Ian Reid, Richard Hartley
Abstract

Conditional motion generation has been extensively studied in computer vision, yet two critical challenges remain. First, while masked autoregressive methods have recently outperformed diffusion-based approaches, existing masking models lack a mechanism to prioritize dynamic frames and body parts based on given conditions. Second, existing methods for different conditioning modalities often fail to integrate multiple modalities effectively, limiting control and coherence in generated motion. To address these challenges, we propose Motion Anything, a multimodal motion generation framework that introduces an Attention-based Mask Modeling approach, enabling fine-grained spatial and temporal control over key frames and actions. Our model adaptively encodes multimodal conditions, including text and music, improving controllability. Additionally, we introduce Text-Music-Dance (TMD), a new motion dataset consisting of 2,153 pairs of text, music, and dance, making it twice the size of AIST++, thereby filling a critical gap in the community. Extensive experiments demonstrate that Motion Anything surpasses state-of-the-art methods across multiple benchmarks, achieving a 15% improvement in FID on HumanML3D and showing consistent performance gains on AIST++ and TMD. See our project website: https://steve-zeyu-zhang.github.io/MotionAnything
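
The abstract describes an Attention-based Mask Modeling approach that prioritizes dynamic, condition-relevant frames when choosing what to mask. The sketch below is only an illustration of that general idea, not a reproduction of the paper's architecture: motion tokens are scored by how much attention a text or music embedding places on them, and the most salient tokens are replaced with a learned mask token. All names here (ConditionGuidedMasker, mask_ratio, the tensor shapes) are assumptions made for the example.

```python
import torch
import torch.nn as nn


class ConditionGuidedMasker(nn.Module):
    """Illustrative condition-guided masking: score motion tokens by the attention
    mass the condition (e.g. a text or music embedding) places on them, then replace
    the most salient tokens with a learned mask token for reconstruction training."""

    def __init__(self, dim: int, num_heads: int = 4, mask_ratio: float = 0.4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.mask_ratio = mask_ratio

    def forward(self, motion_tokens: torch.Tensor, cond_tokens: torch.Tensor):
        # motion_tokens: (B, T, D) temporal sequence of motion tokens
        # cond_tokens:   (B, S, D) encoded condition (text or music features)
        # The condition acts as the query, so attn_weights[b, s, t] measures how
        # strongly condition token s attends to motion token t.
        _, attn_weights = self.attn(
            cond_tokens, motion_tokens, motion_tokens,
            need_weights=True, average_attn_weights=True,
        )                                                   # (B, S, T)
        salience = attn_weights.sum(dim=1)                  # (B, T): per-frame relevance
        num_mask = max(1, int(self.mask_ratio * motion_tokens.size(1)))
        top_idx = salience.topk(num_mask, dim=1).indices    # most condition-relevant frames
        mask = torch.zeros_like(salience, dtype=torch.bool)
        mask.scatter_(1, top_idx, True)                     # (B, T) boolean mask
        masked = torch.where(
            mask.unsqueeze(-1), self.mask_token.expand_as(motion_tokens), motion_tokens
        )
        return masked, mask


if __name__ == "__main__":
    # Toy usage with random tensors, purely to show the interface.
    masker = ConditionGuidedMasker(dim=256)
    motion = torch.randn(2, 196, 256)   # 2 sequences of 196 motion frames
    text = torch.randn(2, 77, 256)      # 2 condition sequences of 77 tokens
    masked_motion, mask = masker(motion, text)
    print(masked_motion.shape, mask.float().mean().item())
```

The design choice illustrated here is the one the abstract emphasizes: rather than masking frames uniformly at random, the mask budget is spent on the frames the condition attends to most, so reconstruction focuses on the dynamic, condition-relevant parts of the motion.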
