EV-Action: Electromyography-Vision Multi-Modal Action Dataset

Multi-modal human action analysis is a critical and attractive research topic. However, the majority of existing datasets provide only visual modalities (i.e., RGB, depth, and skeleton). To fill this gap, we introduce a new, large-scale EV-Action dataset, which consists of RGB, depth, electromyography (EMG), and two skeleton modalities. Compared with conventional datasets, the EV-Action dataset offers two major improvements: (1) we deploy a motion-capture system to obtain a high-quality skeleton modality, which provides more comprehensive motion information (skeleton, trajectory, and acceleration) with higher accuracy, a higher sampling frequency, and more skeleton markers; (2) we introduce an EMG modality, which is commonly used as an effective indicator in biomechanics but has yet to be well explored in motion-related research. To the best of our knowledge, this is the first action dataset with an EMG modality. We describe the EV-Action dataset in detail and propose a simple yet effective framework for EMG-based action recognition. Moreover, state-of-the-art baselines are applied to evaluate the effectiveness of each modality. The results clearly demonstrate the validity of the EMG modality in human action analysis tasks. We hope this dataset can contribute significantly to human motion analysis, computer vision, machine learning, biomechanics, and other interdisciplinary fields.