HyperAI

UTD-MHAD Human Action Recognition Dataset


UTD stands for the University of Texas at Dallas, and MHAD for Multimodal Human Action Dataset. The dataset contains recordings of 27 actions performed by 8 subjects. Each subject performed each action 4 times, yielding 861 action sequences in total (3 sequences were removed due to corruption). The dataset provides four time-synchronized data modalities: RGB videos, depth videos, and skeleton joint positions captured by a Kinect camera, plus inertial signals from a wearable inertial sensor.

This dataset can be used to study fusion methods that combine depth camera data with inertial sensor data, as was done when the dataset was created. It is also suited to multimodal research in human action recognition more broadly.
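As a minimal sketch of working with the sequence inventory described above: the code below assumes a per-sequence file naming convention of the form `a{action}_s{subject}_t{trial}_{modality}.mat` (this naming scheme is an assumption for illustration, not confirmed by this page) and also sanity-checks the sequence count stated in the description.

```python
import re

# Assumed naming convention (hypothetical, for illustration):
# "a1_s1_t1_inertial.mat" -> action 1, subject 1, trial 1, inertial modality.
PATTERN = re.compile(r"a(\d+)_s(\d+)_t(\d+)_(\w+)\.mat")

def parse_sequence_name(filename):
    """Parse a sequence filename into (action, subject, trial, modality)."""
    m = PATTERN.fullmatch(filename)
    if m is None:
        raise ValueError(f"unexpected filename: {filename}")
    action, subject, trial = (int(g) for g in m.groups()[:3])
    return action, subject, trial, m.group(4)

# Sanity check on the count given in the description:
# 27 actions x 8 subjects x 4 trials, minus 3 corrupted sequences.
total_sequences = 27 * 8 * 4 - 3
print(total_sequences)  # 861
print(parse_sequence_name("a1_s1_t1_inertial.mat"))
```

Parsing the metadata out of filenames like this is a common first step before splitting the sequences by subject for cross-subject evaluation.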

UTD-MHAD.torrent
Seeding: 1 · Downloading: 1 · Completed: 520 · Total Downloads: 820
  • UTD-MHAD/
    • README.md
      1.41 KB
    • README.txt
      2.83 KB
    • data/
      • Depth.zip
        120.6 MB
      • Inertial.zip
        125.7 MB
      • RGB.zip
        1.15 GB
      • Sample_Code.zip
        1.15 GB
      • Skeleton.zip
        1.17 GB