HyperAI

DexGraspVLA Robot Grasping Dataset

Date: 2 months ago

Size: 7.29 GB

Organization

Publish URL: github.com

This dataset was created by the Psi-Robot team in 2025. It addresses the need for high-success-rate dexterous grasping in cluttered scenes, in particular achieving success rates above 90% on unseen combinations of objects, lighting, and backgrounds. The associated paper is "DexGraspVLA: A Vision-Language-Action Framework Towards General Dexterous Grasping". The framework uses a pre-trained vision-language model as a high-level task planner and learns a diffusion-based policy as a low-level action controller. Its innovation lies in leveraging foundation models for strong generalization and diffusion-based imitation learning for dexterous actions.

This is a small dataset containing 51 samples of human demonstration data. It is useful for understanding the data format and for running the code to experience the training process.

DexGraspVLA.torrent
Seeding 1 · Downloading 1 · Completed 23 · Total Downloads 32
  • DexGraspVLA/
    • README.md
      1.48 KB
    • README.txt
      2.95 KB
    • data/
      • DexGraspVLA-main.zip
        3.65 GB
        • DexGraspVLA-main/
          • DexGraspVLA-github.zip
            3.65 GB
          • grasp_demo_example.tar.gz
            7.29 GB
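
After downloading, the demonstration data arrives as `grasp_demo_example.tar.gz`. One practical first step is to list an archive's members before committing to a full 7.29 GB extract. The sketch below shows this with Python's standard `tarfile` module; since the real archive is not available here, it builds a tiny stand-in archive first, and the member name `sample_000.npz` is a hypothetical placeholder, not the dataset's actual file layout.

```python
import io
import os
import tarfile
import tempfile

# Build a tiny stand-in archive so the example is self-contained.
# For the real dataset you would point archive_path at grasp_demo_example.tar.gz.
tmpdir = tempfile.mkdtemp()
archive_path = os.path.join(tmpdir, "demo.tar.gz")
with tarfile.open(archive_path, "w:gz") as tar:
    payload = b"placeholder"
    info = tarfile.TarInfo(name="grasp_demo_example/sample_000.npz")  # hypothetical name
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))

# List members without extracting everything to disk.
with tarfile.open(archive_path, "r:gz") as tar:
    names = tar.getnames()
print(names)  # → ['grasp_demo_example/sample_000.npz']
```

Listing members first also lets you extract selectively with `tar.extract(member, path=...)` instead of unpacking the whole archive.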