
Hamba: Single-view 3D Hand Reconstruction with Graph-guided Bi-Scanning Mamba

Haoye Dong, Aviral Chharia, Wenbo Gou, Francisco Vicente Carrasco, Fernando De la Torre
Abstract

3D hand reconstruction from a single RGB image is challenging due to articulated motion, self-occlusion, and interaction with objects. Existing SOTA methods employ attention-based transformers to learn the 3D hand pose and shape, yet they do not fully achieve robust and accurate performance, primarily due to inefficiently modeling spatial relations between joints. To address this problem, we propose a novel graph-guided Mamba framework, named Hamba, which bridges graph learning and state space modeling. Our core idea is to reformulate Mamba's scanning into graph-guided bidirectional scanning for 3D reconstruction using a few effective tokens. This enables us to efficiently learn the spatial relationships between joints and improve reconstruction performance. Specifically, we design a Graph-guided State Space (GSS) block that learns the graph-structured relations and spatial sequences of joints while using 88.5% fewer tokens than attention-based methods. Additionally, we integrate the state space features and the global features using a fusion module. By utilizing the GSS block and the fusion module, Hamba effectively leverages the graph-guided state space features and jointly considers global and local features to improve performance. Experiments on several benchmarks and in-the-wild tests demonstrate that Hamba significantly outperforms existing SOTAs, achieving a PA-MPVPE of 5.3mm and F@15mm of 0.992 on FreiHAND. At the time of this paper's acceptance, Hamba holds the top position, Rank 1, on two competition leaderboards for 3D hand reconstruction. Project Website: https://humansensinglab.github.io/Hamba/
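The core idea of graph-guided bidirectional scanning can be illustrated with a minimal sketch: joint tokens are ordered by a traversal of the hand's kinematic graph, a state-space recurrence is run over that sequence in both directions, and the two passes are fused. The MANO-style 21-joint parent list, the DFS scan order, the toy scalar-parameter recurrence, and the additive fusion below are all illustrative assumptions, not Hamba's actual implementation.

```python
import numpy as np

# MANO-style 21-joint hand: parent index per joint (assumed topology for
# illustration; Hamba's actual graph and scan order are defined in the paper).
PARENTS = [-1, 0, 1, 2, 3, 0, 5, 6, 7, 0, 9, 10, 11, 0, 13, 14, 15, 0, 17, 18, 19]

def graph_scan_order(parents):
    """Depth-first traversal of the joint graph -> token scan order."""
    children = {i: [] for i in range(len(parents))}
    for j, p in enumerate(parents):
        if p >= 0:
            children[p].append(j)
    order, stack = [], [0]
    while stack:
        j = stack.pop()
        order.append(j)
        stack.extend(reversed(children[j]))
    return order

def ssm_scan(tokens, a=0.9, b=0.1):
    """Toy linear state-space recurrence h_t = a*h_{t-1} + b*x_t
    (real Mamba uses learned, input-dependent parameters)."""
    h = np.zeros_like(tokens[0])
    out = []
    for x in tokens:
        h = a * h + b * x
        out.append(h.copy())
    return np.stack(out)

def bi_scan(joint_feats, parents):
    """Graph-guided bidirectional scan over (num_joints, dim) features."""
    order = graph_scan_order(parents)
    seq = joint_feats[order]            # reorder tokens along the graph
    fwd = ssm_scan(seq)                 # forward pass
    bwd = ssm_scan(seq[::-1])[::-1]     # backward pass, realigned
    fused = fwd + bwd                   # simple additive fusion (assumed)
    out = np.empty_like(fused)
    out[order] = fused                  # scatter back to joint indices
    return out

feats = np.random.default_rng(0).normal(size=(21, 8))
out = bi_scan(feats, PARENTS)
print(out.shape)  # (21, 8)
```

Ordering tokens by the kinematic tree means each joint's state is updated right after its graph neighbors, so the recurrence propagates information along physically connected joints rather than an arbitrary raster order.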
