Two-hand Global 3D Pose Estimation Using Monocular RGB

We tackle the challenging task of estimating global 3D joint locations for both hands from monocular RGB input images alone. We propose a novel multi-stage convolutional neural network pipeline that accurately segments and locates the hands despite inter-hand occlusion and complex background noise, and estimates the 2D and 3D canonical joint locations without any depth information. Global joint locations with respect to the camera origin are then computed from the estimated hand pose and the actual length of a key bone using a novel projection algorithm. To train the CNNs for this new task, we introduce a large-scale synthetic 3D hand pose dataset. We demonstrate that our system outperforms previous work on 3D canonical hand pose estimation benchmark datasets using RGB-only information. In addition, we present the first work to achieve accurate global 3D hand tracking of both hands from RGB-only inputs, and we provide extensive quantitative and qualitative evaluation.
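To make the projection step concrete, the following is a minimal sketch of one standard way such a global-translation recovery can be formulated; it is an illustration under assumptions, not the paper's actual algorithm. We assume a root-relative canonical pose `canonical_3d`, detected 2D joints `joints_2d`, camera intrinsics `K`, and the known real length of a key bone: the canonical pose is rescaled so the key bone matches its real length, and the root translation is then solved in least squares so the rescaled pose projects onto the detected 2D joints. All names here are hypothetical.

```python
import numpy as np

def recover_global_joints(canonical_3d, joints_2d, K, bone, real_length):
    """Illustrative sketch (not the paper's method): recover a global
    translation t so that the rescaled canonical pose, projected with
    intrinsics K, matches the detected 2D joints in least squares."""
    i, j = bone
    # 1) Rescale the canonical pose so the key bone has its real length.
    est_length = np.linalg.norm(canonical_3d[i] - canonical_3d[j])
    X = canonical_3d * (real_length / est_length)        # (N, 3)

    # 2) Back-project 2D joints to normalized rays: K^{-1} [u, v, 1]^T.
    uv1 = np.concatenate([joints_2d, np.ones((len(joints_2d), 1))], axis=1)
    rays = (np.linalg.inv(K) @ uv1.T).T                  # (N, 3), z = 1

    # 3) Solve for t = (tx, ty, tz). Each joint gives two linear equations:
    #    X_x + tx = x * (X_z + tz)  and  X_y + ty = y * (X_z + tz),
    #    where (x, y) is the normalized image coordinate of that joint.
    A, b = [], []
    for (x, y, _), (Xx, Xy, Xz) in zip(rays, X):
        A.append([1.0, 0.0, -x]); b.append(x * Xz - Xx)
        A.append([0.0, 1.0, -y]); b.append(y * Xz - Xy)
    t, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)

    return X + t  # global 3D joints in camera coordinates
```

Given per-joint 2D detections and a canonical 3D estimate from the network, a call such as `recover_global_joints(pose3d, pose2d, K, bone=(0, 9), real_length=0.095)` would return camera-space joints; the key insight is that a single known bone length fixes the otherwise ambiguous monocular scale.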