End-to-end Recovery of Human Shape and Pose

We describe Human Mesh Recovery (HMR), an end-to-end framework for reconstructing a full 3D mesh of a human body from a single RGB image. In contrast to most current methods that compute 2D or 3D joint locations, we produce a richer and more useful mesh representation that is parameterized by shape and 3D joint angles. The main objective is to minimize the reprojection loss of keypoints, which allows our model to be trained using in-the-wild images that only have ground-truth 2D annotations. However, the reprojection loss alone leaves the model highly under-constrained. In this work we address this problem by introducing an adversary trained to tell whether human body parameters are real or not, using a large database of 3D human meshes. We show that HMR can be trained with and without using any paired 2D-to-3D supervision. We do not rely on intermediate 2D keypoint detections and infer 3D pose and shape parameters directly from image pixels. Our model runs in real time given a bounding box containing the person. We demonstrate our approach on various in-the-wild images, outperform previous optimization-based methods that output 3D meshes, and show competitive results on tasks such as 3D joint location estimation and part segmentation.
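
To make the main objective concrete, below is a minimal sketch of a keypoint reprojection loss under a weak-perspective camera (3D joints are orthographically projected, then scaled and translated into the image, and compared to annotated 2D keypoints with an L1 penalty over visible joints). The function names, array shapes, and the choice of NumPy are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def reproject(joints_3d, scale, trans):
    """Weak-perspective projection of 3D joints onto the image plane.

    joints_3d: (K, 3) predicted 3D joint locations
    scale:     scalar camera scale s
    trans:     (2,) camera translation t
    Returns (K, 2) projected 2D keypoints: s * Pi(X) + t, where Pi drops z.
    """
    return scale * joints_3d[:, :2] + trans

def keypoint_reprojection_loss(joints_3d, keypoints_2d, visibility, scale, trans):
    """L1 reprojection loss over visible ground-truth 2D keypoints.

    keypoints_2d: (K, 2) annotated 2D keypoints
    visibility:   (K,) 1 if the keypoint is annotated/visible, else 0
    """
    projected = reproject(joints_3d, scale, trans)
    per_joint_l1 = np.abs(projected - keypoints_2d).sum(axis=1)
    return float((visibility * per_joint_l1).sum())
```

Because the loss only touches 2D annotations, it can be computed on in-the-wild images with no 3D ground truth, which is exactly what makes it under-constrained on its own.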
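The adversary constrains that ambiguity: a discriminator is trained on body parameters drawn from a large mocap-derived mesh database to score whether a predicted parameter vector looks like a real human body. The sketch below is a simplified stand-in, assuming a single MLP over a flattened pose-plus-shape vector and a least-squares GAN objective; the network shape, dimensions, and helper names are illustrative assumptions rather than the paper's factorized discriminator design.

```python
import torch
import torch.nn as nn

class BodyParamDiscriminator(nn.Module):
    """Toy discriminator over flattened body parameters (e.g. 72 pose + 10 shape)."""
    def __init__(self, param_dim=82):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(param_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, params):
        # Higher scores mean "looks like a real human body configuration".
        return self.net(params)

def adversarial_losses(disc, real_params, fake_params):
    """Least-squares adversarial objectives (hedged sketch).

    The discriminator pushes real mocap parameters toward 1 and predicted
    parameters toward 0; the image encoder is trained so that its
    predictions score close to 1, i.e. lie on the manifold of real bodies.
    """
    d_real = disc(real_params)
    d_fake = disc(fake_params.detach())
    disc_loss = ((d_real - 1) ** 2).mean() + (d_fake ** 2).mean()
    enc_loss = ((disc(fake_params) - 1) ** 2).mean()
    return disc_loss, enc_loss
```

In this view, the encoder's total training signal combines the reprojection loss with the adversarial term (and, when available, paired 3D supervision), which is why the model can be trained either with or without 2D-to-3D pairs.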