Multi-Angle Point Cloud-VAE: Unsupervised Feature Learning for 3D Point Clouds from Multiple Angles by Joint Self-Reconstruction and Half-to-Half Prediction

Unsupervised feature learning for point clouds has been vital for large-scale point cloud understanding. Recent deep-learning-based methods depend on learning global geometry from self-reconstruction. However, these methods still suffer from ineffective learning of local geometry, which significantly limits the discriminability of the learned features. To resolve this issue, we propose MAP-VAE to enable the learning of both global and local geometry by jointly leveraging global and local self-supervision. To enable effective local self-supervision, we introduce multi-angle analysis for point clouds. In a multi-angle scenario, we first split a point cloud into a front half and a back half from each angle, and then train MAP-VAE to predict the back-half sequence from the corresponding front-half sequence. MAP-VAE performs this half-to-half prediction with an RNN to simultaneously learn each local geometry and the spatial relationships among them. In addition, MAP-VAE also learns global geometry via self-reconstruction, where we employ a variational constraint to facilitate novel shape generation. Results in four shape analysis tasks show that MAP-VAE learns more discriminative global and local features than state-of-the-art methods.
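To make the multi-angle splitting step concrete, the sketch below shows one plausible way to form the front-half/back-half training pairs described above. The abstract does not specify the exact splitting rule, so the azimuth parameterization of the viewing angle and the median-depth split used here are illustrative assumptions, not the paper's definitive procedure.

```python
import numpy as np


def split_front_back(points, azimuth_deg):
    """Split a point cloud into a front half and a back half for one viewing angle.

    points: (N, 3) array of xyz coordinates.
    azimuth_deg: viewing angle in the xy-plane, in degrees (assumed parameterization).
    Returns (front_half, back_half), each containing N // 2 points.
    """
    theta = np.deg2rad(azimuth_deg)
    view_dir = np.array([np.cos(theta), np.sin(theta), 0.0])  # unit viewing direction
    depth = points @ view_dir                                  # signed depth along the view
    order = np.argsort(-depth)                                 # nearest points first
    half = points.shape[0] // 2
    front_half = points[order[:half]]   # visible half, given to the model
    back_half = points[order[half:]]    # occluded half, to be predicted
    return front_half, back_half


if __name__ == "__main__":
    # Build half-to-half pairs over several angles for one (placeholder) point cloud.
    cloud = np.random.rand(2048, 3)
    angles = np.linspace(0, 360, 8, endpoint=False)
    pairs = [split_front_back(cloud, a) for a in angles]
```

In this reading, the sequence of front halves over consecutive angles would feed the RNN, which predicts the corresponding back-half sequence; the number of angles (8 here) is likewise an assumed hyperparameter.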