Self-supervised Learning of Point Clouds via Orientation Estimation

Point clouds provide a compact and efficient representation of 3D shapes. While deep neural networks have achieved impressive results on point cloud learning tasks, they require massive amounts of manually labeled data, which can be costly and time-consuming to collect. In this paper, we leverage 3D self-supervision for learning downstream tasks on point clouds with fewer labels. A point cloud can be rotated in infinitely many ways, which provides a rich label-free source for self-supervision. We consider the auxiliary task of predicting rotations, which in turn leads to useful features for other tasks such as shape classification and 3D keypoint prediction. Using experiments on ShapeNet and ModelNet, we demonstrate that our approach outperforms the state-of-the-art. Moreover, features learned by our model are complementary to other self-supervised methods, and combining them leads to further performance improvement.
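As a rough illustration of the rotation-prediction pretext task described above, the sketch below generates rotated copies of a point cloud together with discrete rotation-class labels that a network could be trained to predict. The number of rotation bins, the restriction to rotations about the z-axis, and the function names are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def rotation_matrix_z(angle_rad):
    """Rotation about the z (up) axis by the given angle in radians."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def make_rotation_pretext_batch(point_cloud, num_classes=18):
    """Build (rotated_cloud, rotation_label) pairs for the pretext task.

    The cloud (N x 3 array) is rotated by each of `num_classes` evenly spaced
    angles; the self-supervised objective is to classify which rotation was
    applied. Assumed setup: z-axis rotations and 18 bins are placeholders.
    """
    angles = np.linspace(0.0, 2.0 * np.pi, num_classes, endpoint=False)
    rotated = [point_cloud @ rotation_matrix_z(a).T for a in angles]
    labels = np.arange(num_classes)
    return np.stack(rotated), labels

# Example: a random cloud of 1024 points in [-0.5, 0.5]^3.
cloud = np.random.rand(1024, 3) - 0.5
batch, labels = make_rotation_pretext_batch(cloud)
print(batch.shape, labels.shape)  # (18, 1024, 3), (18,)
```

The resulting labeled batches require no manual annotation; a point cloud encoder trained on this classification objective can then be reused or fine-tuned for downstream tasks such as shape classification.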