Autoencoders as Cross-Modal Teachers: Can Pretrained 2D Image Transformers Help 3D Representation Learning?

The success of deep learning heavily relies on large-scale data with comprehensive labels, which is more expensive and time-consuming to acquire in 3D than for 2D images or natural language. This promotes the potential of utilizing models pretrained on data from modalities other than 3D as teachers for cross-modal knowledge transfer. In this paper, we revisit masked modeling in a unified fashion of knowledge distillation, and we show that foundational Transformers pretrained on 2D images or natural language can help self-supervised 3D representation learning through training Autoencoders as Cross-Modal Teachers (ACT). The pretrained Transformers are transferred as cross-modal 3D teachers using discrete variational autoencoding self-supervision, during which the Transformers are frozen with prompt tuning for better knowledge inheritance. The latent features encoded by the 3D teachers are used as the targets of masked point modeling, wherein the dark knowledge is distilled to the 3D Transformer students as foundational geometry understanding. Our ACT-pretrained 3D learner achieves state-of-the-art generalization capacity across various downstream benchmarks, e.g., 88.21% overall accuracy on ScanObjectNN. Code has been released at https://github.com/RunpeiDong/ACT.
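
As a rough illustration of the two-stage recipe described in the abstract, the sketch below uses small PyTorch stand-ins: a tiny Transformer plays the role of the pretrained 2D backbone, and the module names, sizes, masking ratio, and smooth-L1 distillation loss are illustrative assumptions rather than the authors' implementation. It only shows the overall flow: a frozen backbone with learnable prompts is turned into a 3D teacher via a discrete autoencoding bottleneck, and a 3D student is pretrained by regressing the teacher's latent features at masked patch positions.

```python
# Minimal, illustrative sketch of the ACT idea (not the released code).
# Assumptions: point clouds are already grouped into N local patches and
# embedded as d-dimensional tokens; make_transformer() stands in for a real
# pretrained 2D image Transformer whose weights would normally be loaded.
import torch
import torch.nn as nn
import torch.nn.functional as F


def make_transformer(d_model=256, depth=4, heads=8):
    layer = nn.TransformerEncoderLayer(
        d_model, heads, dim_feedforward=4 * d_model, batch_first=True)
    return nn.TransformerEncoder(layer, depth)


class CrossModalTeacher(nn.Module):
    """Stage 1: a frozen 2D Transformer tuned into a 3D teacher through a
    discrete-VAE-style autoencoding task; only prompts and heads train."""

    def __init__(self, d_model=256, num_prompts=8, codebook_size=512):
        super().__init__()
        self.backbone = make_transformer(d_model)   # placeholder for a pretrained ViT
        for p in self.backbone.parameters():
            p.requires_grad = False                 # backbone stays frozen
        self.prompts = nn.Parameter(torch.randn(1, num_prompts, d_model) * 0.02)
        self.codebook = nn.Embedding(codebook_size, d_model)
        self.decoder = nn.Linear(d_model, 3 * 32)   # reconstruct 32 xyz points per patch

    def encode(self, patch_tokens):                 # patch_tokens: (B, N, d)
        B = patch_tokens.size(0)
        x = torch.cat([self.prompts.expand(B, -1, -1), patch_tokens], dim=1)
        z = self.backbone(x)[:, self.prompts.size(1):]  # drop prompt positions
        return z                                    # latent distillation targets

    def forward(self, patch_tokens):
        z = self.encode(patch_tokens)
        # straight-through discrete bottleneck (simplified stand-in for the dVAE)
        logits = z @ self.codebook.weight.t()
        z_q = self.codebook(logits.argmax(-1)) + (z - z.detach())
        return self.decoder(z_q), z


class MaskedPointStudent(nn.Module):
    """Stage 2: a 3D Transformer pretrained to predict the teacher's latent
    features at masked patch positions (masked point modeling)."""

    def __init__(self, d_model=256):
        super().__init__()
        self.backbone = make_transformer(d_model)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, d_model))
        self.head = nn.Linear(d_model, d_model)

    def forward(self, patch_tokens, mask):          # mask: (B, N) bool, True = masked
        x = torch.where(mask.unsqueeze(-1),
                        self.mask_token.expand_as(patch_tokens), patch_tokens)
        return self.head(self.backbone(x))


# Toy usage: distill teacher latents into the student at masked positions.
B, N, d = 2, 64, 256
patch_tokens = torch.randn(B, N, d)
mask = torch.rand(B, N) < 0.6                       # ~60% masking, an assumed ratio

teacher, student = CrossModalTeacher(d), MaskedPointStudent(d)
with torch.no_grad():
    targets = teacher.encode(patch_tokens)          # "dark knowledge" targets
pred = student(patch_tokens, mask)
loss = F.smooth_l1_loss(pred[mask], targets[mask])  # distillation on masked tokens
loss.backward()
```

In the actual method the teacher is first optimized on the autoencoding objective (the `forward` reconstruction above), and only then used as a frozen target encoder for student pretraining; the toy usage collapses this into a single pass purely to show the data flow.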