ULIP: Learning a Unified Representation of Language, Images, and Point Clouds for 3D Understanding

The recognition capabilities of current state-of-the-art 3D models are limited by datasets with a small amount of annotated data and a pre-defined set of categories. In its 2D counterpart, recent advances have shown that similar problems can be significantly alleviated by employing knowledge from other modalities, such as language. Inspired by this, leveraging multimodal information for the 3D modality could be promising to improve 3D understanding under the restricted data regime, but this line of research is not well studied. Therefore, we introduce ULIP to learn a unified representation of images, texts, and 3D point clouds by pre-training with object triplets from the three modalities. To overcome the shortage of training triplets, ULIP leverages a pre-trained vision-language model that has already learned a common visual and textual space by training with massive image-text pairs. Then, ULIP learns a 3D representation space aligned with the common image-text space, using a small number of automatically synthesized triplets. ULIP is agnostic to 3D backbone networks and can easily be integrated into any 3D architecture. Experiments show that ULIP effectively improves the performance of multiple recent 3D backbones by simply pre-training them on ShapeNet55 using our framework, achieving state-of-the-art performance in both standard 3D classification and zero-shot 3D classification on ModelNet40 and ScanObjectNN. ULIP also improves the performance of PointMLP by around 3% in 3D classification on ScanObjectNN, and outperforms PointCLIP by 28.8% on top-1 accuracy for zero-shot 3D classification on ModelNet40. Our code and pre-trained models are released at https://github.com/salesforce/ULIP.
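
Below is a minimal sketch of the tri-modal alignment idea described in the abstract; it is not the authors' implementation. It assumes a frozen CLIP-style vision-language model supplies image and text embeddings, and a trainable 3D encoder (here a hypothetical placeholder MLP, since ULIP is backbone-agnostic) is pulled toward that shared space with symmetric InfoNCE-style contrastive losses over object triplets.

```python
# Sketch only: placeholder 3D encoder and synthetic embeddings stand in for
# a real backbone and a frozen vision-language model.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PointEncoder(nn.Module):
    """Placeholder 3D backbone; ULIP is agnostic to the actual architecture."""
    def __init__(self, embed_dim: int = 512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 256), nn.ReLU(),
            nn.Linear(256, embed_dim),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (B, N, 3) -> per-point features -> global max-pool -> (B, D)
        return self.mlp(points).max(dim=1).values


def contrastive_loss(a: torch.Tensor, b: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE between two batches of L2-normalized embeddings."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature                 # (B, B) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


def alignment_loss(point_feat, image_feat, text_feat):
    """Align 3D features with the frozen image and text embedding spaces."""
    return (contrastive_loss(point_feat, image_feat) +
            contrastive_loss(point_feat, text_feat))


if __name__ == "__main__":
    B, N, D = 8, 1024, 512
    encoder = PointEncoder(embed_dim=D)
    points = torch.randn(B, N, 3)        # synthetic point clouds for one triplet batch
    image_feat = torch.randn(B, D)       # stand-in for frozen image embeddings
    text_feat = torch.randn(B, D)        # stand-in for frozen text embeddings
    loss = alignment_loss(encoder(points), image_feat, text_feat)
    loss.backward()                      # only the 3D encoder receives gradients
    print(f"alignment loss: {loss.item():.4f}")
```

The key design point illustrated here is that the image-text space is kept fixed, so the 3D encoder inherits the semantics already learned from massive image-text pairs, which is what enables zero-shot 3D classification with only a small number of synthesized triplets.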