
Masked Autoencoders for Point Cloud Self-supervised Learning

Pang, Yatian; Wang, Wenxiao; Tay, Francis E. H.; Liu, Wei; Tian, Yonghong; Yuan, Li
Abstract

As a promising scheme of self-supervised learning, masked autoencoding has significantly advanced natural language processing and computer vision. Inspired by this, we propose a neat scheme of masked autoencoders for point cloud self-supervised learning, addressing the challenges posed by point cloud properties, including leakage of location information and uneven information density. Concretely, we divide the input point cloud into irregular point patches and randomly mask them at a high ratio. Then, a standard Transformer-based autoencoder, with an asymmetric design and a shifting mask tokens operation, learns high-level latent features from unmasked point patches, aiming to reconstruct the masked point patches. Extensive experiments show that our approach is efficient during pre-training and generalizes well on various downstream tasks. Specifically, our pre-trained models achieve 85.18% accuracy on ScanObjectNN and 94.04% accuracy on ModelNet40, outperforming all other self-supervised learning methods. We show that, with our scheme, a simple architecture built entirely on standard Transformers can surpass dedicated Transformer models trained with supervision. Our approach also advances state-of-the-art accuracies by 1.5%-2.3% in few-shot object classification. Furthermore, our work demonstrates the feasibility of applying unified architectures from languages and images to point clouds.
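To make the patch-and-mask scheme described above concrete, the following is a minimal sketch (not the authors' released code) of dividing a point cloud into irregular point patches and randomly masking them at a high ratio. The random-center grouping, patch size, and 60% mask ratio shown here are illustrative assumptions; the paper's pipeline uses its own sampling and masking settings.

```python
# Illustrative sketch of point-patch grouping and high-ratio random masking.
# All hyperparameters (num_patches, patch_size, mask_ratio) are assumptions.
import torch

def group_patches(points, num_patches=64, patch_size=32):
    """Divide a point cloud of shape (N, 3) into irregular patches via centers + kNN."""
    n = points.shape[0]
    center_idx = torch.randperm(n)[:num_patches]              # stand-in for farthest point sampling
    centers = points[center_idx]                               # (G, 3) patch centers
    dists = torch.cdist(centers, points)                       # (G, N) pairwise distances
    knn_idx = dists.topk(patch_size, largest=False).indices    # (G, K) nearest-neighbour indices
    patches = points[knn_idx]                                  # (G, K, 3) grouped points
    return patches - centers.unsqueeze(1), centers             # normalize each patch to its center

def random_mask(patches, mask_ratio=0.6):
    """Randomly mask a high ratio of patches; only visible patches feed the encoder."""
    g = patches.shape[0]
    num_mask = int(g * mask_ratio)
    perm = torch.randperm(g)
    mask = torch.zeros(g, dtype=torch.bool)
    mask[perm[:num_mask]] = True
    return patches[~mask], patches[mask], mask                 # visible, reconstruction targets, mask map

if __name__ == "__main__":
    cloud = torch.rand(1024, 3)                                # toy input point cloud
    patches, centers = group_patches(cloud)
    visible, masked_targets, mask = random_mask(patches)
    print(visible.shape, masked_targets.shape)                 # e.g. (26, 32, 3) and (38, 32, 3)
```

In the paper's asymmetric design, only the visible patches pass through the Transformer encoder, while mask tokens are shifted to the lightweight decoder, which reconstructs the coordinates of the masked patches; the sketch above stops at producing the visible/masked split that such an encoder-decoder would consume.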
