Instance-aware Dynamic Prompt Tuning for Pre-trained Point Cloud Models

Pre-trained point cloud models have found extensive applications in 3D understanding tasks like object classification and part segmentation. However, the prevailing strategy of full fine-tuning in downstream tasks leads to large per-task storage overhead for model parameters, which limits the efficiency of applying large-scale pre-trained models. Inspired by the recent success of visual prompt tuning (VPT), this paper explores prompt tuning on pre-trained point cloud models to pursue an elegant balance between performance and parameter efficiency. We find that while instance-agnostic static prompting, e.g., VPT, shows some efficacy in downstream transfer, it is vulnerable to the distribution diversity caused by various types of noise in real-world point cloud data. To overcome this limitation, we propose a novel Instance-aware Dynamic Prompt Tuning (IDPT) strategy for pre-trained point cloud models. The essence of IDPT is a dynamic prompt generation module that perceives the semantic prior features of each point cloud instance and generates adaptive prompt tokens to enhance the model's robustness. Notably, extensive experiments demonstrate that IDPT outperforms full fine-tuning on most tasks with a mere 7% of the trainable parameters, providing a promising solution to parameter-efficient learning for pre-trained point cloud models. Code is available at \url{https://github.com/zyh16143998882/ICCV23-IDPT}.
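To make the core idea concrete, below is a minimal PyTorch sketch of instance-aware dynamic prompt generation. It assumes a ViT-style point cloud backbone whose intermediate output is a sequence of patch tokens of shape (B, N, C); the pooling choice, the MLP structure, and all names (DynamicPromptGenerator, num_prompts, etc.) are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn


class DynamicPromptGenerator(nn.Module):
    """Generates per-instance prompt tokens from intermediate point features.

    Unlike static prompting (a single learned prompt shared by all inputs),
    the prompt here is conditioned on each point cloud instance.
    """

    def __init__(self, embed_dim: int = 384, num_prompts: int = 1):
        super().__init__()
        self.num_prompts = num_prompts
        # Small MLP mapping a pooled instance descriptor to prompt tokens.
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, embed_dim),
            nn.GELU(),
            nn.Linear(embed_dim, embed_dim * num_prompts),
        )

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, C) patch tokens from the frozen pre-trained encoder.
        pooled = tokens.max(dim=1).values                 # (B, C) descriptor
        prompts = self.mlp(pooled)                        # (B, C * P)
        return prompts.view(-1, self.num_prompts, tokens.size(-1))  # (B, P, C)


if __name__ == "__main__":
    # Usage sketch: prepend the dynamic prompts to the token sequence before
    # the remaining (frozen) transformer blocks. In a parameter-efficient
    # setup, only this generator and the task head would be trained.
    generator = DynamicPromptGenerator(embed_dim=384, num_prompts=1)
    patch_tokens = torch.randn(2, 64, 384)     # dummy encoder output
    prompt_tokens = generator(patch_tokens)    # (2, 1, 384)
    prompted = torch.cat([prompt_tokens, patch_tokens], dim=1)
    print(prompted.shape)                      # torch.Size([2, 65, 384])
```

Because the prompt is a function of the instance's own features rather than a fixed learned vector, it can adapt to the distribution shifts (e.g., noise patterns) that the abstract identifies as the weakness of static prompting.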