Parameter-Efficient Fine-Tuning in Spectral Domain for Point Cloud Learning

Recently, leveraging pre-training techniques to enhance point cloud models has become a hot research topic. However, existing approaches typically require full fine-tuning of pre-trained models to achieve satisfactory performance on downstream tasks, which is storage-intensive and computationally demanding. To address this issue, we propose a novel Parameter-Efficient Fine-Tuning (PEFT) method for point clouds, called PointGST (Point cloud Graph Spectral Tuning). PointGST freezes the pre-trained model and introduces a lightweight, trainable Point Cloud Spectral Adapter (PCSA) to fine-tune parameters in the spectral domain. The core idea is built on two observations: 1) the inner tokens from frozen models can be entangled (confused) in the spatial domain; 2) task-specific intrinsic information is important for transferring general knowledge to the downstream task. Specifically, PointGST transfers the point tokens from the spatial domain to the spectral domain, effectively de-correlating the confusion among tokens by separating them with orthogonal components. Moreover, the generated spectral basis encodes intrinsic information about the downstream point clouds, enabling more targeted tuning. As a result, PointGST facilitates the efficient transfer of general knowledge to downstream tasks while significantly reducing training costs. Extensive experiments on challenging point cloud datasets across various tasks demonstrate that PointGST not only outperforms its fully fine-tuned counterpart but also significantly reduces trainable parameters, making it a promising solution for efficient point cloud learning. It improves upon a solid baseline by +2.28%, +1.16%, and +2.78%, reaching 99.48%, 97.76%, and 96.18% on the ScanObjectNN OBJ_BG, OBJ_ONLY, and PB_T50_RS variants, respectively. This advancement establishes a new state-of-the-art while using only 0.67% of the trainable parameters.
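
To make the tuning mechanism described above more concrete, below is a minimal PyTorch sketch of graph-spectral adapter tuning: an orthogonal spectral basis is derived from a graph built on the downstream point cloud, frozen token features are projected into that basis, a small trainable bottleneck adapts them there, and the result is projected back. The helper names (graph_spectral_basis, SpectralAdapter), the kNN graph construction, and the bottleneck size are illustrative assumptions inferred from the abstract, not the paper's released implementation.

```python
# Minimal sketch of graph-spectral adapter tuning (PCSA-style), assuming a PyTorch backbone.
# The graph construction, module names, and sizes below are illustrative assumptions.
import torch
import torch.nn as nn


def graph_spectral_basis(coords: torch.Tensor, k: int = 8) -> torch.Tensor:
    """Build a kNN graph over point coordinates and return the orthonormal
    eigenbasis of its symmetric normalized Laplacian."""
    n = coords.shape[0]
    dist = torch.cdist(coords, coords)                         # (n, n) pairwise distances
    knn_idx = dist.topk(k + 1, largest=False).indices[:, 1:]   # drop the self-neighbor
    adj = torch.zeros(n, n, device=coords.device)
    adj.scatter_(1, knn_idx, 1.0)
    adj = torch.maximum(adj, adj.t())                          # symmetrize the graph
    deg = adj.sum(dim=1)
    d_inv_sqrt = torch.diag(deg.clamp(min=1e-6).pow(-0.5))
    lap = torch.eye(n, device=coords.device) - d_inv_sqrt @ adj @ d_inv_sqrt
    _, basis = torch.linalg.eigh(lap)                          # columns are orthonormal
    return basis                                                # (n, n)


class SpectralAdapter(nn.Module):
    """Lightweight trainable bottleneck applied in the spectral domain while
    the pre-trained backbone stays frozen."""

    def __init__(self, dim: int, bottleneck: int = 16):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)                          # start as an identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, tokens: torch.Tensor, basis: torch.Tensor) -> torch.Tensor:
        spectral = basis.t() @ tokens                           # spatial -> spectral
        spectral = spectral + self.up(torch.relu(self.down(spectral)))
        return basis @ spectral                                 # spectral -> spatial (residual kept)


# Usage: adapt frozen token features using the geometry of one downstream sample.
coords = torch.rand(128, 3)           # point coordinates
tokens = torch.rand(128, 384)         # frozen backbone token features
adapter = SpectralAdapter(dim=384)    # only these few parameters are trainable
adapted = adapter(tokens, graph_spectral_basis(coords))
print(adapted.shape)                  # torch.Size([128, 384])
```

Because the basis is orthonormal and the up-projection is zero-initialized, the adapter starts as an identity mapping and only gradually injects task-specific spectral corrections, which is the usual design choice for keeping frozen-backbone behavior intact at the start of tuning.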