Context and Geometry Aware Voxel Transformer for Semantic Scene Completion

Vision-based Semantic Scene Completion (SSC) has gained much attention due to its widespread applications in various 3D perception tasks. Existing sparse-to-dense approaches typically employ shared, context-independent queries across different input images, which fail to capture distinctions among inputs whose focal regions vary, and may result in undirected feature aggregation in cross-attention. Additionally, the absence of depth information can cause points projected onto the image plane to share the same 2D position or similar sampling locations in the feature map, resulting in depth ambiguity. In this paper, we present a novel context and geometry aware voxel transformer. It utilizes a context aware query generator to initialize context-dependent queries tailored to individual input images, effectively capturing their unique characteristics and aggregating information within the region of interest. Furthermore, it extends deformable cross-attention from 2D to 3D pixel space, enabling the differentiation of points with similar image coordinates based on their depth coordinates. Building upon this module, we introduce a neural network named CGFormer to achieve semantic scene completion. CGFormer also leverages multiple 3D representations (i.e., voxel and TPV) to boost the semantic and geometric representation abilities of the transformed 3D volume from both local and global perspectives. Experimental results demonstrate that CGFormer achieves state-of-the-art performance on the SemanticKITTI and SSCBench-KITTI-360 benchmarks, attaining mIoU scores of 16.87 and 20.05, as well as IoU scores of 45.99 and 48.07, respectively. Remarkably, CGFormer even outperforms approaches employing temporal images as inputs or much larger image backbone networks.
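
To make the "2D to 3D pixel space" extension concrete, the sketch below illustrates one way deformable cross-attention can be lifted to sample at (u, v, d) locations in a depth-augmented feature volume, so that points sharing the same image coordinates are separated by their depth coordinate. This is a minimal sketch assuming a PyTorch setting; the module name, tensor shapes, and hyperparameters (e.g., DeformableCrossAttention3D, num_points) are illustrative assumptions, not the paper's released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableCrossAttention3D(nn.Module):
    """Deformable cross-attention with 3D (u, v, d) sampling locations."""

    def __init__(self, dim=128, num_points=4):
        super().__init__()
        self.num_points = num_points
        # Predict a (du, dv, dd) offset and a scalar weight per sampling point.
        self.offsets = nn.Linear(dim, num_points * 3)
        self.weights = nn.Linear(dim, num_points)
        self.proj = nn.Linear(dim, dim)

    def forward(self, queries, ref_points, volume):
        """
        queries:    (B, Q, C)       voxel queries
        ref_points: (B, Q, 3)       normalized (u, v, d) in [-1, 1]; d is depth
        volume:     (B, C, D, H, W) image features lifted along discretized depth
        """
        B, Q, C = queries.shape
        P = self.num_points
        offsets = self.offsets(queries).view(B, Q, P, 3).tanh() * 0.1
        weights = self.weights(queries).softmax(dim=-1)             # (B, Q, P)
        # Sampling locations carry a depth coordinate, so two points that
        # project to the same (u, v) are distinguished by their d values.
        loc = (ref_points.unsqueeze(2) + offsets).clamp(-1, 1)      # (B, Q, P, 3)
        grid = loc.view(B, Q, P, 1, 3)                              # trailing dim for grid_sample
        sampled = F.grid_sample(volume, grid, align_corners=False)  # (B, C, Q, P, 1)
        sampled = sampled.squeeze(-1).permute(0, 2, 3, 1)           # (B, Q, P, C)
        out = (weights.unsqueeze(-1) * sampled).sum(dim=2)          # (B, Q, C)
        return self.proj(out)

# Usage sketch: 4096 voxel queries attending into a lifted feature volume.
attn = DeformableCrossAttention3D(dim=128, num_points=4)
q = torch.randn(2, 4096, 128)
ref = torch.rand(2, 4096, 3) * 2 - 1
vol = torch.randn(2, 128, 32, 48, 64)
out = attn(q, ref, vol)  # (2, 4096, 128)
```

The key difference from standard 2D deformable attention is that F.grid_sample here operates on a 5D feature volume with a 3-component grid, so the predicted offsets and reference points include a depth component rather than only image-plane coordinates.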