VoxFormer: Sparse Voxel Transformer for Camera-based 3D Semantic Scene Completion

Humans can easily imagine the complete 3D geometry of occluded objects and scenes. This appealing ability is vital for recognition and understanding. To enable such a capability in AI systems, we propose VoxFormer, a Transformer-based semantic scene completion framework that can output complete 3D volumetric semantics from only 2D images. Our framework adopts a two-stage design where we start from a sparse set of visible and occupied voxel queries from depth estimation, followed by a densification stage that generates dense 3D voxels from the sparse ones. A key idea of this design is that the visual features on 2D images correspond only to the visible scene structures rather than the occluded or empty spaces. Therefore, starting with the featurization and prediction of the visible structures is more reliable. Once we obtain the set of sparse queries, we apply a masked autoencoder design to propagate the information to all the voxels by self-attention. Experiments on SemanticKITTI show that VoxFormer outperforms the state of the art with a relative improvement of 20.0% in geometry and 18.1% in semantics, and reduces GPU memory during training to less than 16GB. Our code is available at https://github.com/NVlabs/VoxFormer.
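
The query-then-densify idea described above can be illustrated with a minimal PyTorch sketch: every voxel in the full grid is initialized with a learnable mask token (the masked-autoencoder-style design), the sparse visible-voxel queries overwrite their own grid slots, and self-attention then propagates information from the visible voxels to the occluded and empty ones. This is an assumed, simplified rendering, not the authors' implementation; the class name SparseToDenseCompletion, the use of a vanilla nn.TransformerEncoder, and all dimensions are hypothetical choices for illustration.

```python
import torch
import torch.nn as nn

class SparseToDenseCompletion(nn.Module):
    """Illustrative two-stage completion (hypothetical, not VoxFormer's code):
    sparse visible-voxel queries, then MAE-style densification via self-attention."""

    def __init__(self, num_voxels, embed_dim=128, num_heads=8, num_classes=20):
        super().__init__()
        # Learnable mask token stands in for voxels with no visible query (MAE-style).
        self.mask_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        # Positional embedding so each voxel slot keeps its 3D identity.
        self.pos_embed = nn.Parameter(torch.zeros(1, num_voxels, embed_dim))
        layer = nn.TransformerEncoderLayer(embed_dim, num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Per-voxel semantic classification head.
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, sparse_feats, sparse_idx):
        # sparse_feats: (B, M, C) features of the visible, occupied voxel queries
        # sparse_idx:   (B, M)    flat indices of those voxels in the full grid
        B, M, C = sparse_feats.shape
        N = self.pos_embed.shape[1]
        # Stage 2 input: every voxel starts as a copy of the mask token ...
        tokens = self.mask_token.expand(B, N, C).clone()
        # ... then the sparse queries overwrite their own voxel slots.
        tokens.scatter_(1, sparse_idx.unsqueeze(-1).expand(-1, -1, C), sparse_feats)
        tokens = tokens + self.pos_embed
        # Self-attention propagates visible evidence to occluded/empty voxels.
        return self.head(self.encoder(tokens))

# Example usage with made-up sizes: 100 visible queries in a 4096-voxel grid.
model = SparseToDenseCompletion(num_voxels=4096)
feats = torch.randn(2, 100, 128)
idx = torch.randint(0, 4096, (2, 100))
logits = model(feats, idx)  # (2, 4096, 20) per-voxel semantic logits
```

The sketch makes the abstract's key point concrete: only visible structures carry 2D image features, so only those voxels receive real features at the input, while everything else must be inferred during densification.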