3D-SIS: 3D Semantic Instance Segmentation of RGB-D Scans

We introduce 3D-SIS, a novel neural network architecture for 3D semantic instance segmentation in commodity RGB-D scans. The core idea of our method is to jointly learn from both geometric and color signal, thus enabling accurate instance predictions. Rather than operate solely on 2D frames, we observe that most computer vision applications have multi-view RGB-D input available, which we leverage to construct an approach for 3D instance segmentation that effectively fuses together these multi-modal inputs. Our network leverages high-resolution RGB input by associating 2D images with the volumetric grid based on the pose alignment of the 3D reconstruction. For each image, we first extract 2D features for each pixel with a series of 2D convolutions; we then backproject the resulting feature vector to the associated voxel in the 3D grid. This combination of 2D and 3D feature learning enables significantly more accurate object detection and instance segmentation than state-of-the-art alternatives. We show results on both synthetic and real-world public benchmarks, achieving an improvement of over 13 mAP on real-world data.
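The backprojection step described above can be illustrated with a minimal sketch: each pixel is unprojected into camera space using the depth map and intrinsics, transformed to world space via the scan's pose alignment, and its 2D feature is scattered into the corresponding voxel. This is a hypothetical, simplified implementation under standard pinhole-camera assumptions; the function name, averaging scheme, and all parameters are illustrative, not the paper's actual code.

```python
import numpy as np

def backproject_features(feat2d, depth, K, cam2world,
                         voxel_origin, voxel_size, grid_dims):
    """Scatter per-pixel 2D features into a 3D voxel grid (illustrative sketch).

    feat2d:     (H, W, C) per-pixel feature map from a 2D CNN
    depth:      (H, W) depth in meters, 0 where invalid
    K:          (3, 3) pinhole camera intrinsics
    cam2world:  (4, 4) camera-to-world pose from the scan alignment
    """
    H, W, C = feat2d.shape
    grid = np.zeros((*grid_dims, C), dtype=np.float32)
    counts = np.zeros(grid_dims, dtype=np.int64)

    # Pixel coordinates + depth -> 3D points in camera space
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth
    valid = z > 0
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=-1)  # (H, W, 4)

    # Camera space -> world space via the reconstruction's pose alignment
    pts_world = pts_cam @ cam2world.T  # (H, W, 4)

    # World space -> integer voxel indices in the volumetric grid
    idx = np.floor((pts_world[..., :3] - voxel_origin) / voxel_size).astype(int)
    in_bounds = valid & np.all((idx >= 0) & (idx < np.array(grid_dims)), axis=-1)

    # Average features that land in the same voxel
    for (i, j, k), f in zip(idx[in_bounds], feat2d[in_bounds]):
        grid[i, j, k] += f
        counts[i, j, k] += 1
    nz = counts > 0
    grid[nz] /= counts[nz][:, None]
    return grid
```

In practice, features from multiple RGB-D views are accumulated into the same grid this way, so the 3D network sees fused multi-view color features alongside the geometry.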