ScanQA: 3D Question Answering for Spatial Scene Understanding

We propose a new 3D spatial understanding task, 3D Question Answering (3D-QA). In the 3D-QA task, models receive visual information from the entire 3D scene of a rich RGB-D indoor scan and answer textual questions about that scene. Unlike in 2D question answering (VQA), conventional 2D-QA models struggle with spatial understanding of object alignment and directions and fail to identify the objects referred to in the textual questions of 3D-QA. We propose a baseline model for 3D-QA, named ScanQA, which learns a fused descriptor from 3D object proposals and encoded sentence embeddings. This learned descriptor correlates language expressions with the underlying geometric features of the 3D scan, facilitates the regression of 3D bounding boxes to localize the objects described in the questions, and outputs correct answers. We collected human-edited question-answer pairs with free-form answers that are grounded to 3D objects in each 3D scene. Our new ScanQA dataset contains over 40K question-answer pairs from 800 indoor scenes drawn from the ScanNet dataset. To the best of our knowledge, the proposed 3D-QA task is the first large-scale effort to perform object-grounded question answering in 3D environments.
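
To make the fusion idea concrete, the sketch below shows one plausible way to combine per-proposal 3D features with a pooled question embedding and attach answer-classification, grounding, and box-regression heads. It is a minimal illustration, not the authors' exact architecture: the module name, dimensions, attention-based fusion, and head layouts are assumptions.

```python
import torch
import torch.nn as nn

class FusedQADescriptor(nn.Module):
    """Illustrative fusion of 3D object-proposal features with a question
    embedding, followed by answer classification, proposal grounding, and
    3D box regression. Dimensions and design choices are assumptions."""

    def __init__(self, proposal_dim=256, question_dim=256, hidden_dim=256,
                 num_answers=1000):
        super().__init__()
        # Project both modalities into a shared space before fusion.
        self.proposal_proj = nn.Linear(proposal_dim, hidden_dim)
        self.question_proj = nn.Linear(question_dim, hidden_dim)
        # Question-conditioned attention over object proposals.
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads=4,
                                          batch_first=True)
        # Answer classification over a fixed answer vocabulary.
        self.answer_head = nn.Linear(hidden_dim, num_answers)
        # Per-proposal grounding score plus 3D box refinement (center + size).
        self.grounding_head = nn.Linear(hidden_dim, 1)
        self.box_head = nn.Linear(hidden_dim, 6)

    def forward(self, proposal_feats, question_feat):
        # proposal_feats: (B, K, proposal_dim) features of K object proposals
        # question_feat:  (B, question_dim) pooled sentence embedding
        p = self.proposal_proj(proposal_feats)               # (B, K, H)
        q = self.question_proj(question_feat).unsqueeze(1)   # (B, 1, H)
        # Fused descriptor: the question attends to the object proposals.
        fused, _ = self.attn(query=q, key=p, value=p)        # (B, 1, H)
        fused = fused.squeeze(1)                             # (B, H)
        answer_logits = self.answer_head(fused)              # (B, num_answers)
        # Condition each proposal on the fused descriptor for grounding/boxes.
        per_prop = p + fused.unsqueeze(1)                    # (B, K, H)
        grounding_logits = self.grounding_head(per_prop).squeeze(-1)  # (B, K)
        box_residuals = self.box_head(per_prop)              # (B, K, 6)
        return answer_logits, grounding_logits, box_residuals


if __name__ == "__main__":
    model = FusedQADescriptor()
    proposals = torch.randn(2, 32, 256)   # 32 object proposals per scene
    question = torch.randn(2, 256)        # encoded question embedding
    ans, grd, box = model(proposals, question)
    print(ans.shape, grd.shape, box.shape)  # (2, 1000) (2, 32) (2, 32, 6)
```

In this reading, the answer head treats answering as classification over a closed vocabulary, while the grounding and box heads tie the predicted answer back to a specific 3D bounding box in the scan, mirroring the object-grounded formulation of the task.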