
3D-LLM: Injecting the 3D World into Large Language Models

Yining Hong, Haoyu Zhen, Peihao Chen, Shuhong Zheng, Yilun Du, Zhenfang Chen, Chuang Gan
Abstract

Large language models (LLMs) and Vision-Language Models (VLMs) have been proven to excel at multiple tasks, such as commonsense reasoning. Powerful as these models can be, they are not grounded in the 3D physical world, which involves richer concepts such as spatial relationships, affordances, physics, layout, and so on. In this work, we propose to inject the 3D world into large language models and introduce a whole new family of 3D-LLMs. Specifically, 3D-LLMs can take 3D point clouds and their features as input and perform a diverse set of 3D-related tasks, including captioning, dense captioning, 3D question answering, task decomposition, 3D grounding, 3D-assisted dialog, navigation, and so on. Using three types of prompting mechanisms that we design, we are able to collect over 300k 3D-language data covering these tasks. To efficiently train 3D-LLMs, we first utilize a 3D feature extractor that obtains 3D features from rendered multi-view images. Then, we use 2D VLMs as our backbones to train our 3D-LLMs. By introducing a 3D localization mechanism, 3D-LLMs can better capture 3D spatial information. Experiments on ScanQA show that our model outperforms state-of-the-art baselines by a large margin (e.g., the BLEU-1 score surpasses the state-of-the-art score by 9%). Furthermore, experiments on our held-in datasets for 3D captioning, task composition, and 3D-assisted dialogue show that our model outperforms 2D VLMs. Qualitative examples also show that our model could perform more tasks beyond the scope of existing LLMs and VLMs. Project Page: https://vis-www.cs.umass.edu/3dllm/.
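The abstract describes a pipeline in which features from rendered multi-view images are lifted into per-point 3D features, augmented by a 3D localization mechanism, and fed to a 2D VLM backbone. The sketch below illustrates that flow in PyTorch; it is not the authors' implementation, and every module name, dimension, and design choice here (mean pooling over views, an MLP position embedding on xyz coordinates, a placeholder backbone) is an illustrative assumption.

```python
# Minimal sketch (assumed, not the authors' code) of the pipeline described
# in the abstract: per-view 2D features -> per-point 3D features -> a
# location-aware embedding -> a 2D VLM backbone that consumes point features.
import torch
import torch.nn as nn


class PointFeatureLifter(nn.Module):
    """Aggregates per-view 2D features into per-point 3D features.

    Assumes `view_feats` are already back-projected so each point carries one
    feature vector per view; a real pipeline would use camera poses and depth.
    """

    def forward(self, view_feats: torch.Tensor) -> torch.Tensor:
        # view_feats: (num_points, num_views, feat_dim) -> (num_points, feat_dim)
        return view_feats.mean(dim=1)


class Localization3D(nn.Module):
    """3D localization mechanism, sketched here as a learned xyz embedding."""

    def __init__(self, feat_dim: int):
        super().__init__()
        self.pos_mlp = nn.Sequential(
            nn.Linear(3, feat_dim), nn.ReLU(), nn.Linear(feat_dim, feat_dim)
        )

    def forward(self, feats: torch.Tensor, xyz: torch.Tensor) -> torch.Tensor:
        # Add a position embedding so the backbone can reason about layout.
        return feats + self.pos_mlp(xyz)


class ThreeDLLM(nn.Module):
    """Wraps a (hypothetical) 2D VLM backbone to accept 3D point features."""

    def __init__(self, vlm_backbone: nn.Module, feat_dim: int = 768):
        super().__init__()
        self.lift = PointFeatureLifter()
        self.loc = Localization3D(feat_dim)
        self.backbone = vlm_backbone  # e.g., a BLIP-2-style VLM (assumed API)

    def forward(self, view_feats, xyz, prompt_tokens):
        point_feats = self.loc(self.lift(view_feats), xyz)
        # The backbone consumes point features in place of 2D image patches.
        return self.backbone(point_feats, prompt_tokens)
```

The key design point the abstract emphasizes is reuse: because the 3D features are built from rendered 2D views, they live in the same feature space a pretrained 2D VLM expects, so the backbone can be trained on 3D tasks without learning a 3D encoder from scratch.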
