ShapeLLM: Universal 3D Object Understanding for Embodied Interaction

Qi, Zekun; Dong, Runpei; Zhang, Shaochen; Geng, Haoran; Han, Chunrui; Ge, Zheng; Yi, Li; Ma, Kaisheng
Abstract

This paper presents ShapeLLM, the first 3D Multimodal Large Language Model (LLM) designed for embodied interaction, exploring universal 3D object understanding with 3D point clouds and languages. ShapeLLM is built upon an improved 3D encoder by extending ReCon to ReCon++, which benefits from multi-view image distillation for enhanced geometry understanding. By utilizing ReCon++ as the 3D point cloud input encoder for LLMs, ShapeLLM is trained on constructed instruction-following data and tested on our newly human-curated benchmark, 3D MM-Vet. ReCon++ and ShapeLLM achieve state-of-the-art performance in 3D geometry understanding and language-unified 3D interaction tasks, such as embodied visual grounding. Project page: https://qizekun.github.io/shapellm/
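
The abstract describes a pipeline in which the ReCon++ encoder turns a point cloud into feature tokens that are fed to the LLM. A minimal PyTorch sketch of the projection step, mapping encoder tokens into the LLM's embedding space, is shown below; the module name, MLP design, and all dimensions are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class PointCloudProjector(nn.Module):
    """Maps 3D encoder output tokens into the LLM's embedding space
    (a common design in multimodal LLMs; details here are assumed)."""
    def __init__(self, enc_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(enc_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, point_tokens: torch.Tensor) -> torch.Tensor:
        # point_tokens: (batch, num_tokens, enc_dim) from the 3D encoder
        return self.proj(point_tokens)

# Stand-in for ReCon++ output: any encoder that turns an (N, 3) point
# cloud into a token sequence would slot in here.
encoder_out = torch.randn(1, 513, 1024)   # e.g. 512 patch tokens + 1 global token
projector = PointCloudProjector()
visual_embeds = projector(encoder_out)    # (1, 513, 4096)
# These embeddings would be prepended to the text token embeddings
# before the LLM's transformer layers process the combined sequence.
print(visual_embeds.shape)
```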
