3D Object Captioning on Objaverse
Metrics

- Sentence-BERT
- Correctness
- GPT-4
- Hallucination
- Precision
- SimCSE

Higher values are better for all metrics except Hallucination, where lower is better.
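The Sentence-BERT and SimCSE columns are embedding-based similarity scores between a model's generated caption and a reference caption. Below is a minimal sketch of how such a similarity can be computed with the sentence-transformers library; the encoder name, example captions, and the 0-100 scaling are illustrative assumptions, not the benchmark's exact evaluation setup.

```python
# Minimal sketch (assumed setup, not the official evaluation code):
# cosine similarity between a generated caption and a reference caption,
# in the spirit of the Sentence-BERT / SimCSE columns.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-mpnet-base-v2")  # illustrative stand-in encoder

generated = "A small wooden chair with four legs and a curved backrest."
reference = "A wooden chair featuring a curved back and four slender legs."

# Encode both captions and compute cosine similarity, scaled to 0-100
# to match the range of the leaderboard values.
embeddings = model.encode([generated, reference], convert_to_tensor=True)
similarity = util.cos_sim(embeddings[0], embeddings[1]).item() * 100
print(f"Embedding similarity: {similarity:.2f}")
```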
Results
Performance of various models on this benchmark:
| Model Name | Sentence-BERT | Correctness | GPT-4 | Hallucination | Precision | SimCSE | Paper Title | Repository |
|---|---|---|---|---|---|---|---|---|
| MiniGPT-3D | 49.54 | 3.50 | 57.06 | 0.71 | 83.14 | 51.39 | MiniGPT-3D: Efficiently Aligning 3D Point Clouds with Large Language Models using 2D Priors | |
| 3D-LLM | 44.48 | 1.77 | 33.42 | 1.16 | 60.39 | 43.68 | 3D-LLM: Injecting the 3D World into Large Language Models | |
| PointLLM-7B V1.2 | 47.47 | 3.04 | 44.85 | 0.66 | 82.14 | 48.55 | PointLLM: Empowering Large Language Models to Understand Point Clouds | |
| PointLLM-13B V1.2 | 47.91 | 3.10 | 48.15 | 0.84 | 78.75 | 49.12 | PointLLM: Empowering Large Language Models to Understand Point Clouds | |
| ShapeLLM-13B | 48.52 | - | 48.94 | - | - | 49.98 | ShapeLLM: Universal 3D Object Understanding for Embodied Interaction | |
| ShapeLLM-7B | 48.20 | - | 46.92 | - | - | 49.23 | ShapeLLM: Universal 3D Object Understanding for Embodied Interaction | |
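The GPT-4 column reflects a judge-based evaluation of caption quality rather than embedding similarity. The sketch below illustrates a generic LLM-as-judge call with the OpenAI Python client; the prompt wording, 0-100 scale, and model choice are assumptions for illustration and do not reproduce the benchmark's actual rubric.

```python
# Hedged sketch of an LLM-as-judge score in the spirit of the GPT-4 column.
# The prompt text, scoring scale, and model name are illustrative assumptions,
# not the benchmark's actual evaluation protocol.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def judge_caption(generated: str, reference: str) -> float:
    """Ask a GPT-4 model how well the generated caption matches the reference."""
    prompt = (
        "Rate from 0 to 100 how well the candidate caption describes the same "
        "3D object as the reference caption. Reply with only the number.\n"
        f"Reference: {reference}\n"
        f"Candidate: {generated}"
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return float(response.choices[0].message.content.strip())


# Example usage (hypothetical captions):
# score = judge_caption("A wooden chair with a curved back.",
#                       "A curved-back wooden chair with four legs.")
```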