3D Question Answering (3D-QA) on ScanQA (test w/ objects)
Metrics
- BLEU-1
- BLEU-4
- CIDEr
- Exact Match
- METEOR
- ROUGE
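
These are standard text-generation metrics computed between predicted answers and reference answers. Below is a minimal sketch of how they are typically computed with the pycocoevalcap package; the question IDs and answer strings are illustrative placeholders, and the Exact Match normalization (lowercasing and whitespace stripping) is an assumption, not necessarily the benchmark's official protocol.

```python
# Sketch: ScanQA-style answer metrics via pycocoevalcap (pip install pycocoevalcap).
# Inputs are dicts mapping question IDs to lists of answer strings; ground truth
# may contain multiple references per question. Example data is illustrative.
from pycocoevalcap.bleu.bleu import Bleu
from pycocoevalcap.cider.cider import Cider
from pycocoevalcap.meteor.meteor import Meteor  # requires Java on PATH
from pycocoevalcap.rouge.rouge import Rouge

gts = {"q0": ["brown wooden chair", "wooden chair"], "q1": ["on the desk"]}
res = {"q0": ["wooden chair"], "q1": ["on the table"]}

bleu, _ = Bleu(4).compute_score(gts, res)   # list [BLEU-1, ..., BLEU-4]
cider, _ = Cider().compute_score(gts, res)
meteor, _ = Meteor().compute_score(gts, res)
rouge, _ = Rouge().compute_score(gts, res)  # ROUGE-L

# Exact Match: prediction equals any reference after simple normalization
# (an assumed normalization, for illustration).
em = sum(
    res[q][0].strip().lower() in {r.strip().lower() for r in gts[q]}
    for q in gts
) / len(gts)

print(f"BLEU-1 {bleu[0]:.4f}  BLEU-4 {bleu[3]:.4f}  CIDEr {cider:.4f}")
print(f"METEOR {meteor:.4f}  ROUGE-L {rouge:.4f}  EM {em:.4f}")
```

Note that pycocoevalcap expects whitespace-tokenized strings; evaluation pipelines often run a tokenizer such as PTBTokenizer over both dicts first.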
Results

Performance of various models on this benchmark is summarized below; higher is better for every metric. Rows that share a paper identifier correspond to different model variants reported in that paper.

Comparison Table
| Model | BLEU-1 | BLEU-4 | CIDEr | Exact Match | METEOR | ROUGE |
|---|---|---|---|---|---|---|
| scanqa-3d-question-answering-for-spatial | 31.56 | 12.04 | 67.29 | 23.45 | 13.55 | 34.34 |
| scanqa-3d-question-answering-for-spatial | 27.85 | 7.46 | 57.56 | 20.56 | 11.97 | 30.68 |
| 3d-llm-injecting-the-3d-world-into-large | 32.6 | 8.4 | 65.6 | 23.2 | 13.5 | 34.8 |
| 3d-llm-injecting-the-3d-world-into-large | 38.3 | 11.6 | 69.6 | 19.1 | 14.9 | 35.3 |
| 3d-llm-injecting-the-3d-world-into-large | 37.3 | 10.7 | 67.1 | 19.1 | 14.3 | 34.5 |
| scanqa-3d-question-answering-for-spatial | 29.46 | 6.08 | 58.23 | 19.71 | 12.07 | 30.97 |
| towards-learning-a-generalist-model-for | 39.73 | 13.90 | 80.77 | 26.27 | 16.56 | 40.23 |
| bridging-the-gap-between-2d-and-3d-visual | 34.49 | 24.06 | 83.75 | 31.29 | 16.51 | 43.26 |
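
Because the table mixes several variants per paper, it can be convenient to re-rank the rows programmatically. Here is a small sketch using pandas, with the numbers copied verbatim from the table above; ranking by CIDEr is our illustrative choice here, not a primary metric prescribed by the benchmark.

```python
# Sketch: load the comparison table above and rank rows by CIDEr.
import pandas as pd

rows = [
    ("scanqa-3d-question-answering-for-spatial", 31.56, 12.04, 67.29, 23.45, 13.55, 34.34),
    ("scanqa-3d-question-answering-for-spatial", 27.85, 7.46, 57.56, 20.56, 11.97, 30.68),
    ("3d-llm-injecting-the-3d-world-into-large", 32.6, 8.4, 65.6, 23.2, 13.5, 34.8),
    ("3d-llm-injecting-the-3d-world-into-large", 38.3, 11.6, 69.6, 19.1, 14.9, 35.3),
    ("3d-llm-injecting-the-3d-world-into-large", 37.3, 10.7, 67.1, 19.1, 14.3, 34.5),
    ("scanqa-3d-question-answering-for-spatial", 29.46, 6.08, 58.23, 19.71, 12.07, 30.97),
    ("towards-learning-a-generalist-model-for", 39.73, 13.90, 80.77, 26.27, 16.56, 40.23),
    ("bridging-the-gap-between-2d-and-3d-visual", 34.49, 24.06, 83.75, 31.29, 16.51, 43.26),
]
cols = ["model", "BLEU-1", "BLEU-4", "CIDEr", "Exact Match", "METEOR", "ROUGE"]
df = pd.DataFrame(rows, columns=cols)

# Top entries by CIDEr; swap in any other column to rank by a different metric.
print(df.sort_values("CIDEr", ascending=False).head(3))
```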