3D Question Answering (3D-QA) on ScanQA Test W
Metrics
BLEU-1
BLEU-4
CIDEr
Exact Match
METEOR
ROUGE
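For reference, below is a minimal sketch of how two of these metrics can be computed for a single predicted answer against its reference answers. This is not the benchmark's official evaluation code (ScanQA-style leaderboards typically rely on toolkits such as pycocoevalcap for CIDEr, METEOR, and ROUGE); it only illustrates Exact Match and BLEU-1/BLEU-4 using nltk, and the text normalization shown here is an assumption.

```python
# Illustrative sketch, not the official ScanQA evaluation script.
# Assumes nltk is installed; normalization (lowercasing, whitespace split)
# is a simplification and may differ from the benchmark's own scripts.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction


def exact_match(prediction: str, references: list[str]) -> float:
    """1.0 if the normalized prediction matches any reference exactly, else 0.0."""
    norm = prediction.strip().lower()
    return float(any(norm == ref.strip().lower() for ref in references))


def bleu_scores(prediction: str, references: list[str]) -> tuple[float, float]:
    """Return (BLEU-1, BLEU-4) for one prediction against its references."""
    hyp = prediction.lower().split()
    refs = [ref.lower().split() for ref in references]
    smooth = SmoothingFunction().method1  # avoid zero scores on very short answers
    bleu1 = sentence_bleu(refs, hyp, weights=(1, 0, 0, 0), smoothing_function=smooth)
    bleu4 = sentence_bleu(refs, hyp, weights=(0.25, 0.25, 0.25, 0.25), smoothing_function=smooth)
    return bleu1, bleu4


if __name__ == "__main__":
    refs = ["brown wooden chair", "wooden chair"]
    pred = "wooden chair"
    print("Exact Match:", exact_match(pred, refs))
    print("BLEU-1 / BLEU-4:", bleu_scores(pred, refs))
```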
Results
Performance results of various models on this benchmark
Model name | BLEU-1 | BLEU-4 | CIDEr | Exact Match | METEOR | ROUGE | Paper Title | Repository |
---|---|---|---|---|---|---|---|---|
ScanQA | 31.56 | 12.04 | 67.29 | 23.45 | 13.55 | 34.34 | ScanQA: 3D Question Answering for Spatial Scene Understanding | |
ScanRefer+MCAN | 27.85 | 7.46 | 57.56 | 20.56 | 11.97 | 30.68 | ScanQA: 3D Question Answering for Spatial Scene Understanding | |
3D-LLM (flamingo) | 32.6 | 8.4 | 65.6 | 23.2 | 13.5 | 34.8 | 3D-LLM: Injecting the 3D World into Large Language Models | |
3D-LLM (BLIP2-flant5) | 38.3 | 11.6 | 69.6 | 19.1 | 14.9 | 35.3 | 3D-LLM: Injecting the 3D World into Large Language Models | |
3D-LLM (BLIP2-opt) | 37.3 | 10.7 | 67.1 | 19.1 | 14.3 | 34.5 | 3D-LLM: Injecting the 3D World into Large Language Models | |
VoteNet+MCAN | 29.46 | 6.08 | 58.23 | 19.71 | 12.07 | 30.97 | ScanQA: 3D Question Answering for Spatial Scene Understanding | |
NaviLLM | 39.73 | 13.90 | 80.77 | 26.27 | 16.56 | 40.23 | Towards Learning a Generalist Model for Embodied Navigation | |
BridgeQA | 34.49 | 24.06 | 83.75 | 31.29 | 16.51 | 43.26 | Bridging the Gap between 2D and 3D Visual Question Answering: A Fusion Approach for 3D VQA | |