Question Answering on MapEval-API
Evaluation Metric
Accuracy (%)
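Accuracy here is the share of benchmark questions answered correctly, expressed as a percentage. Below is a minimal sketch of that computation; the answer format (option letters) is an illustrative assumption, not MapEval's actual evaluation harness.

```python
# Minimal sketch: Accuracy (%) for a multiple-choice QA benchmark.
# The option-letter answer format is assumed for illustration only.
def accuracy_percent(predictions, gold_answers):
    """Percentage of questions where the predicted option matches the gold option."""
    correct = sum(p == g for p, g in zip(predictions, gold_answers))
    return 100.0 * correct / len(gold_answers)

# Example: 2 of 3 answers correct -> 66.67%
print(accuracy_percent(["B", "C", "A"], ["B", "C", "D"]))
```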
Evaluation Results
Performance results of each model on this benchmark.
| Model | Accuracy (%) | Paper Title | Repository |
|---|---|---|---|
| GPT-3.5-Turbo (Chameleon) | 49.33 | MapEval: A Map-Based Evaluation of Geo-Spatial Reasoning in Foundation Models | - |
| Claude-3.5-Sonnet (ReAct) | 64.00 | MapEval: A Map-Based Evaluation of Geo-Spatial Reasoning in Foundation Models | - |