Question Answering on MapEval-API
Metrics
Accuracy (%)
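Accuracy here is the share of questions a model answers correctly, reported as a percentage. A minimal sketch of that computation, assuming predictions and gold answers are simple option labels (the data below is hypothetical, not from the benchmark):

```python
def accuracy(predictions, references):
    """Percentage of predictions that exactly match the reference answer."""
    correct = sum(p == r for p, r in zip(predictions, references))
    return 100.0 * correct / len(references)

# Hypothetical model outputs vs. ground-truth answers
preds = ["B", "C", "A", "D"]
golds = ["B", "A", "A", "D"]
print(f"{accuracy(preds, golds):.2f}")  # → 75.00
```

A score of 49.33 on this leaderboard therefore means roughly half of the benchmark questions were answered correctly.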
Results
Performance results of various models on this benchmark
| Model Name | Accuracy (%) | Paper Title | Repository |
|---|---|---|---|
| GPT-3.5-Turbo (Chameleon) | 49.33 | MapEval: A Map-Based Evaluation of Geo-Spatial Reasoning in Foundation Models | - |
| Claude-3.5-Sonnet (ReAct) | 64.00 | MapEval: A Map-Based Evaluation of Geo-Spatial Reasoning in Foundation Models | - |