Question Answering on MapEval-API
Metrics
Accuracy (%)
Results
Performance results of different models on this benchmark
| Model Name | Accuracy (%) | Paper Title | Repository |
|---|---|---|---|
| GPT-3.5-Turbo (Chameleon) | 49.33 | MapEval: A Map-Based Evaluation of Geo-Spatial Reasoning in Foundation Models | - |
| Claude-3.5-Sonnet (ReAct) | 64.00 | MapEval: A Map-Based Evaluation of Geo-Spatial Reasoning in Foundation Models | - |
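As a point of reference, the Accuracy (%) scores above are the standard exact-match accuracy for question answering, expressed as a percentage. A minimal sketch of that computation follows; the function name and the sample answer lists are illustrative, not taken from the MapEval codebase.

```python
def accuracy_percent(predictions, references):
    """Percentage of predictions that exactly match the reference answers."""
    if not references:
        raise ValueError("references must be non-empty")
    correct = sum(p == r for p, r in zip(predictions, references))
    return 100.0 * correct / len(references)

# Hypothetical example: 3 of 4 answers match the gold labels.
preds = ["B", "A", "D", "C"]
golds = ["B", "C", "D", "C"]
print(f"{accuracy_percent(preds, golds):.2f}")  # → 75.00
```

On this benchmark, a score of 64.00 therefore means the model answered 64% of the evaluated questions correctly.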