Question Answering on DROP
Metrics
Accuracy
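Accuracy here is the fraction of predictions that exactly match the reference answer. A minimal sketch of how such a score could be computed (the function name `accuracy` and the exact-match criterion are illustrative assumptions, not the benchmark's official scoring script):

```python
def accuracy(predictions, references):
    """Fraction of predictions that exactly match their reference answer.

    predictions, references: equal-length sequences of answer strings.
    """
    if len(predictions) != len(references):
        raise ValueError("predictions and references must have equal length")
    if not references:
        raise ValueError("cannot score an empty example list")
    # Count exact string matches, pair by pair.
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)


# Two of three answers match the reference exactly.
score = accuracy(["5", "Paris", "12"], ["5", "Paris", "7"])
print(round(score * 100, 1))  # prints 66.7
```

Real DROP evaluation typically also normalizes answers (whitespace, articles, number formats) before comparison; that step is omitted here for brevity.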
Results
Performance results of various models on this benchmark
Comparison table
| Model name | Accuracy |
|---|---|
| large-language-models-can-self-improve | 78.2 |
| large-language-models-can-self-improve | 83 |
| large-language-models-can-self-improve | 71.7 |
| large-language-models-can-self-improve | 60 |
| large-language-models-can-self-improve | 70.6 |
| large-language-models-can-self-improve | 76.2 |