Multiple Choice Question Answering (MCQA) on 29
Metrics
Accuracy
Results
Performance results of various models on this benchmark
Comparison table
| Model name | Accuracy |
|---|---|
| palm-2-technical-report-1 | 68.8 |
| bloomberggpt-a-large-language-model-for | 42 |
| scaling-language-models-methods-analysis-1 | 51.1 |
| bloomberggpt-a-large-language-model-for | 50 |
| bloomberggpt-a-large-language-model-for | 62.4 |
| palm-2-technical-report-1 | 91.2 |
| bloomberggpt-a-large-language-model-for | 42 |
| bloomberggpt-a-large-language-model-for | 45.2 |
| training-compute-optimal-large-language | 52.6 |