Multiple Choice Question Answering (MCQA) on 28
Metrics
Accuracy
Results
Performance results of various models on this benchmark
Comparison table
| Model name | Accuracy |
|---|---|
| bloomberggpt-a-large-language-model-for | 86.4 |
| bloomberggpt-a-large-language-model-for | 91.2 |
| scaling-language-models-methods-analysis-1 | 50.5 |
| bloomberggpt-a-large-language-model-for | 91.2 |
| training-compute-optimal-large-language | 75.6 |
| palm-2-technical-report-1 | 93.6 |
| bloomberggpt-a-large-language-model-for | 90.4 |
| palm-2-technical-report-1 | 94.4 |
| bloomberggpt-a-large-language-model-for | 87.2 |