Common Sense Reasoning On Big Bench
Metrics
Accuracy
Results
Performance results of various models on this benchmark
Comparison Table
| Model name | Accuracy |
|---|---|
| bloomberggpt-a-large-language-model-for | 40.4 |
| scaling-language-models-methods-analysis-1 | 45.5 |
| bloomberggpt-a-large-language-model-for | 40.8 |
| bloomberggpt-a-large-language-model-for | 34 |
| palm-2-technical-report-1 | 77.6 |
| bloomberggpt-a-large-language-model-for | 60.8 |
| palm-2-technical-report-1 | 78.8 |
| bloomberggpt-a-large-language-model-for | 40.4 |
| training-compute-optimal-large-language | 54.7 |