Multiple Choice Question Answering (MCQA) on 27
Metrics
Accuracy
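Accuracy here is simply the fraction of questions for which the model's chosen option matches the gold option, reported on a 0-100 scale. A minimal sketch (the prediction and gold lists below are illustrative, not benchmark data):

```python
def accuracy(predictions, gold):
    """Fraction of predictions equal to the gold labels, on a 0-100 scale."""
    if len(predictions) != len(gold):
        raise ValueError("predictions and gold must have the same length")
    correct = sum(p == g for p, g in zip(predictions, gold))
    return 100.0 * correct / len(gold)

# Example: 3 of 4 answer choices match the gold labels.
preds = ["A", "C", "B", "D"]
golds = ["A", "C", "D", "D"]
print(accuracy(preds, golds))  # 75.0
```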
Results
Performance of various models on this benchmark:
| Model Name | Accuracy | Paper Title | Repository |
|---|---|---|---|
| Gopher-280B (few-shot, k=5) | 51.7 | Scaling Language Models: Methods, Analysis & Insights from Training Gopher | |
| BLOOM 176B (few-shot, k=3) | 92 | BloombergGPT: A Large Language Model for Finance | - |
| OPT 66B (few-shot, k=3) | 91.6 | BloombergGPT: A Large Language Model for Finance | - |
| BloombergGPT (few-shot, k=3) | 92 | BloombergGPT: A Large Language Model for Finance | - |
| PaLM 2 (few-shot, k=3, Direct) | 84.8 | PaLM 2 Technical Report | |
| GPT-NeoX (few-shot, k=3) | 92 | BloombergGPT: A Large Language Model for Finance | - |
| PaLM 540B (few-shot, k=3) | 70.8 | BloombergGPT: A Large Language Model for Finance | - |
| Chinchilla-70B (few-shot, k=5) | 54.2 | Training Compute-Optimal Large Language Models | |
| PaLM 2 (few-shot, k=3, CoT) | 82.4 | PaLM 2 Technical Report | |