Logical Reasoning on BIG-bench
Metrics
Accuracy
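Accuracy here is the fraction of test questions for which the model's chosen answer matches the reference answer, reported as a percentage. A minimal sketch of that computation follows; the data and function names are illustrative, not the official BIG-bench scoring code.

```python
from typing import List

def accuracy(predictions: List[str], targets: List[str]) -> float:
    """Percentage of examples where the predicted choice matches the target."""
    if not targets:
        raise ValueError("targets must be non-empty")
    correct = sum(p == t for p, t in zip(predictions, targets))
    return 100.0 * correct / len(targets)

# Hypothetical example: three multiple-choice logical-reasoning questions.
preds = ["B", "A", "C"]
golds = ["B", "A", "D"]
print(f"Accuracy: {accuracy(preds, golds):.1f}%")  # Accuracy: 66.7%
```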
Results
Performance of various models on this benchmark, sorted by accuracy:
Model Name | Accuracy (%) | Paper Title | Repository
---|---|---|---
PaLM 2 (few-shot, k=3, CoT) | 91.2 | PaLM 2 Technical Report | -
PaLM 2 (few-shot, k=3, Direct) | 61.2 | PaLM 2 Technical Report | -
Chinchilla-70B (few-shot, k=5) | 59.7 | Training Compute-Optimal Large Language Models | -
Gopher-280B (few-shot, k=5) | 49.2 | Scaling Language Models: Methods, Analysis & Insights from Training Gopher | -
PaLM 540B (few-shot, k=3) | 38.0 | BloombergGPT: A Large Language Model for Finance | -
BLOOM 176B (few-shot, k=3) | 36.8 | BloombergGPT: A Large Language Model for Finance | -
BloombergGPT (few-shot, k=3) | 34.8 | BloombergGPT: A Large Language Model for Finance | -
OPT 66B (few-shot, k=3) | 31.2 | BloombergGPT: A Large Language Model for Finance | -
GPT-NeoX (few-shot, k=3) | 26.0 | BloombergGPT: A Large Language Model for Finance | -
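The "few-shot, k=N" annotations indicate that N worked examples are placed in the prompt ahead of the test question (the "CoT" variant additionally includes chain-of-thought reasoning). A rough sketch of how such a prompt could be assembled; the helper function and exemplar text are hypothetical, not the actual BIG-bench prompt format.

```python
def build_few_shot_prompt(exemplars, question, k=3):
    """Concatenate k exemplar Q/A pairs ahead of the test question."""
    parts = [f"Q: {q}\nA: {a}" for q, a in exemplars[:k]]
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

# Illustrative exemplars for a logical-reasoning task.
exemplars = [
    ("All cats are mammals. Tom is a cat. Is Tom a mammal?", "Yes"),
    ("No birds are fish. A robin is a bird. Is a robin a fish?", "No"),
    ("If it rains, the ground is wet. It rains. Is the ground wet?", "Yes"),
]
print(build_few_shot_prompt(
    exemplars,
    "All squares are rectangles. X is a square. Is X a rectangle?",
))
```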