Common Sense Reasoning On Big Bench Causal
Metrics
Accuracy
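As a point of reference, accuracy on a BIG-bench-style multiple-choice task is simply the fraction of examples where the model's chosen answer matches the gold label. A minimal sketch (an illustrative helper, not the official evaluation harness):

```python
def accuracy(predictions, gold_labels):
    """Fraction of predictions that match the gold labels (0.0 to 1.0)."""
    assert len(predictions) == len(gold_labels)
    correct = sum(p == g for p, g in zip(predictions, gold_labels))
    return correct / len(gold_labels)

# Example: 2 of 3 answers correct -> accuracy of 2/3 (66.7 when reported x100)
score = accuracy(["Yes", "No", "No"], ["Yes", "No", "Yes"])
```

The scores in the table below are this ratio multiplied by 100.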
Results
Performance results of various models on this benchmark
Model Name | Accuracy | Paper Title | Repository
---|---|---|---
GPT-NeoX 20B (few-shot, k=3) | 52.41 | BloombergGPT: A Large Language Model for Finance | -
PaLM 2 (few-shot, k=3, Direct) | 62.0 | PaLM 2 Technical Report | -
BloombergGPT 50B (few-shot, k=3) | 49.73 | BloombergGPT: A Large Language Model for Finance | -
OPT 66B (few-shot, k=3) | 51.87 | BloombergGPT: A Large Language Model for Finance | -
PaLM 2 (few-shot, k=3, CoT) | 58.8 | PaLM 2 Technical Report | -
Chinchilla-70B (few-shot, k=5) | 57.4 | Training Compute-Optimal Large Language Models | -
PaLM 540B (few-shot, k=3) | 61.0 | BloombergGPT: A Large Language Model for Finance | -
BLOOM 176B (few-shot, k=3) | 51.87 | BloombergGPT: A Large Language Model for Finance | -
Gopher-280B (few-shot, k=5) | 50.8 | Scaling Language Models: Methods, Analysis & Insights from Training Gopher | -