Common Sense Reasoning on BIG-bench
Evaluation Metric
Accuracy
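For reference, the sketch below shows how accuracy is typically computed for a multiple-choice benchmark of this kind: the fraction of examples where the model's predicted answer matches the gold label, reported as a percentage. The function and the sample labels are illustrative assumptions, not taken from the evaluation code of any cited paper.

```python
def accuracy(predictions, references):
    """Fraction of examples where the predicted answer matches the gold label."""
    assert len(predictions) == len(references), "prediction/reference length mismatch"
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)

# Hypothetical example: 3 of 4 answers correct -> 75.0 (percent)
print(accuracy(["B", "A", "D", "C"], ["B", "A", "D", "A"]) * 100)
```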
Evaluation Results
Performance results of each model on this benchmark:
Model Name | Accuracy (%) | Paper Title | Repository |
---|---|---|---|
BLOOM 176B (few-shot, k=3) | 40.4 | BloombergGPT: A Large Language Model for Finance | - |
Gopher-280B (few-shot, k=5) | 45.5 | Scaling Language Models: Methods, Analysis & Insights from Training Gopher | - |
GPT-NeoX 20B (few-shot, k=3) | 40.8 | BloombergGPT: A Large Language Model for Finance | - |
BloombergGPT 50B (few-shot, k=3) | 34 | BloombergGPT: A Large Language Model for Finance | - |
PaLM 2 (few-shot, k=3, CoT) | 77.6 | PaLM 2 Technical Report | - |
PaLM 540B (few-shot, k=3) | 60.8 | BloombergGPT: A Large Language Model for Finance | - |
PaLM 2 (few-shot, k=3, Direct) | 78.8 | PaLM 2 Technical Report | - |
OPT 66B (few-shot, k=3) | 40.4 | BloombergGPT: A Large Language Model for Finance | - |
Chinchilla-70B (few-shot, k=5) | 54.7 | Training Compute-Optimal Large Language Models | - |