Question Answering on OBQA (OpenBookQA)
Evaluation Metric
Accuracy
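Accuracy here is the fraction of questions for which the model selects the gold answer choice. A minimal sketch of the computation, assuming predictions and gold answers are given as choice letters (the helper name and sample data are hypothetical, not from the benchmark itself):

```python
def accuracy(predictions, labels):
    """Fraction of examples where the predicted choice matches the gold answer."""
    if not labels:
        raise ValueError("label list must be non-empty")
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Hypothetical OBQA-style 4-way multiple-choice outputs (choice letters A-D).
preds = ["A", "C", "B", "D", "A"]
golds = ["A", "B", "B", "D", "C"]
print(f"Accuracy: {accuracy(preds, golds) * 100:.1f}")  # 3 of 5 correct
```

The leaderboard values below report this metric as a percentage.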
Evaluation Results
Performance of each model on this benchmark:
| Model | Accuracy | Paper Title | Repository |
|---|---|---|---|
| LLaMA 7B (zero-shot) | 57.2 | LLaMA: Open and Efficient Foundation Language Models | |
| LLaMA 13B (zero-shot) | 56.4 | LLaMA: Open and Efficient Foundation Language Models | |
| FLAN 137B (few-shot, k=16) | 78.2 | Finetuned Language Models Are Zero-Shot Learners | |
| PaLM 540B (zero-shot) | 53.4 | PaLM: Scaling Language Modeling with Pathways | |
| FLAN 137B (zero-shot) | 78.4 | Finetuned Language Models Are Zero-Shot Learners | |
| PaLM 62B (zero-shot) | 50.4 | PaLM: Scaling Language Modeling with Pathways | |
| GPT-3 175B (zero-shot) | 57.6 | Language Models are Few-Shot Learners | |
| LLaMA 65B (zero-shot) | 60.2 | LLaMA: Open and Efficient Foundation Language Models | |
| LLaMA 33B (zero-shot) | 58.6 | LLaMA: Open and Efficient Foundation Language Models | |