Multiple Choice Question Answering (MCQA) on 11
Metrics
Accuracy
Results
Performance results of various models on this benchmark
Model Name | Accuracy (%) | Paper Title | Repository |
---|---|---|---|
OPT (few-shot, k=5) | 30.6 | Galactica: A Large Language Model for Science | |
Med-PaLM 2 (ER) | 95.8 | Towards Expert-Level Medical Question Answering with Large Language Models | |
GAL 120B (zero-shot) | 68.8 | Galactica: A Large Language Model for Science | |
Med-PaLM 2 (5-shot) | 94.4 | Towards Expert-Level Medical Question Answering with Large Language Models | |
BLOOM (few-shot, k=5) | 28.5 | Galactica: A Large Language Model for Science | |
Gopher (few-shot, k=5) | 70.8 | Galactica: A Large Language Model for Science | |
Chinchilla (few-shot, k=5) | 79.9 | Galactica: A Large Language Model for Science | |
Med-PaLM 2 (CoT + SC) | 95.1 | Towards Expert-Level Medical Question Answering with Large Language Models | |
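The Accuracy scores above are the standard MCQA metric: the fraction of questions for which the model's chosen option matches the gold answer, expressed as a percentage. A minimal sketch of that computation follows; the helper name and the example predictions are hypothetical, not taken from any of the listed models.

```python
def mcqa_accuracy(predictions, gold):
    """Return MCQA accuracy as a percentage over paired answer lists."""
    if len(predictions) != len(gold):
        raise ValueError("predictions and gold must have the same length")
    # Count exact matches between the model's chosen option and the key.
    correct = sum(p == g for p, g in zip(predictions, gold))
    return 100.0 * correct / len(gold)

# Hypothetical model outputs and answer key for five questions.
preds = ["B", "C", "A", "D", "B"]
answers = ["B", "C", "D", "D", "A"]
print(f"{mcqa_accuracy(preds, answers):.1f}")  # 60.0
```

In few-shot settings such as "k=5", the model is shown five worked examples in the prompt before each question; the accuracy computation itself is unchanged.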