Multiple Choice Question Answering (MCQA) on 8
Metrics
Accuracy
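Accuracy here is the fraction of questions for which the model's selected choice matches the gold answer. A minimal sketch (function name and choice labels are illustrative, not from the benchmark's official evaluation code):

```python
def accuracy(predictions, references):
    """Fraction of questions where the predicted choice equals the gold choice."""
    if not references:
        raise ValueError("references must be non-empty")
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)

# Example: 3 of 4 multiple-choice answers correct -> 0.75 (reported as 75%)
print(accuracy(["A", "C", "B", "D"], ["A", "C", "D", "D"]))
```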
Results

Performance results of various models on this benchmark.
| Model Name | Accuracy (%) | Paper Title | Repository |
|---|---|---|---|
| BLOOM (few-shot, k=5) | 36 | Galactica: A Large Language Model for Science | |
| Med-PaLM 2 (ER) | 92 | Towards Expert-Level Medical Question Answering with Large Language Models | |
| Med-PaLM 2 (CoT + SC) | 89 | Towards Expert-Level Medical Question Answering with Large Language Models | |
| Med-PaLM 2 (5-shot) | 90 | Towards Expert-Level Medical Question Answering with Large Language Models | |
| Chinchilla (few-shot, k=5) | 69 | Galactica: A Large Language Model for Science | |
| GAL 30B (zero-shot) | 70 | Galactica: A Large Language Model for Science | |
| GAL 120B (zero-shot) | 68 | Galactica: A Large Language Model for Science | |
| OPT (few-shot, k=5) | 35 | Galactica: A Large Language Model for Science | |