Question Answering on DROP
Metrics
Accuracy
Results
Performance results of various models on this benchmark
Comparison table
| Model name | Accuracy |
| --- | --- |
| large-language-models-can-self-improve | 78.2 |
| large-language-models-can-self-improve | 83 |
| large-language-models-can-self-improve | 71.7 |
| large-language-models-can-self-improve | 60 |
| large-language-models-can-self-improve | 70.6 |
| large-language-models-can-self-improve | 76.2 |