Multi-Task Language Understanding on BBH-NLP
Metrics
Average (%)
Results
Performance results of various models on this benchmark
Comparison table
| Model name | Average (%) |
|---|---|
| Model 1 | 86.3 |
| scaling-instruction-finetuned-language-models | 71.2 |
| orca-2-teaching-small-language-models-how-to | 45.93 |
| scaling-instruction-finetuned-language-models | 62.7 |
| scaling-instruction-finetuned-language-models | 70.0 |
| scaling-instruction-finetuned-language-models | 78.4 |
| scaling-instruction-finetuned-language-models | 78.2 |
| orca-2-teaching-small-language-models-how-to | 50.18 |
| evaluating-large-language-models-trained-on | 73.5 |
| Model 10 | 82.4 |
| Model 11 | 86.1 |
| scaling-instruction-finetuned-language-models | 72.4 |
| Model 13 | 85.9 |
| Model 14 | 84.07 |
| Model 15 | 81.0 |