Natural Language Understanding on GLUE
Metrics
Average
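The leaderboard reports a single "Average" column. As a point of reference (not stated in the source), the GLUE overall score is conventionally the unweighted mean of the per-task scores, with tasks that report two metrics (e.g. MRPC accuracy/F1) averaged internally first. The sketch below illustrates that computation with made-up placeholder scores, not numbers from the table.

```python
from statistics import mean

# Placeholder per-task scores (percentages) for a single hypothetical model.
# Tasks reporting two metrics are assumed to already be collapsed to one number.
task_scores = {
    "CoLA": 60.0,
    "SST-2": 93.0,
    "MRPC": 88.0,
    "STS-B": 87.0,
    "QQP": 80.0,
    "MNLI": 86.0,
    "QNLI": 92.0,
    "RTE": 70.0,
    "WNLI": 65.0,
}

glue_average = mean(task_scores.values())  # unweighted macro-average over tasks
print(f"GLUE Average: {glue_average:.1f}")
```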
Results
Performance results of various models on this benchmark
Model Name | Average | Paper Title | Repository |
---|---|---|---|
MT-DNN-SMART | 89.9 | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | |
BERT-LARGE | 82.1 | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | |