Cross-Lingual Natural Language Inference
Metrics
Accuracy
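As a minimal sketch, accuracy on this benchmark is simply the fraction of examples whose predicted NLI label matches the gold label. The function and the labels below are illustrative, not taken from any specific evaluation script:

```python
def accuracy(predictions, gold):
    """Fraction of predictions that match the gold labels."""
    assert len(predictions) == len(gold)
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

# Hypothetical NLI labels (entailment / neutral / contradiction):
preds = ["entailment", "neutral", "contradiction", "neutral"]
gold  = ["entailment", "neutral", "neutral", "neutral"]
print(accuracy(preds, gold))  # → 0.75
```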
Results
Performance results of various models on this benchmark
Model Name | Accuracy | Paper Title | Repository |
---|---|---|---|
BERT | 74.3% | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | |
X-CBOW | 60.7% | Supervised Learning of Universal Sentence Representations from Natural Language Inference Data | |
X-BiLSTM | 68.7% | Supervised Learning of Universal Sentence Representations from Natural Language Inference Data | |
XLM-R R4F | 85.2% | Better Fine-Tuning by Reducing Representational Collapse | |