Cross-Lingual Natural Language Inference On 3
Evaluation Metric
Accuracy
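Accuracy here is simply the fraction of premise–hypothesis pairs for which the predicted NLI label (entailment, neutral, or contradiction) matches the gold label. A minimal sketch of that computation (the label strings and example data below are illustrative, not taken from the benchmark):

```python
def accuracy(preds, golds):
    """Fraction of predictions that exactly match the gold labels."""
    assert len(preds) == len(golds) and golds, "need equal-length, non-empty lists"
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

# Hypothetical NLI predictions vs. gold labels
preds = ["entailment", "neutral", "contradiction", "neutral"]
golds = ["entailment", "neutral", "neutral", "neutral"]
print(accuracy(preds, golds))  # → 0.75
```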
Evaluation Results
Performance of each model on this benchmark
Model | Accuracy | Paper Title | Repository |
---|---|---|---|
XLM-R R4F | 84.2% | Better Fine-Tuning by Reducing Representational Collapse | - |
BERT | 70.5% | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | - |
X-BiLSTM | 67.7% | Supervised Learning of Universal Sentence Representations from Natural Language Inference Data | - |
X-CBOW | 61.0% | Supervised Learning of Universal Sentence Representations from Natural Language Inference Data | - |