Cross-Lingual Natural Language Inference On 3
Evaluation Metric
Accuracy
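Accuracy here is the fraction of test examples whose predicted NLI label matches the gold label. A minimal sketch, assuming the standard three-way label set (entailment / neutral / contradiction); the label strings and example data below are illustrative, not from the benchmark itself:

```python
def accuracy(gold, pred):
    """Fraction of examples where the predicted label equals the gold label."""
    if len(gold) != len(pred):
        raise ValueError("gold and pred must have the same length")
    correct = sum(g == p for g, p in zip(gold, pred))
    return correct / len(gold)

# Hypothetical labels for illustration only.
gold = ["entailment", "neutral", "contradiction", "entailment"]
pred = ["entailment", "neutral", "entailment", "entailment"]
print(accuracy(gold, pred))  # → 0.75
```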
Evaluation Results
Performance of each model on this benchmark:
| Model | Accuracy | Paper Title | Repository |
|---|---|---|---|
| XLM-R R4F | 84.2% | Better Fine-Tuning by Reducing Representational Collapse | - |
| BERT | 70.5% | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | - |
| X-BiLSTM | 67.7% | Supervised Learning of Universal Sentence Representations from Natural Language Inference Data | - |
| X-CBOW | 61.0% | Supervised Learning of Universal Sentence Representations from Natural Language Inference Data | - |