Cross-Lingual Natural Language Inference On 3
Evaluation Metric
Accuracy
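Accuracy is the fraction of examples whose predicted label matches the gold label. As a minimal sketch, assuming the standard three-way NLI label set (the label names and sample predictions below are illustrative, not from the benchmark release):

```python
# Minimal sketch: accuracy for 3-way NLI classification.
# Label names and example data are illustrative assumptions.

LABELS = ["entailment", "neutral", "contradiction"]

def accuracy(predictions: list[str], gold: list[str]) -> float:
    """Fraction of examples where the predicted label equals the gold label."""
    assert len(predictions) == len(gold), "prediction/gold length mismatch"
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

if __name__ == "__main__":
    preds = ["entailment", "neutral", "contradiction", "neutral"]
    gold = ["entailment", "neutral", "neutral", "neutral"]
    print(f"Accuracy: {accuracy(preds, gold):.1%}")  # Accuracy: 75.0%
```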
Evaluation Results
The performance of each model on this benchmark:
Model Name | Accuracy | Paper Title | Repository |
---|---|---|---|
XLM-R R4F | 84.2% | Better Fine-Tuning by Reducing Representational Collapse | |
BERT | 70.5% | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | |
X-BiLSTM | 67.7% | Supervised Learning of Universal Sentence Representations from Natural Language Inference Data | |
X-CBOW | 61.0% | Supervised Learning of Universal Sentence Representations from Natural Language Inference Data | |