Cross-Lingual Natural Language Inference On 1
Evaluation metric
Accuracy
Evaluation results
Performance of each model on this benchmark
| Model | Accuracy | Paper Title | Repository |
|---|---|---|---|
| XLM-R R4F | 85.2% | Better Fine-Tuning by Reducing Representational Collapse | |
| BERT | 74.3% | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | |
| X-BiLSTM | 68.7% | Supervised Learning of Universal Sentence Representations from Natural Language Inference Data | |
| X-CBOW | 60.7% | Supervised Learning of Universal Sentence Representations from Natural Language Inference Data | |
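The accuracy scores above are the fraction of test examples whose predicted NLI label matches the gold label. A minimal sketch of that computation (the `accuracy` helper and the three-way entailment/neutral/contradiction labels here are illustrative, not from any specific leaderboard codebase):

```python
from typing import Sequence

def accuracy(preds: Sequence[str], golds: Sequence[str]) -> float:
    """Fraction of predictions that exactly match the gold labels."""
    if len(preds) != len(golds):
        raise ValueError("prediction and gold lists must have equal length")
    correct = sum(p == g for p, g in zip(preds, golds))
    return correct / len(golds)

# Hypothetical predictions over four NLI examples
preds = ["entailment", "neutral", "contradiction", "neutral"]
golds = ["entailment", "neutral", "neutral", "neutral"]
print(f"{accuracy(preds, golds):.1%}")  # → 75.0%
```

On a real XNLI-style evaluation the same computation runs over the full test split, optionally reported per language before averaging.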