Natural Language Inference on WNLI
Evaluation Metric
Accuracy
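Accuracy here is the fraction of WNLI sentence pairs for which the predicted entailment label matches the gold label, reported on a 0-100 scale. A minimal sketch of the computation (function and variable names are illustrative, not taken from any specific leaderboard implementation):

```python
from typing import Sequence

def accuracy(predictions: Sequence[int], labels: Sequence[int]) -> float:
    """Fraction of examples where the predicted label matches the gold label."""
    if len(predictions) != len(labels):
        raise ValueError("predictions and labels must have the same length")
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Example: 3 of 4 predictions correct -> 0.75, i.e. 75.0 on the table's scale.
print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]) * 100)
```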
Evaluation Results
Performance results of each model on this benchmark
Comparison Table
| Model Name | Accuracy |
|---|---|
| albert-a-lite-bert-for-self-supervised | 91.8 |
| a-hybrid-neural-network-model-for-commonsense | 89 |
| structbert-incorporating-language-structures | 89.7 |
| squeezebert-what-can-computer-vision-teach | 65.1 |
| xlnet-generalized-autoregressive-pretraining | 92.5 |
| exploring-the-limits-of-transfer-learning | 78.8 |
| a-surprisingly-robust-trick-for-winograd | 71.9 |
| roberta-a-robustly-optimized-bert-pretraining | 89 |
| a-hybrid-neural-network-model-for-commonsense | 83.6 |
| distilbert-a-distilled-version-of-bert | 44.4 |
| finetuned-language-models-are-zero-shot | 70.4 |
| ernie-20-a-continual-pre-training-framework | 67.8 |
| exploring-the-limits-of-transfer-learning | 85.6 |
| a-surprisingly-robust-trick-for-winograd | 74.7 |
| finetuned-language-models-are-zero-shot | 74.6 |
| exploring-the-limits-of-transfer-learning | 89.7 |
| deberta-decoding-enhanced-bert-with | 94.5 |
| rwkv-reinventing-rnns-for-the-transformer-era | 49.3 |
| Model 19 | 95.9 |
| exploring-the-limits-of-transfer-learning | 69.2 |
| bert-pre-training-of-deep-bidirectional | 65.1 |
| a-surprisingly-robust-trick-for-winograd | 70.5 |
| exploring-the-limits-of-transfer-learning | 93.2 |
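WNLI is distributed as part of the GLUE benchmark, so scores like those above are typically computed over its official splits. A hedged usage sketch, assuming the Hugging Face `datasets` package is installed (the random predictions are purely illustrative stand-ins for real model output):

```python
import random

from datasets import load_dataset

# WNLI ships as part of GLUE; labels are 0/1 (not_entailment / entailment).
wnli = load_dataset("glue", "wnli", split="validation")

# Illustrative stand-in for real model predictions.
random.seed(0)
predictions = [random.randint(0, 1) for _ in wnli]

correct = sum(p == ex["label"] for p, ex in zip(predictions, wnli))
print(f"Accuracy: {100.0 * correct / len(wnli):.1f}")
```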