HyperAI

Natural Language Inference On Wnli

Metrics

Accuracy

Results

Performance results of various models on this benchmark

Comparison Table

| Model name | Accuracy |
| --- | --- |
| albert-a-lite-bert-for-self-supervised | 91.8 |
| a-hybrid-neural-network-model-for-commonsense | 89 |
| structbert-incorporating-language-structures | 89.7 |
| squeezebert-what-can-computer-vision-teach | 65.1 |
| xlnet-generalized-autoregressive-pretraining | 92.5 |
| exploring-the-limits-of-transfer-learning | 78.8 |
| a-surprisingly-robust-trick-for-winograd | 71.9 |
| roberta-a-robustly-optimized-bert-pretraining | 89 |
| a-hybrid-neural-network-model-for-commonsense | 83.6 |
| distilbert-a-distilled-version-of-bert | 44.4 |
| finetuned-language-models-are-zero-shot | 70.4 |
| ernie-20-a-continual-pre-training-framework | 67.8 |
| exploring-the-limits-of-transfer-learning | 85.6 |
| a-surprisingly-robust-trick-for-winograd | 74.7 |
| finetuned-language-models-are-zero-shot | 74.6 |
| exploring-the-limits-of-transfer-learning | 89.7 |
| deberta-decoding-enhanced-bert-with | 94.5 |
| rwkv-reinventing-rnns-for-the-transformer-era | 49.3 |
| Model 19 | 95.9 |
| exploring-the-limits-of-transfer-learning | 69.2 |
| bert-pre-training-of-deep-bidirectional | 65.1 |
| a-surprisingly-robust-trick-for-winograd | 70.5 |
| exploring-the-limits-of-transfer-learning | 93.2 |