Natural Language Inference on QNLI
Metric: Accuracy
Results
Reported accuracy of various models on this benchmark.
Comparison Table
Model Name | Accuracy |
---|---|
fnet-mixing-tokens-with-fourier-transforms | 85% |
informer-transformer-likes-informed-attention | 91.89% |
nystromformer-a-nystrom-based-algorithm-for | 88.7% |
q-bert-hessian-based-ultra-low-precision | 93.0% |
debertav3-improving-deberta-using-electra | 96% |
charformer-fast-character-transformers-via | 91.0% |
Model 7 | 95.4% |
smart-robust-and-efficient-fine-tuning-for | - |
data2vec-a-general-framework-for-self-1 | 91.1% |
spanbert-improving-pre-training-by | 94.3% |
how-to-train-bert-with-an-academic-budget | 90.6% |
adversarial-self-attention-for-language | 93.6% |
a-statistical-framework-for-low-bitwidth | 94.5% |
trans-blstm-transformer-with-bidirectional | 94.08% |
exploring-the-limits-of-transfer-learning | 90.3% |
adversarial-self-attention-for-language | 91.4% |
smart-robust-and-efficient-fine-tuning-for | 99.2% |
exploring-the-limits-of-transfer-learning | 93.7% |
exploring-the-limits-of-transfer-learning | 96.7% |
ernie-enhanced-language-representation-with | 91.3% |
ernie-20-a-continual-pre-training-framework | 94.6% |
bert-pre-training-of-deep-bidirectional | 92.7% |
xlnet-generalized-autoregressive-pretraining | 94.9% |
ernie-20-a-continual-pre-training-framework | 92.9% |
roberta-a-robustly-optimized-bert-pretraining | 98.9% |
190910351 | 87.7% |
albert-a-lite-bert-for-self-supervised | 99.2% |
entailment-as-few-shot-learner | 94.5% |
deberta-decoding-enhanced-bert-with | 95.3% |
190910351 | 90.4% |
q8bert-quantized-8bit-bert | 93.0% |
lm-cppf-paraphrasing-guided-data-augmentation | 70.2% |
big-bird-transformers-for-longer-sequences | 92.2% |
structbert-incorporating-language-structures | 99.2% |
exploring-the-limits-of-transfer-learning | 94.8% |
sensebert-driving-some-sense-into-bert | 90.6% |
llm-int8-8-bit-matrix-multiplication-for | 94.7% |
distilbert-a-distilled-version-of-bert | 90.2% |
squeezebert-what-can-computer-vision-teach | 90.1% |
exploring-the-limits-of-transfer-learning | 96.3% |
clear-contrastive-learning-for-sentence | 93.4% |
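All figures above are accuracy scores on QNLI (Question-answering Natural Language Inference) from the GLUE benchmark. The sketch below is a minimal, illustrative way to compute such an accuracy number; it assumes the Hugging Face `datasets` and `transformers` libraries and an already QNLI-fine-tuned checkpoint, with `CHECKPOINT` as a placeholder rather than one of the models in the table. It evaluates on the public validation split, whereas published GLUE results usually come from the held-out test server, so the resulting figure is comparable but not identical.

```python
# Minimal sketch: measure QNLI accuracy with a fine-tuned classifier.
# Assumes Hugging Face `datasets` and `transformers`; CHECKPOINT is a placeholder.
import torch
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

CHECKPOINT = "path/or/hub-id-of-a-qnli-finetuned-model"  # placeholder, not from the table

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT)
model.eval()

# GLUE QNLI pairs a question with a sentence; label 0 = entailment, 1 = not_entailment.
dataset = load_dataset("glue", "qnli", split="validation")

correct = 0
for example in dataset:
    inputs = tokenizer(
        example["question"],
        example["sentence"],
        truncation=True,
        return_tensors="pt",
    )
    with torch.no_grad():
        logits = model(**inputs).logits
    prediction = int(logits.argmax(dim=-1))
    correct += int(prediction == example["label"])

accuracy = correct / len(dataset)
print(f"QNLI validation accuracy: {accuracy:.2%}")
```

The per-example loop is kept for clarity; batching and padding the tokenized inputs would make the evaluation substantially faster without changing the computed accuracy.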