HyperAI

Semantic Textual Similarity on MRPC

Metrics

F1
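MRPC is a binary paraphrase-detection task, so the F1 reported below is the standard harmonic mean of precision and recall over the positive (paraphrase) class. A minimal sketch of the metric (function name and example labels are illustrative, not from the benchmark):

```python
def f1_score(y_true, y_pred, positive=1):
    """Binary F1: harmonic mean of precision and recall on the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        # No true positives: both precision and recall are zero.
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy example: 3 true positives, 1 false negative, 1 false positive.
labels = [1, 1, 1, 1, 0, 0]
preds  = [1, 1, 1, 0, 1, 0]
print(round(100 * f1_score(labels, preds), 1))  # → 75.0
```

The leaderboard values below are on this same 0–100 scale.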

Results

Performance results of various models on this benchmark

Comparison Table
A dash marks entries with no listed F1 value.

Model Name                                       F1
--------------------------------------------------------
big-bird-transformers-for-longer-sequences       91.5
exploring-the-limits-of-transfer-learning        92.5
mobilebert-a-compact-task-agnostic-bert-for-     -
intrinsic-dimensionality-explains-the-           -
charformer-fast-character-transformers-via       91.4
entailment-as-few-shot-learner                   91.0
nystromformer-a-nystrom-based-algorithm-for      88.1
smart-robust-and-efficient-fine-tuning-for-      -
llm-int8-8-bit-matrix-multiplication-for-        -
fnet-mixing-tokens-with-fourier-transforms-      -
squeezebert-what-can-computer-vision-teach-      -
xlnet-generalized-autoregressive-pretraining-    -
smart-robust-and-efficient-fine-tuning-for-      -
exploring-the-limits-of-transfer-learning        92.4
190910351-                                       -
190910351-                                       -
distilbert-a-distilled-version-of-bert-          -
ernie-20-a-continual-pre-training-framework-     -
exploring-the-limits-of-transfer-learning        89.7
q8bert-quantized-8bit-bert-                      -
a-statistical-framework-for-low-bitwidth-        -
albert-a-lite-bert-for-self-supervised-          -
exploring-the-limits-of-transfer-learning        91.9
subregweigh-effective-and-efficient-             -
informer-transformer-likes-informed-attention    90.91
intrinsic-dimensionality-explains-the-           -
smart-robust-and-efficient-fine-tuning-for-      -
discriminative-improvements-to-distributional    85.9
clear-contrastive-learning-for-sentence-         -
autobert-zero-evolving-bert-backbone-from-       -
ernie-20-a-continual-pre-training-framework-     -
learning-general-purpose-distributed-sentence    84.4
q-bert-hessian-based-ultra-low-precision-        -
supervised-learning-of-universal-sentence        83.1
spanbert-improving-pre-training-by-              -
bert-pre-training-of-deep-bidirectional          89.3
smart-robust-and-efficient-fine-tuning-for       91.7
structbert-incorporating-language-structures     93.6
roberta-a-robustly-optimized-bert-pretraining-   -
autobert-zero-evolving-bert-backbone-from-       -
learning-to-encode-position-for-transformer-     -
Model 42                                         -
how-to-train-bert-with-an-academic-budget-       -
exploring-the-limits-of-transfer-learning        90.7
ernie-enhanced-language-representation-with-     -