HyperAI

Semantic Textual Similarity on the STS Benchmark

Metrics

Spearman Correlation
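
The benchmark scores each model by the Spearman rank correlation between its predicted sentence-similarity scores and the human-annotated gold scores: only the ranking of pairs matters, not the absolute values. A minimal pure-Python sketch of the metric (the helper names `_average_ranks` and `spearman` are illustrative, not from any particular library):

```python
def _average_ranks(values):
    """Return 1-based ranks, averaging the rank over tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        # find the run of values tied with values[order[i]]
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks


def spearman(x, y):
    """Spearman correlation = Pearson correlation of the rank vectors."""
    rx, ry = _average_ranks(x), _average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Because only ranks enter the computation, any monotonic transformation of a model's scores leaves its Spearman correlation unchanged: `spearman([1, 2, 3], [1, 4, 9])` is `1.0`. In practice `scipy.stats.spearmanr` computes the same quantity.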

Results

Performance of various models on this benchmark, ranked by Spearman correlation.

Comparison Table

| Model Name | Spearman Correlation |
| --- | --- |
| exploring-the-limits-of-transfer-learning | 0.886 |
| sentence-bert-sentence-embeddings-using | 0.8615 |
| universal-sentence-encoder | - |
| q8bert-quantized-8bit-bert | - |
| distilbert-a-distilled-version-of-bert | - |
| roberta-a-robustly-optimized-bert-pretraining | - |
| sentence-bert-sentence-embeddings-using | 0.7703 |
| structbert-incorporating-language-structures | 0.924 |
| generating-datasets-with-pretrained-language | 0.7782 |
| sentence-bert-sentence-embeddings-using | 0.7777 |
| ernie-20-a-continual-pre-training-framework | - |
| albert-a-lite-bert-for-self-supervised | - |
| smart-robust-and-efficient-fine-tuning-for | - |
| angle-optimized-text-embeddings | 0.8969 |
| trans-encoder-unsupervised-sentence-pair | 0.867 |
| ernie-enhanced-language-representation-with | - |
| sentence-bert-sentence-embeddings-using | 0.79 |
| angle-optimized-text-embeddings | 0.8897 |
| trans-encoder-unsupervised-sentence-pair | 0.839 |
| fast-effective-and-self-supervised | 0.764 |
| exploring-the-limits-of-transfer-learning | 0.85 |
| deep-continuous-prompt-for-contrastive-1 | 0.8787 |
| exploring-the-limits-of-transfer-learning | 0.921 |
| scaling-sentence-embeddings-with-large | 0.8914 |
| an-unsupervised-sentence-embedding-method | 0.6921 |
| generating-datasets-with-pretrained-language | 0.7651 |
| Model 27 | 0.7981 |
| big-bird-transformers-for-longer-sequences | 0.878 |
| 1909.10351 | - |
| exploring-the-limits-of-transfer-learning | - |
| llm-int8-8-bit-matrix-multiplication-for | - |
| adversarial-self-attention-for-language | 0.892 |
| def2vec-extensible-word-embeddings-from | 0.6372 |
| informer-transformer-likes-informed-attention | 0.8988 |
| fast-effective-and-self-supervised | 0.787 |
| mnet-sim-a-multi-layered-semantic-similarity-1 | 0.931 |
| spanbert-improving-pre-training-by | - |
| trans-encoder-unsupervised-sentence-pair | 0.8616 |
| exploring-the-limits-of-transfer-learning | - |
| exploring-the-limits-of-transfer-learning | 0.898 |
| adversarial-self-attention-for-language | 0.865 |
| sentence-bert-sentence-embeddings-using | 0.8479 |
| fnet-mixing-tokens-with-fourier-transforms | 0.84 |
| rematch-robust-and-efficient-matching-of | 0.6652 |
| q-bert-hessian-based-ultra-low-precision | - |
| scaling-sentence-embeddings-with-large | 0.8856 |
| bert-pre-training-of-deep-bidirectional | 0.865 |
| smart-robust-and-efficient-fine-tuning-for | 0.925 |
| smart-robust-and-efficient-fine-tuning-for | - |
| clear-contrastive-learning-for-sentence | - |
| a-statistical-framework-for-low-bitwidth | - |
| Model 52 | - |
| xlnet-generalized-autoregressive-pretraining | - |
| angle-optimized-text-embeddings | 0.8897 |
| simcse-simple-contrastive-learning-of | 0.867 |
| trans-encoder-unsupervised-sentence-pair | 0.8655 |
| charformer-fast-character-transformers-via | - |
| sentence-bert-sentence-embeddings-using | 0.8445 |
| scaling-sentence-embeddings-with-large | 0.8833 |
| Model 60 | - |
| on-the-sentence-embeddings-from-pre-trained | 0.7226 |
| trans-encoder-unsupervised-sentence-pair | 0.8465 |
| ernie-20-a-continual-pre-training-framework | - |
| how-to-train-bert-with-an-academic-budget | - |
| deberta-decoding-enhanced-bert-with | - |
| entailment-as-few-shot-learner | - |