Semantic Textual Similarity on STS13
Metric: Spearman correlation
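For context, Spearman correlation is rank-based: it measures how well the ordering of a model's predicted similarity scores matches the ordering of the human-annotated gold scores, ignoring absolute scale. A minimal sketch with SciPy (the scores below are made-up illustration, not benchmark data):

```python
from scipy.stats import spearmanr

# Hypothetical predicted cosine similarities and gold annotations
# (STS gold labels range from 0 to 5) for five sentence pairs.
predicted = [0.92, 0.31, 0.77, 0.15, 0.58]
gold = [4.6, 1.2, 3.9, 0.5, 2.8]

rho, p_value = spearmanr(predicted, gold)
print(f"Spearman's rho = {rho:.4f}")  # 1.0 here: the two rankings agree exactly
```

Because only ranks matter, a model can score well even when its raw similarity values are uncalibrated.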
Results
Performance of various models on this benchmark, sorted by Spearman correlation (20 of the leaderboard's 22 entries were captured on this page).
| Model Name | Spearman Correlation | Paper Title | Repository |
|---|---|---|---|
| AnglE-LLaMA-7B | 0.9058 | AnglE-optimized Text Embeddings | |
| AnglE-LLaMA-7B-v2 | 0.9056 | AnglE-optimized Text Embeddings | |
| PromptEOL+CSE+LLaMA-30B | 0.9025 | Scaling Sentence Embeddings with Large Language Models | |
| PromptEOL+CSE+OPT-13B | 0.9024 | Scaling Sentence Embeddings with Large Language Models | |
| PromCSE-RoBERTa-large (0.355B) | 0.8897 | Improved Universal Sentence Embeddings with Prompt-based Contrastive Learning and Energy-based Learning | |
| Trans-Encoder-BERT-large-bi (unsup.) | 0.8851 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | |
| Trans-Encoder-BERT-large-cross (unsup.) | 0.8831 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | |
| Trans-Encoder-RoBERTa-large-cross (unsup.) | 0.8831 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | |
| SimCSE-RoBERTa-large | 0.8727 | SimCSE: Simple Contrastive Learning of Sentence Embeddings | |
| Trans-Encoder-BERT-base-cross (unsup.) | 0.8559 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | |
| Trans-Encoder-BERT-base-bi (unsup.) | 0.8510 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | |
| DiffCSE-BERT-base | 0.8443 | DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings | |
| DiffCSE-RoBERTa-base | 0.8343 | DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings | |
| SimCSE-BERT-base | 0.8241 | SimCSE: Simple Contrastive Learning of Sentence Embeddings | |
| Mirror-RoBERTa-base (unsup.) | 0.8190 | Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders | |
| SimCSE-RoBERTa-base | 0.8136 | SimCSE: Simple Contrastive Learning of Sentence Embeddings | |
| Dino (STSb/x̄) | 0.8126 | Generating Datasets with Pretrained Language Models | |
| Mirror-BERT-base (unsup.) | 0.7960 | Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders | |
| SBERT-NLI-large | 0.7846 | Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks | |
| BERTlarge-flow (target) | 0.7339 | On the Sentence Embeddings from Pre-trained Language Models | - |
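Putting the pieces together, the sketch below shows the standard STS evaluation loop used by sentence-embedding models of the kind listed above: embed both sentences of each pair, take the cosine similarity, and correlate against the gold scores. The Hugging Face checkpoint name and the toy pairs/gold labels are assumptions for illustration; the official SentEval STS13 data and per-checkpoint pooling details may differ.

```python
# Minimal STS-style evaluation sketch, assuming the public SimCSE checkpoint
# "princeton-nlp/unsup-simcse-bert-base-uncased" on the Hugging Face Hub.
# The sentence pairs and gold scores here are toy examples, not STS13 data.
import torch
from scipy.stats import spearmanr
from transformers import AutoModel, AutoTokenizer

MODEL = "princeton-nlp/unsup-simcse-bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL).eval()

def embed(sentences):
    """Return one vector per sentence; SimCSE uses the [CLS] representation."""
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**batch)
    return out.last_hidden_state[:, 0]  # (n_sentences, hidden_size)

pairs = [
    ("A man is playing a guitar.", "A person plays the guitar."),
    ("A man is playing a guitar.", "A woman is slicing vegetables."),
    ("Two dogs run across a field.", "Dogs are running outside."),
]
gold = [4.8, 0.4, 4.2]  # hypothetical 0-5 human similarity labels

left = embed([a for a, _ in pairs])
right = embed([b for _, b in pairs])
pred = torch.nn.functional.cosine_similarity(left, right)  # along dim=1
rho, _ = spearmanr(pred.tolist(), gold)
print(f"Spearman correlation: {rho:.4f}")
```

Cross-encoder entries in the table (e.g. the Trans-Encoder "-cross" variants) score a pair jointly in one forward pass instead of comparing two independent embeddings, but the final Spearman computation is the same.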