HyperAI

Semantic Textual Similarity on the STS Benchmark

Metrics

Spearman Correlation
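To make the metric concrete, here is a minimal sketch of the Spearman rank correlation: rank both lists of scores (using average ranks for ties), then take the Pearson correlation of the rank vectors. The function names and the example scores below are illustrative, not taken from any model in the table.

```python
def _ranks(values):
    """Average 1-based ranks, with ties sharing the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        # Find the run of tied values starting at position i.
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman correlation: Pearson correlation of the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# A perfectly monotone relationship scores 1.0 even when it is not linear,
# which is why Spearman (not Pearson) is the standard STS-B test metric.
print(spearman([1, 2, 3, 4], [1, 4, 9, 16]))  # -> 1.0
```

In the benchmark setting, `x` would be a model's predicted similarity scores and `y` the human-annotated gold scores for the same sentence pairs.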

Results

Performance results of various models on this benchmark (a dash indicates no Spearman correlation reported on this leaderboard).

| Model Name | Spearman Correlation | Paper Title | Repository |
| --- | --- | --- | --- |
| T5-Large 770M | 0.886 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | |
| SRoBERTa-NLI-STSb-large | 0.8615 | Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks | |
| USE_T | - | Universal Sentence Encoder | |
| Q8BERT (Zafrir et al., 2019) | - | Q8BERT: Quantized 8Bit BERT | |
| DistilBERT 66M | - | DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter | |
| RoBERTa | - | RoBERTa: A Robustly Optimized BERT Pretraining Approach | |
| SBERT-NLI-base | 0.7703 | Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks | |
| StructBERT + RoBERTa ensemble | 0.924 | StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding | |
| Dino (STSb/̄) | 0.7782 | Generating Datasets with Pretrained Language Models | |
| SRoBERTa-NLI-base | 0.7777 | Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks | |
| ERNIE 2.0 Large | - | ERNIE 2.0: A Continual Pre-training Framework for Language Understanding | |
| ALBERT | - | ALBERT: A Lite BERT for Self-supervised Learning of Language Representations | |
| SMART-BERT | - | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | |
| AnglE-LLaMA-13B | 0.8969 | AnglE-optimized Text Embeddings | |
| Trans-Encoder-RoBERTa-large-cross (unsup.) | 0.867 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | |
| ERNIE | - | ERNIE: Enhanced Language Representation with Informative Entities | |
| SBERT-NLI-large | 0.79 | Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks | |
| AnglE-LLaMA-7B | 0.8897 | AnglE-optimized Text Embeddings | |
| Trans-Encoder-BERT-base-bi (unsup.) | 0.839 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | |
| Mirror-BERT-base (unsup.) | 0.764 | Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders | |