Semantic Textual Similarity on STS16
Metric
Spearman correlation between each model's predicted sentence-pair similarity scores and the human-annotated gold scores of the SemEval-2016 Semantic Textual Similarity benchmark (STS16).
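As a minimal sketch of how this metric is computed (assuming the model's predicted similarities and the human gold annotations are available as plain Python lists; the values below are illustrative, not benchmark data):

```python
# Spearman rank correlation between predicted similarities and gold scores,
# the ranking metric used on this leaderboard. Illustrative values only.
from scipy.stats import spearmanr

gold_scores = [4.8, 1.2, 3.5, 0.4, 2.9]        # human annotations on a 0-5 scale
model_scores = [0.91, 0.18, 0.66, 0.05, 0.52]  # e.g. cosine similarities in [-1, 1]

rho, _ = spearmanr(model_scores, gold_scores)
print(f"Spearman correlation: {rho:.4f}")
```

Because the metric is rank-based, it is invariant to any monotonic rescaling of a model's scores; only the induced ordering of sentence pairs matters.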
Results
Performance results of various models on this benchmark:

| Model Name | Spearman Correlation | Paper Title |
| --- | --- | --- |
| AnglE-LLaMA-13B | 0.8700 | AnglE-optimized Text Embeddings |
| AnglE-LLaMA-7B-v2 | 0.8700 | AnglE-optimized Text Embeddings |
| AnglE-LLaMA-7B | 0.8691 | AnglE-optimized Text Embeddings |
| PromptEOL+CSE+LLaMA-30B | 0.8627 | Scaling Sentence Embeddings with Large Language Models |
| PromptEOL+CSE+OPT-2.7B | 0.8591 | Scaling Sentence Embeddings with Large Language Models |
| PromptEOL+CSE+OPT-13B | 0.8590 | Scaling Sentence Embeddings with Large Language Models |
| Trans-Encoder-RoBERTa-large-cross (unsup.) | 0.8503 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations |
| PromCSE-RoBERTa-large (0.355B) | 0.8496 | Improved Universal Sentence Embeddings with Prompt-based Contrastive Learning and Energy-based Learning |
| Trans-Encoder-BERT-large-bi (unsup.) | 0.8481 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations |
| SimCSE-RoBERTa-large | 0.8393 | SimCSE: Simple Contrastive Learning of Sentence Embeddings |
| Trans-Encoder-RoBERTa-base-cross (unsup.) | 0.8377 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations |
| Trans-Encoder-BERT-base-bi (unsup.) | 0.8305 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations |
| DiffCSE-RoBERTa-base | 0.8212 | DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings |
| DiffCSE-BERT-base | 0.8054 | DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings |
| Mirror-RoBERTa-base (unsup.) | 0.78 | Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders |
| BERT-large-flow (target) | 0.7763 | On the Sentence Embeddings from Pre-trained Language Models |
| Dino (STSb/x̄) | 0.7718 | Generating Datasets with Pretrained Language Models |
| SRoBERTa-NLI-large | 0.7682 | Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks |
| Mirror-BERT-base (unsup.) | 0.743 | Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders |
| IS-BERT-NLI | 0.7016 | An Unsupervised Sentence Embedding Method by Mutual Information Maximization |
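For context, leaderboard numbers like these are typically produced by encoding both sentences of each test pair, scoring the pair by cosine similarity, and correlating those scores with the gold annotations. A hedged sketch follows; the dataset id mteb/sts16-sts and the checkpoint sentence-transformers/all-MiniLM-L6-v2 are stand-in assumptions, not taken from this page, and any compatible sentence-embedding checkpoint can be substituted:

```python
# Sketch of a standard STS16 evaluation loop (assumed dataset id and
# checkpoint; substitute a leaderboard model's checkpoint to reproduce it).
from datasets import load_dataset
from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer, util

data = load_dataset("mteb/sts16-sts", split="test")  # assumed dataset id
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # stand-in

emb1 = model.encode(data["sentence1"], convert_to_tensor=True)
emb2 = model.encode(data["sentence2"], convert_to_tensor=True)

# Cosine similarity of each aligned pair; the diagonal pairs row i with row i.
predictions = util.cos_sim(emb1, emb2).diagonal().tolist()

rho, _ = spearmanr(predictions, data["score"])
print(f"Spearman correlation on STS16: {rho:.4f}")
```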