HyperAI
Semantic Textual Similarity on STS Benchmark
Metric: Spearman Correlation

Performance results of various models on this benchmark:

| Model Name | Spearman Correlation | Paper Title |
| --- | --- | --- |
| MNet-Sim | 0.931 | MNet-Sim: A Multi-layered Semantic Similarity Network to Evaluate Sentence Similarity |
| MT-DNN-SMART | 0.925 | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization |
| StructBERT RoBERTa ensemble | 0.924 | StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding |
| T5-11B | 0.921 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| RealFormer | 0.8988 | RealFormer: Transformer Likes Residual Attention |
| T5-3B | 0.898 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| AnglE-LLaMA-13B | 0.8969 | AnglE-optimized Text Embeddings |
| ASA + RoBERTa | 0.892 | Adversarial Self-Attention for Language Understanding |
| PromptEOL+CSE+LLaMA-30B | 0.8914 | Scaling Sentence Embeddings with Large Language Models |
| AnglE-LLaMA-7B | 0.8897 | AnglE-optimized Text Embeddings |
| AnglE-LLaMA-7B-v2 | 0.8897 | AnglE-optimized Text Embeddings |
| T5-Large 770M | 0.886 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| PromptEOL+CSE+OPT-13B | 0.8856 | Scaling Sentence Embeddings with Large Language Models |
| PromptEOL+CSE+OPT-2.7B | 0.8833 | Scaling Sentence Embeddings with Large Language Models |
| PromCSE-RoBERTa-large (0.355B) | 0.8787 | Improved Universal Sentence Embeddings with Prompt-based Contrastive Learning and Energy-based Learning |
| BigBird | 0.878 | Big Bird: Transformers for Longer Sequences |
| Trans-Encoder-RoBERTa-large-cross (unsup.) | 0.867 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations |
| SimCSE-RoBERTa-large | 0.867 | SimCSE: Simple Contrastive Learning of Sentence Embeddings |
| Trans-Encoder-RoBERTa-large-bi (unsup.) | 0.8655 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations |
| ASA + BERT-base | 0.865 | Adversarial Self-Attention for Language Understanding |
Showing the top 20 of 66 results.