HyperAI
Semantic Textual Similarity on STS15
Metric: Spearman Correlation

Results

Performance results of various models on this benchmark
| Model | Spearman Correlation | Paper Title |
| --- | --- | --- |
| DiffCSE-BERT-base | 0.8390 | DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings |
| PromptEOL+CSE+OPT-2.7B | 0.8951 | Scaling Sentence Embeddings with Large Language Models |
| BERTlarge-flow (target) | 0.7492 | On the Sentence Embeddings from Pre-trained Language Models |
| Trans-Encoder-RoBERTa-base-cross (unsup.) | 0.8577 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations |
| PromCSE-RoBERTa-large (0.355B) | 0.8808 | Improved Universal Sentence Embeddings with Prompt-based Contrastive Learning and Energy-based Learning |
| Trans-Encoder-BERT-base-bi (unsup.) | 0.8508 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations |
| SimCSE-RoBERTalarge | 0.8666 | SimCSE: Simple Contrastive Learning of Sentence Embeddings |
| Trans-Encoder-BERT-large-bi (unsup.) | 0.8816 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations |
| AnglE-LLaMA-13B | 0.8956 | AnglE-optimized Text Embeddings |
| DiffCSE-RoBERTa-base | 0.8281 | DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings |
| Trans-Encoder-BERT-base-cross (unsup.) | 0.8444 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations |
| SRoBERTa-NLI-large | 0.8185 | Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks |
| IS-BERT-NLI | 0.7523 | An Unsupervised Sentence Embedding Method by Mutual Information Maximization |
| AnglE-LLaMA-7B-v2 | 0.8943 | AnglE-optimized Text Embeddings |
| Dino (STSb/) | 0.8049 | Generating Datasets with Pretrained Language Models |
| Mirror-RoBERTa-base (unsup.) | 0.7980 | Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders |
| Trans-Encoder-RoBERTa-large-cross (unsup.) | 0.8863 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations |
| Mirror-BERT-base (unsup.) | 0.8140 | Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders |
| PromptEOL+CSE+OPT-13B | 0.8952 | Scaling Sentence Embeddings with Large Language Models |
| PromptEOL+CSE+LLaMA-30B | 0.9004 | Scaling Sentence Embeddings with Large Language Models |
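The Spearman correlation reported for each model measures how well the ranking of the model's predicted sentence-pair similarities matches the ranking of the human-annotated gold labels. A minimal sketch of the computation, using the tie-free rank-difference formula and made-up example scores (not taken from any model on this page):

```python
def rank(values):
    # Rank positions (1 = smallest value); assumes no ties for simplicity
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def spearman(xs, ys):
    # Spearman's rho via the rank-difference formula (tie-free case):
    #   rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))
    # where d_i is the difference between the two rank positions of item i.
    n = len(xs)
    d2 = sum((rx - ry) ** 2 for rx, ry in zip(rank(xs), rank(ys)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical model cosine similarities vs. gold STS labels (0-5 scale)
model_scores = [0.92, 0.31, 0.75, 0.10, 0.66]
gold_labels = [4.8, 1.2, 3.1, 0.4, 3.9]
print(spearman(model_scores, gold_labels))  # -> 0.9
```

Because the metric depends only on ranks, a model's similarity scores need not match the gold labels' scale, only their ordering, which is why raw cosine similarities can be evaluated directly against 0-5 annotations.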