HyperAI
Semantic Textual Similarity on the STS Benchmark
Metrics
Spearman Correlation
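For reference, the leaderboard metric is the Spearman rank correlation between a model's predicted similarity scores and the human-annotated gold scores of the STS Benchmark sentence pairs. A minimal sketch of computing it with SciPy follows; the two score lists are invented placeholders, not benchmark data.

```python
# Minimal sketch: Spearman correlation between predicted and gold similarity scores.
# The two score lists below are placeholder values, not benchmark data.
from scipy.stats import spearmanr

gold_scores = [0.0, 1.2, 2.5, 3.8, 5.0]       # human similarity ratings (0-5)
predicted   = [0.10, 0.35, 0.40, 0.75, 0.90]  # model similarity scores

corr, p_value = spearmanr(predicted, gold_scores)
print(f"Spearman correlation: {corr:.4f}")    # 1.0 here, since the rankings match exactly
```

Because the metric is rank-based, only the ordering of the predicted scores matters, not their absolute scale.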
Results
Performance results of various models on this benchmark.
| Model Name | Spearman Correlation | Paper Title | Repository |
|---|---|---|---|
| T5-Large 770M | 0.886 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | - |
| SRoBERTa-NLI-STSb-large | 0.8615 | Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks | - |
| USE_T | - | Universal Sentence Encoder | - |
| Q8BERT (Zafrir et al., 2019) | - | Q8BERT: Quantized 8Bit BERT | - |
| DistilBERT 66M | - | DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter | - |
| RoBERTa | - | RoBERTa: A Robustly Optimized BERT Pretraining Approach | - |
| SBERT-NLI-base | 0.7703 | Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks | - |
| StructBERTRoBERTa ensemble | 0.924 | StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding | - |
| Dino (STSb) | 0.7782 | Generating Datasets with Pretrained Language Models | - |
| SRoBERTa-NLI-base | 0.7777 | Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks | - |
| ERNIE 2.0 Large | - | ERNIE 2.0: A Continual Pre-training Framework for Language Understanding | - |
| ALBERT | - | ALBERT: A Lite BERT for Self-supervised Learning of Language Representations | - |
| SMART-BERT | - | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | - |
| AnglE-LLaMA-13B | 0.8969 | AnglE-optimized Text Embeddings | - |
| Trans-Encoder-RoBERTa-large-cross (unsup.) | 0.867 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | - |
| ERNIE | - | ERNIE: Enhanced Language Representation with Informative Entities | - |
| SBERT-NLI-large | 0.79 | Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks | - |
| AnglE-LLaMA-7B | 0.8897 | AnglE-optimized Text Embeddings | - |
| Trans-Encoder-BERT-base-bi (unsup.) | 0.839 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | - |
| Mirror-BERT-base (unsup.) | 0.764 | Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders | - |
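As a rough illustration of how sentence-embedding entries such as SBERT-NLI-base are scored on this benchmark, the sketch below encodes the STS-B sentence pairs with a sentence-transformers checkpoint, scores each pair with cosine similarity, and reports the Spearman correlation against the gold labels. The checkpoint name, dataset path, and split are assumptions for illustration only, not the exact evaluation pipelines behind the numbers in the table above.

```python
# Hedged sketch: evaluating a sentence-embedding model on the STS Benchmark.
# Assumes the Hugging Face "glue"/"stsb" dataset and a public
# sentence-transformers checkpoint; the leaderboard entries above may have
# been evaluated with different splits or protocols.
import numpy as np
from datasets import load_dataset
from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer

def evaluate_stsb(model_name="sentence-transformers/bert-base-nli-mean-tokens"):
    # Validation split is used because the official test labels are hidden.
    data = load_dataset("glue", "stsb", split="validation")
    model = SentenceTransformer(model_name)

    emb1 = model.encode(data["sentence1"], convert_to_numpy=True, normalize_embeddings=True)
    emb2 = model.encode(data["sentence2"], convert_to_numpy=True, normalize_embeddings=True)

    # Cosine similarity of L2-normalized embeddings is the row-wise dot product.
    cosine = np.sum(emb1 * emb2, axis=1)

    # Spearman rank correlation against the gold 0-5 similarity labels.
    corr, _ = spearmanr(cosine, data["label"])
    return corr

if __name__ == "__main__":
    print(f"Spearman correlation: {evaluate_stsb():.4f}")
```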