HyperAI
Semantic Textual Similarity On Sts Benchmark
Metric: Spearman Correlation
Results: performance of the various models on this benchmark.
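The leaderboard ranks models by the Spearman correlation between each model's predicted sentence-pair similarity scores and the human gold ratings of the STS Benchmark. A minimal pure-Python sketch of the metric (the `rank_avg` and `spearman` helpers are illustrative names, not part of any library referenced on this page):

```python
from statistics import mean

def rank_avg(values):
    """Assign 1-based ranks, averaging ranks over tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        # extend j over the run of values tied with position i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the tied 1-based ranks
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(pred, gold):
    """Spearman correlation = Pearson correlation of the rank vectors."""
    rx, ry = rank_avg(pred), rank_avg(gold)
    mx, my = mean(rx), mean(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Perfectly monotonic predictions score 1.0 even if nonlinear:
# spearman([1, 2, 3, 4], [1, 4, 9, 16]) -> 1.0
```

Because the metric depends only on ranks, a model is rewarded for ordering pairs correctly rather than for matching the gold scores' absolute scale.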
| Model Name | Spearman Correlation | Paper Title | Repository |
|---|---|---|---|
| T5-Large 770M | 0.886 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | - |
| SRoBERTa-NLI-STSb-large | 0.8615 | Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks | - |
| USE_T | - | Universal Sentence Encoder | - |
| Q8BERT (Zafrir et al., 2019) | - | Q8BERT: Quantized 8Bit BERT | - |
| DistilBERT 66M | - | DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter | - |
| RoBERTa | - | RoBERTa: A Robustly Optimized BERT Pretraining Approach | - |
| SBERT-NLI-base | 0.7703 | Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks | - |
| StructBERT RoBERTa ensemble | 0.924 | StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding | - |
| Dino (STSb/…) | 0.7782 | Generating Datasets with Pretrained Language Models | - |
| SRoBERTa-NLI-base | 0.7777 | Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks | - |
| ERNIE 2.0 Large | - | ERNIE 2.0: A Continual Pre-training Framework for Language Understanding | - |
| ALBERT | - | ALBERT: A Lite BERT for Self-supervised Learning of Language Representations | - |
| SMART-BERT | - | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | - |
| AnglE-LLaMA-13B | 0.8969 | AnglE-optimized Text Embeddings | - |
| Trans-Encoder-RoBERTa-large-cross (unsup.) | 0.867 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | - |
| ERNIE | - | ERNIE: Enhanced Language Representation with Informative Entities | - |
| SBERT-NLI-large | 0.79 | Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks | - |
| AnglE-LLaMA-7B | 0.8897 | AnglE-optimized Text Embeddings | - |
| Trans-Encoder-BERT-base-bi (unsup.) | 0.839 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | - |
| Mirror-BERT-base (unsup.) | 0.764 | Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders | - |