Semantic Textual Similarity on STS12
Evaluation Metric: Spearman Correlation
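The metric is the Spearman rank correlation between the model's similarity scores and human gold annotations for each sentence pair. A minimal sketch of how this is computed is below, using scipy; the `gold` and `pred` arrays are hypothetical placeholders, and a real STS12 run pools scores over all of the benchmark's subsets.

```python
# Minimal sketch of STS evaluation: rank-correlate model similarity
# scores with human gold scores. The data here is hypothetical; a real
# STS12 evaluation aggregates over all of the benchmark's subsets.
import numpy as np
from scipy.stats import spearmanr

# Hypothetical gold annotator scores (0-5 scale) and model similarity
# predictions (e.g., cosine similarities in [-1, 1]).
gold = np.array([4.8, 3.2, 0.5, 2.9, 1.1])
pred = np.array([0.92, 0.71, 0.13, 0.66, 0.25])

# Spearman correlation compares rankings only, so it is invariant to
# any monotonic rescaling of the model's scores.
rho, _ = spearmanr(gold, pred)
print(f"Spearman correlation: {rho:.4f}")
```

Because Spearman correlation depends only on rank order, the absolute scale of the model's similarity scores (cosine, dot product, or otherwise) does not affect the result.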
Evaluation Results
Performance of each model on this benchmark.
| Model Name | Spearman Correlation | Paper Title |
| --- | --- | --- |
| PromptEOL+CSE+OPT-13B | 0.8020 | Scaling Sentence Embeddings with Large Language Models |
| PromptEOL+CSE+LLaMA-30B | 0.7972 | Scaling Sentence Embeddings with Large Language Models |
| PromCSE-RoBERTa-large (0.355B) | 0.7956 | Improved Universal Sentence Embeddings with Prompt-based Contrastive Learning and Energy-based Learning |
| PromptEOL+CSE+OPT-2.7B | 0.7949 | Scaling Sentence Embeddings with Large Language Models |
| AnglE-LLaMA-7B | 0.7868 | AnglE-optimized Text Embeddings |
| AnglE-LLaMA-13B | 0.7868 | AnglE-optimized Text Embeddings |
| Trans-Encoder-RoBERTa-large-cross (unsup.) | 0.7828 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations |
| Trans-Encoder-BERT-large-bi (unsup.) | 0.7819 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations |
| SimCSE-RoBERTa-large | 0.7746 | SimCSE: Simple Contrastive Learning of Sentence Embeddings |
| Trans-Encoder-RoBERTa-base-cross (unsup.) | 0.7637 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations |
| Trans-Encoder-BERT-base-bi (unsup.) | 0.7509 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations |
| SRoBERTa-NLI-large | 0.7453 | Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks |
| DiffCSE-BERT-base | 0.7228 | DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings |
| Dino (STSb/x̄) | 0.7027 | Generating Datasets with Pretrained Language Models |
| SimCSE-RoBERTa-base | 0.7016 | SimCSE: Simple Contrastive Learning of Sentence Embeddings |
| DiffCSE-RoBERTa-base | 0.7005 | DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings |
| Mirror-BERT-base (unsup.) | 0.6740 | Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders |
| BERTlarge-flow (target) | 0.6520 | On the Sentence Embeddings from Pre-trained Language Models |
| Mirror-RoBERTa-base (unsup.) | 0.6480 | Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders |
| IS-BERT-NLI | 0.5677 | An Unsupervised Sentence Embedding Method by Mutual Information Maximization |
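Most of the models above follow the same inference recipe: encode each sentence into a fixed-size vector and score a pair by cosine similarity. The sketch below illustrates this with SimCSE, one of the entries in the table, using the publicly released unsupervised checkpoint `princeton-nlp/unsup-simcse-roberta-large` on the HuggingFace Hub; the example sentences are hypothetical, and the first-token pooling shown here is an assumption consistent with how unsupervised SimCSE embeddings are typically extracted.

```python
# Sketch: score a sentence pair with SimCSE-RoBERTa-large, one of the
# models listed in the table above. Assumes the public HuggingFace
# checkpoint "princeton-nlp/unsup-simcse-roberta-large".
import torch
from transformers import AutoModel, AutoTokenizer

name = "princeton-nlp/unsup-simcse-roberta-large"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name).eval()

# Hypothetical sentence pair to score.
sentences = ["A man is playing a guitar.", "Someone plays an instrument."]
batch = tokenizer(sentences, padding=True, return_tensors="pt")

with torch.no_grad():
    # Use the first-token ([CLS]) hidden state as the sentence embedding
    # (an assumption; pooling conventions vary across SimCSE variants).
    emb = model(**batch).last_hidden_state[:, 0]

score = torch.nn.functional.cosine_similarity(emb[0], emb[1], dim=0)
print(f"cosine similarity: {score.item():.4f}")
```

Repeating this over every STS12 sentence pair and feeding the resulting scores, together with the gold annotations, into the Spearman computation shown earlier reproduces the evaluation protocol behind the numbers in the table.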