Text Summarization on MTEB

Task: Text Summarization
Metric: Spearman Correlation

Results: performance of various models on this benchmark, as reported in the results table below.
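The Spearman correlation used here measures how well a model's scores for machine-generated summaries track human quality judgments; in the MTEB setup, the model score for each summary comes from embedding cosine similarity to the human-written reference summaries, and the leaderboard values (around 28-32) appear to be this correlation scaled by 100. Below is a minimal sketch of the metric computation with illustrative toy data; scipy's spearmanr stands in for MTEB's internal implementation.

```python
# Minimal sketch of the metric behind this leaderboard: Spearman rank
# correlation between model-derived scores for machine summaries and the
# corresponding human quality judgments. The arrays below are illustrative.
from scipy.stats import spearmanr

model_scores = [0.71, 0.42, 0.88, 0.35, 0.60]  # e.g. cosine similarity of each machine summary to the human references
human_scores = [3.5, 2.0, 4.5, 2.5, 3.0]       # human quality judgments for the same machine summaries

rho, p_value = spearmanr(model_scores, human_scores)
print(f"Spearman correlation: {rho:.4f}")      # leaderboard values (e.g. 31.57) appear to correspond to rho * 100
```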
Model Name              | Spearman Correlation | Paper Title
MPNet-multilingual      | 31.57                | MTEB: Massive Text Embedding Benchmark
ST5-Base                | 31.39                | MTEB: Massive Text Embedding Benchmark
SimCSE-BERT-unsup       | 31.15                | MTEB: Massive Text Embedding Benchmark
MiniLM-L6               | 30.81                | MTEB: Massive Text Embedding Benchmark
MiniLM-L12-multilingual | 30.67                | MTEB: Massive Text Embedding Benchmark
GTR-XXL                 | 30.64                | MTEB: Massive Text Embedding Benchmark
Komninos                | 30.49                | MTEB: Massive Text Embedding Benchmark
Contriever              | 30.36                | MTEB: Massive Text Embedding Benchmark
SGPT-125M-nli           | 30.26                | MTEB: Massive Text Embedding Benchmark
GTR-XL                  | 30.21                | MTEB: Massive Text Embedding Benchmark
ST5-XXL                 | 30.08                | MTEB: Massive Text Embedding Benchmark
ST5-XL                  | 29.91                | MTEB: Massive Text Embedding Benchmark
BERT                    | 29.82                | MTEB: Massive Text Embedding Benchmark
GTR-Base                | 29.67                | MTEB: Massive Text Embedding Benchmark
ST5-Large               | 29.64                | MTEB: Massive Text Embedding Benchmark
coCondenser-msmarco     | 29.50                | MTEB: Massive Text Embedding Benchmark
Glove                   | 28.87                | MTEB: Massive Text Embedding Benchmark
MiniLM-L12              | 27.90                | MTEB: Massive Text Embedding Benchmark
SPECTER                 | 27.66                | MTEB: Massive Text Embedding Benchmark
MPNet                   | 27.49                | MTEB: Massive Text Embedding Benchmark
(20 of 26 leaderboard entries are shown on this page.)
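In principle, the numbers in the table above can be reproduced with the open-source mteb package, which wraps the summarization task (SummEval) behind this leaderboard. A minimal sketch follows, assuming the mteb and sentence-transformers packages are installed; the model checkpoint is illustrative, and the exact task name and API may vary across mteb versions.

```python
# Minimal sketch: running the MTEB summarization task for one embedding model.
# Assumes `pip install mteb sentence-transformers`; the checkpoint is illustrative.
from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# "SummEval" is the English summarization task used for this leaderboard;
# its headline score is the Spearman correlation shown in the table above.
evaluation = MTEB(tasks=["SummEval"])
results = evaluation.run(model, output_folder="results/all-MiniLM-L6-v2")
print(results)
```

The scores written to the output folder can then be compared against the table; small differences are expected across library and dataset versions.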