HyperAI
Text Retrieval on MTEB
Metric: nDCG@10

Performance results of various models on this benchmark:
| Model Name | nDCG@10 | Paper Title |
| --- | --- | --- |
| SGPT-5.8B-msmarco | 50.25 | MTEB: Massive Text Embedding Benchmark |
| GTR-XXL | 48.48 | MTEB: Massive Text Embedding Benchmark |
| SGPT-BLOOM-7.1B-msmarco | 48.21 | MTEB: Massive Text Embedding Benchmark |
| GTR-XL | 47.96 | MTEB: Massive Text Embedding Benchmark |
| GTR-Large | 47.42 | MTEB: Massive Text Embedding Benchmark |
| SGPT-2.7B-msmarco | 46.54 | MTEB: Massive Text Embedding Benchmark |
| GTR-Base | 44.67 | MTEB: Massive Text Embedding Benchmark |
| SGPT-1.3B-msmarco | 44.49 | MTEB: Massive Text Embedding Benchmark |
| MPNet | 43.81 | MTEB: Massive Text Embedding Benchmark |
| MiniLM-L12 | 42.69 | MTEB: Massive Text Embedding Benchmark |
| ST5-XXL | 42.24 | MTEB: Massive Text Embedding Benchmark |
| MiniLM-L6 | 41.95 | MTEB: Massive Text Embedding Benchmark |
| Contriever | 41.88 | MTEB: Massive Text Embedding Benchmark |
| ST5-XL | 38.47 | MTEB: Massive Text Embedding Benchmark |
| SGPT-125M-msmarco | 37.04 | MTEB: Massive Text Embedding Benchmark |
| ST5-Large | 36.71 | MTEB: Massive Text Embedding Benchmark |
| MPNet-multilingual | 35.34 | MTEB: Massive Text Embedding Benchmark |
| ST5-Base | 33.63 | MTEB: Massive Text Embedding Benchmark |
| coCondenser-msmarco | 32.96 | MTEB: Massive Text Embedding Benchmark |
| MiniLM-L12-multilingual | 32.45 | MTEB: Massive Text Embedding Benchmark |
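The scores above are nDCG@10: discounted cumulative gain over the top 10 retrieved documents, normalized by the DCG of an ideal ranking, so a perfect ordering scores 1.0 (reported here as a percentage). A minimal sketch of the standard computation, assuming graded relevance labels per ranked result:

```python
import math

def dcg_at_k(relevances, k=10):
    # Discounted cumulative gain: each result's relevance is divided by
    # log2(rank + 1), so hits near the top of the ranking count more.
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k=10):
    # Normalize by the DCG of the ideal (descending-relevance) ordering,
    # which bounds the score to [0, 1].
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Relevance labels in retrieved order; two items are out of place,
# so the score falls just below 1.0 (about 0.96).
print(ndcg_at_k([3, 2, 3, 0, 1, 2]))
```

Benchmark-wide, each model's per-query nDCG@10 is averaged across the retrieval tasks to produce the single number in the table.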