Passage Retrieval on MS MARCO (BEIR)
Metric: nDCG@10

Performance results of various models on this benchmark, sorted by nDCG@10:

| Model Name | nDCG@10 | Paper Title |
|---|---|---|
| BM25+CE | 0.413 | BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models |
| TAS-b | 0.408 | BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models |
| ColBERT | 0.401 | BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models |
| SGPT-BE-5.8B | 0.399 | SGPT: GPT Sentence Embeddings for Semantic Search |
| ANCE | 0.388 | BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models |
| SPARTA | 0.351 | BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models |
| docT5query | 0.338 | BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models |
| DeepCT | 0.296 | BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models |
| SGPT-CE-6.1B | 0.290 | SGPT: GPT Sentence Embeddings for Semantic Search |
| SGPT-CE-2.7B | 0.278 | SGPT: GPT Sentence Embeddings for Semantic Search |
| BM25 | 0.228 | BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models |
| DPR | 0.177 | BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models |
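For reference, nDCG@10 scores a ranked result list by summing discounted gains over the top 10 retrieved documents and normalizing by the ideal (perfectly sorted) ranking. The sketch below shows one common way to compute the metric for a single query; the query, document ids, and relevance labels are hypothetical, and the official benchmark numbers above come from the BEIR evaluation tooling rather than this snippet (implementations can differ, e.g. in the gain formulation).

```python
import math

def dcg_at_k(relevances, k=10):
    """Discounted cumulative gain over the top-k results.

    `relevances` are graded relevance labels in ranked order.
    Uses the 2^rel - 1 gain, one common nDCG formulation.
    """
    return sum(
        (2 ** rel - 1) / math.log2(rank + 2)  # rank is 0-based, hence +2
        for rank, rel in enumerate(relevances[:k])
    )

def ndcg_at_k(ranked_doc_ids, qrels, k=10):
    """nDCG@k for one query.

    `ranked_doc_ids`: doc ids in the order the system returned them.
    `qrels`: dict mapping doc id -> graded relevance label.
    """
    gains = [qrels.get(doc_id, 0) for doc_id in ranked_doc_ids]
    ideal = sorted(qrels.values(), reverse=True)
    ideal_dcg = dcg_at_k(ideal, k)
    return dcg_at_k(gains, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Hypothetical example: one query with relevance labels on a 0-2 scale.
qrels = {"d3": 2, "d7": 1, "d9": 1}
ranking = ["d3", "d1", "d7", "d2", "d9", "d4", "d5", "d6", "d8", "d0"]
print(round(ndcg_at_k(ranking, qrels, k=10), 3))
```

The leaderboard value is the mean of this per-query score over all queries in the MS MARCO (BEIR) evaluation set.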