Question Answering on NQ (BEIR)
Evaluation metric
nDCG@10
Evaluation results
Performance of each model on this benchmark:
| Model | nDCG@10 | Paper Title | Repository |
|---|---|---|---|
| Blended RAG | 0.67 | Blended RAG: Improving RAG (Retriever-Augmented Generation) Accuracy with Semantic Search and Hybrid Query-Based Retrievers | |
| SGPT-BE-5.8B | 0.524 | SGPT: GPT Sentence Embeddings for Semantic Search | |
| SGPT-CE-6.1B | 0.401 | SGPT: GPT Sentence Embeddings for Semantic Search | |
| ColBERT | 0.524 | BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models | |
| BM25+CE | 0.533 | BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models | |
| monoT5-3B | 0.633 | No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval | |
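For reference, the nDCG@10 metric used above can be sketched in a few lines. This is a minimal illustration assuming binary relevance judgments (as in NQ); the official BEIR evaluation computes the same quantity via a TREC-style evaluation tool rather than hand-rolled code.

```python
import math

def dcg_at_k(relevances, k=10):
    # Discounted cumulative gain: each result's relevance is discounted
    # by log2 of its (1-based) rank + 1, summed over the top-k results.
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k=10):
    # Normalize by the DCG of the ideal ordering (relevances sorted
    # in descending order), so a perfect ranking scores 1.0.
    ideal_dcg = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Hypothetical example: binary relevance labels for one query's
# ranked top-10 documents (1 = relevant, 0 = not relevant).
ranked = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]
print(round(ndcg_at_k(ranked, 10), 3))  # → 0.871
```

A leaderboard score such as 0.67 is this per-query value averaged over all queries in the test set.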