Question Answering On Natural Questions Long
Metrics
EM
Results
Performance results of various models on this benchmark; a minimal sketch of how EM is typically computed follows the table.
| Model Name | EM | Paper Title | Repository |
|---|---|---|---|
| FiE | 58.4 | 0.8% Nyquist computational ghost imaging via non-experimental deep learning | - |
| DensePhrases | 71.9 | Learning Dense Representations of Phrases at Scale | - |
| R2-D2 w HN-DPR | 55.9 | R2-D2: A Modular Baseline for Open-Domain Question Answering | - |
| UnitedQA (Hybrid) | 54.7 | UnitedQA: A Hybrid Approach for Open Domain Question Answering | - |
| BERTwwm + SQuAD 2 | - | Frustratingly Easy Natural Question Answering | - |
| Cluster-Former (#C=512) | - | Cluster-Former: Clustering-based Sparse Transformer for Long-Range Dependency Encoding | - |
| DrQA | - | Reading Wikipedia to Answer Open-Domain Questions | - |
| Locality-Sensitive Hashing | - | Reformer: The Efficient Transformer | - |
| UniK-QA | 54.9 | UniK-QA: Unified Representations of Structured and Unstructured Knowledge for Open-Domain Question Answering | - |
| BERTjoint | - | A BERT Baseline for the Natural Questions | - |
| Sparse Attention | - | Generating Long Sequences with Sparse Transformers | - |
| BPR (linear scan; l=1000) | 41.6 | Efficient Passage Retrieval with Hashing for Open-domain Question Answering | - |
| DecAtt + DocReader | - | Natural Questions: a Benchmark for Question Answering Research | - |
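The EM column reports exact-match accuracy: the fraction of questions for which the predicted answer string matches a gold answer after normalization, reported as a percentage. The sketch below follows the common SQuAD-style normalization (lowercasing, stripping punctuation and articles, collapsing whitespace); the helper names and toy predictions are illustrative assumptions, and the evaluation script used for this leaderboard may differ in detail.

```python
import re
import string


def normalize_answer(s: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace (SQuAD-style)."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())


def exact_match(prediction: str, gold_answers: list[str]) -> float:
    """1.0 if the normalized prediction equals any normalized gold answer, else 0.0."""
    pred = normalize_answer(prediction)
    return float(any(pred == normalize_answer(g) for g in gold_answers))


# Toy example: benchmark EM is the mean over all questions, as a percentage.
preds = {"q1": "the Eiffel Tower", "q2": "1969"}
golds = {"q1": ["Eiffel Tower"], "q2": ["1968"]}
em = 100.0 * sum(exact_match(preds[q], golds[q]) for q in preds) / len(preds)
print(f"EM = {em:.1f}")  # -> EM = 50.0
```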