HyperAI초신경
Question Answering On Natural Questions Long
Evaluation Metric
EM
Evaluation Results
Performance of each model on this benchmark
| Model Name | EM | Paper Title | Repository |
|---|---|---|---|
| FiE | 58.4 | 0.8% Nyquist computational ghost imaging via non-experimental deep learning | - |
| DensePhrases | 71.9 | Learning Dense Representations of Phrases at Scale | - |
| R2-D2 w HN-DPR | 55.9 | R2-D2: A Modular Baseline for Open-Domain Question Answering | - |
| UnitedQA (Hybrid) | 54.7 | UnitedQA: A Hybrid Approach for Open Domain Question Answering | - |
| BERTwwm + SQuAD 2 | - | Frustratingly Easy Natural Question Answering | - |
| Cluster-Former (#C=512) | - | Cluster-Former: Clustering-based Sparse Transformer for Long-Range Dependency Encoding | - |
| DrQA | - | Reading Wikipedia to Answer Open-Domain Questions | - |
| Locality-Sensitive Hashing | - | Reformer: The Efficient Transformer | - |
| UniK-QA | 54.9 | UniK-QA: Unified Representations of Structured and Unstructured Knowledge for Open-Domain Question Answering | - |
| BERTjoint | - | A BERT Baseline for the Natural Questions | - |
| Sparse Attention | - | Generating Long Sequences with Sparse Transformers | - |
| BPR (linear scan; l=1000) | 41.6 | Efficient Passage Retrieval with Hashing for Open-domain Question Answering | - |
| DecAtt + DocReader | - | Natural Questions: a Benchmark for Question Answering Research | - |
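For reference, the EM (exact match) scores above are conventionally computed by comparing a normalized prediction string against each gold answer, SQuAD-style: lowercase, strip punctuation and English articles, collapse whitespace. A minimal sketch (function names here are illustrative, not the benchmark's official evaluation script):

```python
import re
import string

def normalize_answer(s: str) -> str:
    """SQuAD-style normalization: lowercase, drop punctuation,
    drop English articles, collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction: str, gold_answers: list[str]) -> int:
    """1 if the normalized prediction equals any normalized gold answer."""
    return int(any(normalize_answer(prediction) == normalize_answer(g)
                   for g in gold_answers))

# Toy examples (not from the benchmark); EM is reported as a percentage.
preds = [
    ("Barack Obama", ["Barack Obama", "Obama"]),   # match -> 1
    ("the Eiffel tower", ["Eiffel Tower"]),        # articles stripped -> 1
    ("Paris, France", ["Paris"]),                  # extra token -> 0
]
em = 100.0 * sum(exact_match(p, golds) for p, golds in preds) / len(preds)
```

Because EM requires the full normalized strings to be identical, it is stricter than token-overlap metrics such as F1, which is why EM numbers on this leaderboard are comparatively low.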