HyperAI
Question Answering on NarrativeQA

Metrics
BLEU-1
BLEU-4
METEOR
Rouge-L
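These are standard n-gram and subsequence overlap metrics for generative QA. As a rough illustration only (the leaderboard's official evaluation scripts may differ, e.g. corpus-level BLEU with smoothing or stemming in METEOR), BLEU-1 is clipped unigram precision with a brevity penalty, and Rouge-L is an F-score over the longest common subsequence. A minimal sketch:

```python
import math
from collections import Counter


def bleu1(candidate: str, reference: str) -> float:
    """Simplified sentence-level BLEU-1: clipped unigram precision * brevity penalty."""
    cand = candidate.lower().split()
    ref = reference.lower().split()
    if not cand:
        return 0.0
    # Clipped counts: each candidate token is credited at most as many
    # times as it occurs in the reference.
    overlap = sum((Counter(cand) & Counter(ref)).values())
    precision = overlap / len(cand)
    # Brevity penalty discourages overly short candidates.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision


def rouge_l(candidate: str, reference: str) -> float:
    """Simplified Rouge-L: F1 over the longest common subsequence of tokens."""
    cand = candidate.lower().split()
    ref = reference.lower().split()
    if not cand or not ref:
        return 0.0
    # Dynamic-programming table for LCS length.
    dp = [[0] * (len(ref) + 1) for _ in range(len(cand) + 1)]
    for i, c in enumerate(cand, 1):
        for j, r in enumerate(ref, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if c == r else max(dp[i - 1][j], dp[i][j - 1])
    lcs = dp[-1][-1]
    precision, recall = lcs / len(cand), lcs / len(ref)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

For example, `bleu1("the cat sat", "the cat sat on the mat")` has perfect unigram precision but is penalized for brevity, while `rouge_l` on the same pair scores recall against the full reference.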
Results

Performance of various models on this benchmark.
| Model | BLEU-1 | BLEU-4 | METEOR | Rouge-L | Paper |
|---|---|---|---|---|---|
| Masque (NarrativeQA only) | 48.7 | 20.98 | 21.95 | 54.74 | Multi-style Generative Reading Comprehension |
| MHPGM + NOIC | 43.63 | 21.07 | 19.03 | 44.16 | Commonsense for Generative Multi-Hop Question Answering Tasks |
| DecaProp | 44.35 | 27.61 | 21.80 | 44.69 | Densely Connected Attention Propagation for Reading Comprehension |
| BERT-QA with Hard EM objective | - | - | - | 58.8 | A Discrete Hard EM Approach for Weakly Supervised Question Answering |
| FiD+Distil | 35.3 | 7.5 | 11.1 | 32 | Distilling Knowledge from Reader to Retriever for Question Answering |
| ConZNet | 42.76 | 22.49 | 19.24 | 46.67 | Cut to the Chase: A Context Zoom-in Network for Reading Comprehension |
| Masque (NarrativeQA + MS MARCO) | 54.11 | 30.43 | 26.13 | 59.87 | Multi-style Generative Reading Comprehension |
| Oracle IR Models | 54.60/55.55 | 26.71/27.78 | - | - | The NarrativeQA Reading Comprehension Challenge |
| BiAttention + DCU-LSTM | 36.55 | 19.79 | 17.87 | 41.44 | Multi-Granular Sequence Encoding via Dilated Compositional Units for Reading Comprehension |
| BiDAF | 33.45 | 15.69 | 15.68 | 36.74 | Bidirectional Attention Flow for Machine Comprehension |