HyperAI
Discourse Parsing on RST-DT
Metrics
RST-Parseval (Nuclearity)
RST-Parseval (Relation)
RST-Parseval (Span)
Results
Performance results of various models on this benchmark
| Model Name | RST-Parseval (Nuclearity) | RST-Parseval (Relation) | RST-Parseval (Span) | Paper Title |
| --- | --- | --- | --- | --- |
| Bottom-up Linear-chain CRF-based Parser | 71.0 | 58.2 | 85.7 | - |
| Top-down (XLNet) | - | - | - | A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing |
| Top-down Llama 2 (13B) | - | - | - | Can we obtain significant success in RST discourse parsing by using Large Language Models? |
| Top-down (SpanBERT) | - | - | - | A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing |
| Guz et al. (2020) | - | - | - | Unleashing the Power of Neural Discourse Parsers -- A Context and Structure Aware Approach Using Large Scale Pretraining |
| Re-implemented HILDA RST parser | 66.6* | 54.6* | 82.6* | - |
| Greedy Bottom-up Parser with Syntactic Features | 67.1* | 55.4* | 82.6* | - |
| HILDA Parser | 68.4 | 55.3 | 83.0 | A Novel Discourse Parser Based on Support Vector Machine Classification |
| Bottom-up Llama 2 (70B) | - | - | - | Can we obtain significant success in RST discourse parsing by using Large Language Models? |
| LSTM Dynamic | - | - | - | Top-down Discourse Parsing via Sequence Labelling |
| Two-stage Parser | 72.4 | 59.7 | 86.0 | A Two-Stage Parsing Method for Text-Level Discourse Analysis |
| Bottom-up (DeBERTa) | - | - | - | A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing |
| DMRST | - | - | - | Bilingual Rhetorical Structure Parsing with Large Parallel Annotations |
| LSTM Sequential Discourse Parser (Braud et al., 2016) | 63.6* | 47.7* | 79.7* | Multi-view and multi-task training of RST discourse parsers |
| Transformer (dynamic) | - | - | - | Top-down Discourse Parsing via Sequence Labelling |
| Bottom-up Llama 2 (13B) | - | - | - | Can we obtain significant success in RST discourse parsing by using Large Language Models? |
| Top-down Span-based Parser with Silver Agreement Subtrees | 74.7 | 62.5 | 86.8 | Improving Neural RST Parsing Model with Silver Agreement Subtrees |
| Bottom-up (RoBERTa) | - | - | - | A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing |
| End-to-end Top-down (XLNet) | 76.0 | 61.8 | 87.6 | RST Parsing from Scratch |
| Bottom-up (SpanBERT) | - | - | - | A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing |