HyperAI超神经

Constituency Parsing On Penn Treebank

Evaluation Metric

F1 score
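The F1 score reported here is the standard labeled bracketing F1 (EVALB-style): the harmonic mean of precision and recall over labeled constituent spans shared between the predicted and gold parse trees. A minimal sketch, with hypothetical example spans:

```python
def bracket_f1(gold, pred):
    """Labeled bracketing F1 over sets of (label, start, end) constituent spans."""
    matched = len(gold & pred)                      # spans with matching label and boundaries
    precision = matched / len(pred) if pred else 0.0
    recall = matched / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical 5-word sentence: prediction gets 2 of 3 gold constituents right.
gold = {("NP", 0, 2), ("VP", 2, 5), ("S", 0, 5)}
pred = {("NP", 0, 2), ("VP", 2, 5), ("S", 0, 4)}
print(round(bracket_f1(gold, pred), 4))  # → 0.6667
```

Official scoring additionally applies normalization rules (e.g. ignoring punctuation), so this simplified version is illustrative only.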

Evaluation Results

Performance of each model on this benchmark

| Model Name | F1 score | Paper Title | Repository |
| --- | --- | --- | --- |
| Model combination | 94.66 | Improving Neural Parsing by Disentangling Model Combination and Reranking Effects | - |
| Label Attention Layer + HPSG + XLNet | 96.38 | Rethinking Self-Attention: Towards Interpretability in Neural Parsing | |
| RNN Grammar | 93.3 | Recurrent Neural Network Grammars | |
| Tetra Tagging | 95.44 | Tetra-Tagging: Word-Synchronous Parsing with Linear-Time Inference | |
| Head-Driven Phrase Structure Grammar Parsing (Joint) + XLNet | 96.33 | Head-Driven Phrase Structure Grammar Parsing on Penn Treebank | |
| Hashing + XLNet | 96.43 | To be Continuous, or to be Discrete, Those are Bits of Questions | |
| NFC + BERT-large | 95.92 | Investigating Non-local Features for Neural Constituency Parsing | |
| Stack-only RNNG | 93.6 | What Do Recurrent Neural Network Grammars Learn About Syntax? | |
| CRF Parser + RoBERTa | 96.32 | Fast and Accurate Neural CRF Constituency Parsing | |
| Transformer | 92.7 | Attention Is All You Need | |
| Self-attentive encoder + ELMo | 95.13 | Constituency Parsing with a Self-Attentive Encoder | |
| SAPar + XLNet | 96.40 | Improving Constituency Parsing with Span Attention | |
| Semi-supervised LSTM-LM | 93.8 | - | - |
| CNN Large + fine-tune | 95.6 | Cloze-driven Pretraining of Self-attention Networks | - |
| N-ary semi-markov + BERT-large | 95.92 | N-ary Constituent Tree Parsing with Recursive Semi-Markov Model | |
| Head-Driven Phrase Structure Grammar Parsing (Joint) + BERT | 95.84 | Head-Driven Phrase Structure Grammar Parsing on Penn Treebank | |
| Parse fusion | 92.6 | - | - |
| LSTM Encoder-Decoder + LSTM-LM | 94.47 | Direct Output Connection for a High-Rank Language Model | |
| Attach-Juxtapose Parser + XLNet | 96.34 | Strongly Incremental Constituency Parsing with Graph Neural Networks | |
| Self-training | 92.1 | Effective Self-Training for Parsing | |