| Model | F1 score | Paper | Code |
| ----- | -------- | ----- | ---- |
| Label Attention Layer + HPSG + XLNet | 96.38 | Rethinking Self-Attention: Towards Interpretability in Neural Parsing | |
| Attach-Juxtapose Parser + XLNet | 96.34 | Strongly Incremental Constituency Parsing with Graph Neural Networks | |
| Head-Driven Phrase Structure Grammar Parsing (Joint) + XLNet | 96.33 | Head-Driven Phrase Structure Grammar Parsing on Penn Treebank | |
| N-ary semi-Markov + BERT-large | 95.92 | N-ary Constituent Tree Parsing with Recursive Semi-Markov Model | |
| Head-Driven Phrase Structure Grammar Parsing (Joint) + BERT | 95.84 | Head-Driven Phrase Structure Grammar Parsing on Penn Treebank | |
| Self-attentive encoder + ELMo | 95.13 | Constituency Parsing with a Self-Attentive Encoder | |
| LSTM Encoder-Decoder + LSTM-LM | 94.47 | Direct Output Connection for a High-Rank Language Model | |
| Transformer | 92.7 | Attention Is All You Need | |