Constituency Grammar Induction on PTB
Metrics
Max F1 (WSJ)
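For reference, the sketch below shows one common way a score of this kind is computed for constituency grammar induction: unlabeled bracketing F1 between predicted and gold constituent spans, micro-averaged over the corpus, with the "max" taken over several independent training runs. The span representation, the micro-averaging, and the max-over-seeds convention are assumptions made for illustration; the benchmark page itself does not spell out these details.

```python
from typing import List, Set, Tuple

# A constituent span over tokens i..j (end exclusive), e.g. (0, 3).
Span = Tuple[int, int]


def corpus_f1(predicted: List[Set[Span]], gold: List[Set[Span]]) -> float:
    """Unlabeled bracketing F1, micro-averaged over all sentences."""
    tp = fp = fn = 0
    for pred_spans, gold_spans in zip(predicted, gold):
        tp += len(pred_spans & gold_spans)   # spans found in both trees
        fp += len(pred_spans - gold_spans)   # predicted but not gold
        fn += len(gold_spans - pred_spans)   # gold but not predicted
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


def max_f1(runs: List[List[Set[Span]]], gold: List[Set[Span]]) -> float:
    """'Max F1': the best corpus F1 across independent training runs (seeds)."""
    return max(corpus_f1(run, gold) for run in runs)
```

Note that conventions vary across papers (sentence-level vs. corpus-level averaging, and whether trivial whole-sentence or single-token spans are discarded), which is one reason scores reported by different papers are not always directly comparable.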
Results
Performance of each model on this benchmark, reported as Max F1 on the WSJ test set.
Comparison Table
| Model Name | Max F1 (WSJ) |
|---|---|
| ensemble-distillation-for-unsupervised | 71.9 |
| structural-optimization-ambiguity-and | 70.3 |
| structural-optimization-ambiguity-and | 68.4 |
| co-training-an-unsupervised-constituency | 66.8 |
| on-eliciting-syntax-from-language-models-via | 64.1 |
| unsupervised-parsing-with-s-diora-single-tree | 63.96 |
| pcfgs-can-do-better-inducing-probabilistic | 61.4 |
| compound-probabilistic-context-free-grammars | 60.1 |
| unsupervised-latent-tree-induction-with-deep-1 | 56.2 |
| compound-probabilistic-context-free-grammars | 52.6 |
| unsupervised-recurrent-neural-network | 52.4 |
| ordered-neurons-integrating-tree-structures | 50.0 |
| unsupervised-latent-tree-induction-with-deep-1 | 49.6 |
| ordered-neurons-integrating-tree-structures | 49.4 |
| neural-language-modeling-by-jointly-learning | 47.9 |
| neural-language-modeling-by-jointly-learning | 38.1 |
| ensemble-distillation-for-unsupervised | - |
| augmenting-transformers-with-recursively | - |
| generative-pretrained-structured-transformers | - |
| dynamic-programming-in-rank-space-scaling-1 | - |
| unsupervised-learning-of-syntactic-structure | - |
| neural-bi-lexicalized-pcfg-induction | - |
| fast-r2d2-a-pretrained-recursive-neural | - |