
Constituency Grammar Induction on PTB

Metrics

Max F1 (WSJ)
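
The metric is the unlabeled bracketing F1 between induced constituency trees and the gold-standard trees on the WSJ section of the Penn Treebank; "Max" typically denotes the best score across runs or configurations. Below is a minimal sketch of sentence-level unlabeled F1. The nested-list tree representation, the discarding of trivial spans, and the example sentence are assumptions made for illustration, not details taken from this page or from any specific paper's evaluation script.

```python
# Minimal sketch of unlabeled bracketing F1, the quantity behind "Max F1 (WSJ)".
# Assumptions (not from the source): trees are nested Python lists of tokens,
# trivial spans (the whole sentence, single words) are discarded, and
# sentence-level F1 is averaged over the corpus.

def constituent_spans(tree, start=0):
    """Return the set of (start, end) spans of constituents in a nested-list
    tree, plus the number of leaves the tree covers."""
    if isinstance(tree, str):          # a leaf token covers one position
        return set(), 1
    spans, length = set(), 0
    for child in tree:
        child_spans, child_len = constituent_spans(child, start + length)
        spans |= child_spans
        length += child_len
    spans.add((start, start + length))
    return spans, length

def sentence_f1(pred_tree, gold_tree):
    """Unlabeled F1 between predicted and gold constituents of one sentence."""
    pred, n = constituent_spans(pred_tree)
    gold, _ = constituent_spans(gold_tree)
    # Drop the trivial whole-sentence span; single-word spans never appear,
    # since leaves are not added as constituents above.
    pred.discard((0, n))
    gold.discard((0, n))
    if not pred and not gold:
        return 1.0
    overlap = len(pred & gold)
    precision = overlap / len(pred) if pred else 0.0
    recall = overlap / len(gold) if gold else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Example: gold ((the cat) (sat down)) vs. predicted (the (cat (sat down)))
gold = [["the", "cat"], ["sat", "down"]]
pred = ["the", ["cat", ["sat", "down"]]]
print(sentence_f1(pred, gold))  # 0.5: one of two non-trivial predicted spans matches
```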

Results

Performance results of various models on this benchmark

Comparison table
Model name | Max F1 (WSJ)
neural-language-modeling-by-jointly-learning | 38.1
ensemble-distillation-for-unsupervised- | -
augmenting-transformers-with-recursively- | -
ordered-neurons-integrating-tree-structures | 49.4
unsupervised-latent-tree-induction-with-deep-1 | 56.2
structural-optimization-ambiguity-and | 68.4
ensemble-distillation-for-unsupervised | 71.9
co-training-an-unsupervised-constituency | 66.8
compound-probabilistic-context-free-grammars | 60.1
pcfgs-can-do-better-inducing-probabilistic | 61.4
ordered-neurons-integrating-tree-structures | 50.0
unsupervised-latent-tree-induction-with-deep-1 | 49.6
unsupervised-recurrent-neural-network | 52.4
generative-pretrained-structured-transformers- | -
compound-probabilistic-context-free-grammars | 52.6
structural-optimization-ambiguity-and | 70.3
unsupervised-parsing-with-s-diora-single-tree | 63.96
dynamic-programming-in-rank-space-scaling-1- | -
on-eliciting-syntax-from-language-models-via | 64.1
neural-language-modeling-by-jointly-learning | 47.9
unsupervised-learning-of-syntactic-structure- | -
neural-bi-lexicalized-pcfg-induction- | -
fast-r2d2-a-pretrained-recursive-neural- | -