Constituency Grammar Induction
Constituency Grammar Induction on PTB
Evaluation metric: Max F1 (WSJ)
Evaluation results: performance of each model on this benchmark
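Max F1 (WSJ) is the unlabeled constituent F1 score against the gold Penn Treebank (WSJ) parses, commonly reported as the maximum over multiple training runs with different random seeds. As a rough illustration of how this metric is usually computed, here is a minimal sketch, assuming each tree is reduced to a set of (start, end) constituent spans; the function names and the sentence-level averaging convention are illustrative assumptions, not the official evaluation code of HyperAI or of any listed paper.

```python
from typing import List, Set, Tuple

# A constituent span: (start, end) token indices, end exclusive. (Assumed representation.)
Span = Tuple[int, int]

def unlabeled_f1(gold: Set[Span], pred: Set[Span]) -> float:
    """Unlabeled span F1 between one gold tree and one predicted tree."""
    if not gold or not pred:
        return 0.0
    matched = len(gold & pred)          # spans present in both trees
    precision = matched / len(pred)
    recall = matched / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def corpus_f1(gold_trees: List[Set[Span]], pred_trees: List[Set[Span]]) -> float:
    """Sentence-level F1 averaged over the corpus (one common convention;
    some papers instead pool span counts over all sentences)."""
    scores = [unlabeled_f1(g, p) for g, p in zip(gold_trees, pred_trees)]
    return sum(scores) / len(scores)

# Example: a 4-token sentence with gold spans {(0,4), (0,2), (2,4)}
# and a prediction that recovers two of the three spans.
print(corpus_f1([{(0, 4), (0, 2), (2, 4)}], [{(0, 4), (0, 2), (1, 4)}]))
```

The "Max" in the column name would then correspond to taking the best corpus-level score across independently trained models, rather than the mean.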
| Model | Max F1 (WSJ) | Paper Title |
| --- | --- | --- |
| PRPN | 38.1 | Neural Language Modeling by Jointly Learning Syntax and Lexicon |
| Ensemble (Selective MBR) | - | Ensemble Distillation for Unsupervised Constituency Parsing |
| ReCAT | - | Augmenting Transformers with Recursively Composed Multi-grained Representations |
| ON-LSTM | 49.4 | Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks |
| DIORA (+PP) | 56.2 | Unsupervised Latent Tree Induction with Deep Inside-Outside Recursive Auto-Encoders |
| Parse-Focused (NT=30) | 68.4 | Structural Optimization Ambiguity and Simplicity Bias in Unsupervised Neural Grammar Induction |
| Ensemble (Generative MBR) | 71.9 | Ensemble Distillation for Unsupervised Constituency Parsing |
| inside-outside co-training + weak supervision | 66.8 | Co-training an Unsupervised Constituency Parser with Weak Supervision |
| Compound PCFG | 60.1 | Compound Probabilistic Context-Free Grammars for Grammar Induction |
| TN-PCFG (p=500) | 61.4 | PCFGs Can Do Better: Inducing Probabilistic Context-Free Grammars with Many Symbols |
| ON-LSTM (tuned) | 50.0 | Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks |
| DIORA | 49.6 | Unsupervised Latent Tree Induction with Deep Inside-Outside Recursive Auto-Encoders |
| URNNG | 52.4 | Unsupervised Recurrent Neural Network Grammars |
| GPST (left-to-right parsing) | - | Generative Pretrained Structured Transformers: Unsupervised Syntactic Language Models at Scale |
| Neural PCFG | 52.6 | Compound Probabilistic Context-Free Grammars for Grammar Induction |
| Parse-Focused (NT=4500) | 70.3 | Structural Optimization Ambiguity and Simplicity Bias in Unsupervised Neural Grammar Induction |
| S-DIORA | 63.96 | Unsupervised Parsing with S-DIORA: Single Tree Encoding for Deep Inside-Outside Recursive Autoencoders |
| DP in rank space | - | Dynamic Programming in Rank Space: Scaling Structured Inference with Low-Rank HMMs and PCFGs |
| Hashing (Parserker 2) | 64.1 | On Eliciting Syntax from Language Models via Hashing |
| PRPN (tuned) | 47.9 | Neural Language Modeling by Jointly Learning Syntax and Lexicon |