Grammatical Error Detection on CoNLL-2014 A1
Evaluation Metric
F0.5
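F0.5 is the F-beta score with beta = 0.5, which weights precision twice as heavily as recall; this suits error detection, where falsely flagging correct text is costlier than missing an error. A minimal sketch of the metric (not any system's official scorer; the example counts are hypothetical):

```python
def f_beta(precision: float, recall: float, beta: float = 0.5) -> float:
    """F-beta score; beta < 1 favours precision over recall."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Hypothetical counts: 40 true positives, 10 false positives, 60 false negatives.
tp, fp, fn = 40, 10, 60
p = tp / (tp + fp)  # precision = 0.8
r = tp / (tp + fn)  # recall    = 0.4
print(round(f_beta(p, r), 4))  # → 0.6667
```

Note that with these counts plain F1 would be 0.5333, so the beta = 0.5 weighting rewards the high-precision system.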
Evaluation Results
Performance of each model on this benchmark
Model | F0.5 | Paper Title | Repository
---|---|---|---
BiLSTM-JOINT (trained on FCE) | 22.14 | Jointly Learning to Label Sentences and Tokens | -
Bi-LSTM + POS (unrestricted data) | 36.1 | Auxiliary Objectives for Neural Error Detection Models | -
Ann+PAT+MT | 21.87 | Artificial Error Generation with Machine Translation and Syntactic Patterns | -
Bi-LSTM + LMcost (trained on FCE) | 17.86 | Semi-supervised Multitask Learning for Sequence Labeling | -
Bi-LSTM (unrestricted data) | 34.3 | Compositional Sequence Labeling Models for Error Detection in Learner Writing | -
Bi-LSTM (trained on FCE) | 16.4 | Compositional Sequence Labeling Models for Error Detection in Learner Writing | -
Bi-LSTM + POS (trained on FCE) | 17.5 | Auxiliary Objectives for Neural Error Detection Models | -
VERNet | 54.3 | Neural Quality Estimation with Multiple Hypotheses for Grammatical Error Correction | -