Grammatical Error Correction on UA-GEC
Metrics
F0.5
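F0.5 is the weighted F-score that favors precision over recall (beta = 0.5), the usual choice for grammatical error correction since proposing a wrong correction is generally considered worse than missing one. Below is a minimal sketch of how the score is computed from edit-level precision and recall; the function name and example values are illustrative and not taken from the benchmark's official scorer.

```python
def f_beta(precision: float, recall: float, beta: float = 0.5) -> float:
    """Weighted harmonic mean of precision and recall.

    With beta = 0.5, precision is weighted twice as heavily as recall,
    which matches the F0.5 metric reported on this leaderboard.
    """
    if precision == 0.0 and recall == 0.0:
        return 0.0
    beta_sq = beta ** 2
    return (1 + beta_sq) * precision * recall / (beta_sq * precision + recall)


# Example: precision = 0.80, recall = 0.50 -> F0.5 ~= 0.714
print(round(f_beta(0.80, 0.50), 3))
```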
Results
Performance of models reported on this benchmark, measured by F0.5.
Model Name | F0.5 | Paper Title | Repository |
---|---|---|---|
ChatGPT (zero-shot) | 27.4 | GPT-3.5 for Grammatical Error Correction | - |
mT5 large + 10M synth | 68.09 | A Low-Resource Approach to the Grammatical Error Correction of Ukrainian | - |
mBART-based model with synthetic data | 68.17 | Comparative study of models trained on synthetic data for Ukrainian grammatical error correction | - |
Llama + 1M BT + gold | 74.09 | To Err Is Human, but Llamas Can Learn It Too | - |
RedPenNet | 67.71 | RedPenNet for Grammatical Error Correction: Outputs to Tokens, Attentions to Spans | - |