Grammatical Error Correction on CoNLL-2014
Metrics: F0.5, Precision, Recall
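F0.5 is the F-beta score with beta = 0.5, which weights precision twice as heavily as recall — appropriate for GEC, where a wrong "correction" is worse than a missed one. A minimal sketch of the standard F-beta formula (this is the generic definition, not HyperAI's or the benchmark's scoring code; the actual CoNLL-2014 numbers are computed edit-wise by the MaxMatch (M2) scorer, not from sentence-level precision/recall):

```python
def f_beta(precision: float, recall: float, beta: float = 0.5) -> float:
    """Generic F-beta score: beta < 1 favors precision, beta > 1 favors recall."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1.0 + b2) * precision * recall / (b2 * precision + recall)

# Plugging in the top row's reported precision/recall (83.9 / 47.5, as fractions)
# recovers its F0.5 of 72.8:
score = f_beta(0.839, 0.475)  # ≈ 0.7275
```

With beta = 1 the same formula reduces to the familiar harmonic mean of precision and recall (F1).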
Results

Performance results of various models on this benchmark:

| Model Name | F0.5 | Precision | Recall | Paper Title |
| --- | --- | --- | --- | --- |
| Ensembles of best 7 models + GRECO + GPT-rerank | 72.8 | 83.9 | 47.5 | Pillars of Grammatical Error Correction: Comprehensive Inspection Of Contemporary Approaches In The Era of Large Language Models |
| Majority-voting ensemble on best 7 models | 71.8 | 83.7 | 45.7 | Pillars of Grammatical Error Correction: Comprehensive Inspection Of Contemporary Approaches In The Era of Large Language Models |
| GRECO (voting+ESC) | 71.12 | 79.6 | 49.86 | System Combination via Quality Estimation for Grammatical Error Correction |
| GEC-DI (LM+GED) | 69.6 | 79.2 | 46.8 | Improving Seq2Seq Grammatical Error Correction via Decoding Interventions |
| Unsupervised GEC + cLang8 | 69.6 | 75.0 | 53.8 | Unsupervised Grammatical Error Correction Rivaling Supervised Methods |
| ESC | 69.51 | 81.48 | 43.78 | Frustratingly Easy System Combination for Grammatical Error Correction |
| T5 | 68.87 | - | - | A Simple Recipe for Multilingual Grammatical Error Correction |
| MoECE | 67.79 | 74.29 | 50.21 | Efficient and Interpretable Grammatical Error Correction with Mixture of Experts |
| SynGEC | 67.6 | 74.7 | 49.0 | SynGEC: Syntax-Enhanced Grammatical Error Correction with a Tailored GEC-Oriented Parser |
| Sequence tagging + token-level transformations + two-stage fine-tuning (+BERT, RoBERTa, XLNet) | 66.5 | 78.2 | 41.5 | GECToR -- Grammatical Error Correction: Tag, Not Rewrite |
| LM-Critic | 65.8 | - | - | LM-Critic: Language Models for Unsupervised Grammatical Error Correction |
| Sequence tagging + token-level transformations + two-stage fine-tuning (+XLNet) | 65.3 | 77.5 | 40.1 | GECToR -- Grammatical Error Correction: Tag, Not Rewrite |
| Transformer + Pre-train with Pseudo Data (+BERT) | 65.2 | - | - | Encoder-Decoder Models Can Benefit from Pre-trained Masked Language Models in Grammatical Error Correction |
| Transformer + Pre-train with Pseudo Data | 65.0 | - | - | An Empirical Study of Incorporating Pseudo Data into Grammatical Error Correction |
| VERNet | 63.7 | - | - | Neural Quality Estimation with Multiple Hypotheses for Grammatical Error Correction |
| BART | 63.0 | 69.9 | 45.1 | Stronger Baselines for Grammatical Error Correction Using Pretrained Encoder-Decoder Model |
| Sequence Labeling with edits using BERT, Faster inference | 61.2 | - | - | Parallel Iterative Edit Models for Local Sequence Transduction |
| Copy-augmented Model (4 Ensemble + Denoising Autoencoder) | 61.15 | 71.57 | 38.65 | Improving Grammatical Error Correction via Pre-Training a Copy-Augmented Architecture with Unlabeled Data |
| Sequence Labeling with edits using BERT, Faster inference (Single Model) | 59.7 | - | - | Parallel Iterative Edit Models for Local Sequence Transduction |
| CNN Seq2Seq + Quality Estimation | 56.52 | - | - | Neural Quality Estimation of Grammatical Error Correction |