Text Simplification on TurkCorpus
Metrics
BLEU
FKGL
SARI (EASSE>=0.2.1)
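Of these metrics, FKGL (Flesch-Kincaid Grade Level) is the simplest to reproduce: it combines average sentence length with average syllables per word, and lower scores indicate simpler text. The sketch below is a minimal, self-contained approximation; the syllable counter is a vowel-group heuristic (not the dictionary-based counts some toolkits use), so its scores will differ slightly from the reported numbers.

```python
import re

def fkgl(text: str) -> float:
    """Approximate Flesch-Kincaid Grade Level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59.
    Lower is simpler. Syllables are estimated heuristically."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)

    def syllables(word: str) -> int:
        # Count contiguous vowel groups as syllables.
        groups = re.findall(r"[aeiouy]+", word.lower())
        count = len(groups)
        if word.lower().endswith("e") and count > 1:
            count -= 1  # drop a likely-silent trailing 'e'
        return max(count, 1)

    total_syllables = sum(syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (total_syllables / len(words))
            - 15.59)
```

As a sanity check, a short sentence of one-syllable words scores lower (simpler) than one built from long words, which matches how the metric rewards shorter sentences and fewer syllables per word. SARI and BLEU, by contrast, compare system output against references; the EASSE toolkit referenced above is the standard implementation for the SARI scores in this table.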
Results
Performance results of various models on this benchmark
Comparison Table
Model Name | BLEU | FKGL | SARI (EASSE>=0.2.1)
---|---|---|---
metric-based-in-context-learning-a-case-study | 79.83 | 9.33 | 43.46 |
simple-and-effective-text-simplification | 74.49 | - | 36.70 |
dynamic-multi-level-multi-task-learning-for | 81.49 | - | 37.45 |
unsupervised-neural-text-simplification | 74.02 | - | 37.20 |
integrating-transformer-and-paraphrase-rules | - | - | 40.45 |
exploring-neural-text-simplification-models | 80.69 | - | 37.25 |
editnts-an-neural-programmer-interpreter | 86.69 | - | 38.22 |
the-gem-benchmark-natural-language-generation | - | - | - |
sentence-simplification-with-deep | 77.18 | - | 37.08 |
unsupervised-neural-text-simplification | - | - | 37.15 |
unsupervised-neural-text-simplification | - | - | 36.29 |
sentence-simplification-by-monolingual | - | - | 38.04 |
multilingual-unsupervised-sentence | - | 8.79 | 40.85 |
sentence-simplification-with-memory-augmented | 92.02 | - | 33.43 |
learning-how-to-simplify-from-explicit | - | - | 37.08* |
sentence-simplification-with-memory-augmented | 80.43 | - | 36.88 |
controllable-sentence-simplification | 72.53 | - | 41.38 |
hybrid-simplification-using-deep-semantics | 48.97* | - | 31.40* |
optimizing-statistical-machine-translation | 73.08* | - | 39.56 |
control-prefixes-for-text-generation | - | 7.74 | 42.32 |
text-simplification-by-tagging | - | - | 41.46 |
sentence-simplification-with-deep | 80.12 | - | 37.27 |
multilingual-unsupervised-sentence | 78.17 | 7.60 | 42.53 |
iterative-edit-based-unsupervised-sentence | 73.62 | - | 37.85 |