Machine Translation on WMT2016 Romanian
Metrics
BLEU score
Results
Performance results of various models on this benchmark
Comparison table
Model name | BLEU score |
---|---|
incorporating-a-local-translation-mechanism | 31.24 |
finetuned-language-models-are-zero-shot | 38.1 |
alleviating-the-inequality-of-attention-heads | 32.85 |
edinburgh-neural-machine-translation-systems | 33.3 |
finetuned-language-models-are-zero-shot | 37.3 |
non-autoregressive-neural-machine-translation-1 | 31.44 |
textbox-2-0-a-text-generation-library-with | - |
cross-lingual-language-model-pretraining | 35.3 |
flowseq-non-autoregressive-conditional | 32.03 |
flowseq-non-autoregressive-conditional | 32.91 |
deterministic-non-autoregressive-neural | 30.30 |
flowseq-non-autoregressive-conditional | 32.46 |
language-models-not-just-for-pre-training | 40.3 |
gentranslate-large-language-models-are | 33.5 |
flowseq-non-autoregressive-conditional | 30.69 |
flowseq-non-autoregressive-conditional | 30.16 |
adaptively-sparse-transformers | 33.1 |
alleviating-the-inequality-of-attention-heads | 32.95 |
levenshtein-transformer | 33.26 |
incorporating-a-local-translation-mechanism | 33.26 |
adaptively-sparse-transformers | 32.89 |
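The BLEU scores above are typically computed at corpus level with a tool such as sacreBLEU and reported scaled by 100. As an illustration of what the metric measures, here is a minimal, stdlib-only sketch of sentence-level BLEU with a single reference (modified n-gram precisions up to 4-grams, geometric mean, brevity penalty); it omits the smoothing and tokenization details that production scorers apply, so its numbers will not match leaderboard values.

```python
import math
from collections import Counter


def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]


def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU in [0, 1] for one candidate/reference pair.

    Illustrative sketch only: single reference, whitespace tokenization,
    no smoothing (any zero n-gram precision makes the score 0).
    """
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clipped (modified) n-gram matches.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty: punish candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * geo_mean
```

A perfect match scores 1.0 (100 when scaled); multiplying by 100 gives numbers on the same scale as the table above.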