Machine Translation on WMT2016 German-English
Metrics
BLEU score
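For reference, below is a minimal sketch of how corpus-level BLEU is typically computed with the sacrebleu library; the hypothesis/reference pairs are hypothetical placeholders, and the exact evaluation setup used by each listed system is not specified here.

```python
# Minimal sketch: corpus-level BLEU with sacrebleu.
# The sentences below are hypothetical placeholders, not benchmark data.
import sacrebleu

hypotheses = [
    "the cat sat on the mat",
    "there is a book on the table",
]
references = [
    "the cat sat on the mat",
    "a book is on the table",
]

# corpus_bleu expects a list of reference streams (one list per reference set).
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU = {bleu.score:.2f}")
```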
Results
Performance of various models on this benchmark, reported as BLEU score.
Comparison Table
| Model Name | BLEU score |
|---|---|
| linguistic-input-features-improve-neural | 32.9 |
| finetuned-language-models-are-zero-shot | 38.9 |
| exploiting-monolingual-data-at-scale-for | - |
| edinburgh-neural-machine-translation-systems | 38.6 |
| unsupervised-neural-machine-translation-with-1 | 14.62 |
| unsupervised-machine-translation-using | 13.33 |
| unsupervised-statistical-machine-translation | 23.05 |
| finetuned-language-models-are-zero-shot | 40.7 |