Machine Translation on WMT2016 English
Metrics
BLEU score
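All results below are corpus-level BLEU on a 0-100 scale. As a rough illustration of how such a score is computed, here is a minimal sketch using the sacrebleu library; note that the exact BLEU variant (tokenization, casing, number of references) can differ between the papers listed on this leaderboard, and the example sentences here are hypothetical.

```python
# Minimal sketch: corpus-level BLEU with sacrebleu.
# The hypothesis/reference sentences are made-up placeholders;
# a real evaluation would use full WMT2016 test-set translations.
import sacrebleu

hypotheses = ["the cat sat on the mat"]
references = [["the cat is sitting on the mat"]]  # one list per reference set

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU: {bleu.score:.2f}")  # reported on a 0-100 scale
```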
Results
Performance results of various models on this benchmark.
Comparison table
Model name | BLEU score |
---|---|
phrase-based-neural-unsupervised-machine | 25.13 |
deterministic-non-autoregressive-neural | 29.66 |
convolutional-sequence-to-sequence-learning | 29.9 |
incorporating-a-local-translation-mechanism | 32.87 |
non-autoregressive-neural-machine-translation-1 | 29.79 |
delight-very-deep-and-light-weight | 34.7 |
the-qt21himl-combined-machine-translation | 28.9 |
phrase-based-neural-unsupervised-machine | 21.33 |
incorporating-a-local-translation-mechanism | 30.74 |
a-convolutional-encoder-model-for-neural | 27.8 |
flowseq-non-autoregressive-conditional | 31.97 |
finetuned-language-models-are-zero-shot | 20.5 |
a-convolutional-encoder-model-for-neural | 27.5 |
textbox-2-0-a-text-generation-library-with | - |
edinburgh-neural-machine-translation-systems | 28.1 |
flowseq-non-autoregressive-conditional | 29.26 |
flowseq-non-autoregressive-conditional | 29.86 |
finetuned-language-models-are-zero-shot | 18.9 |
phrase-based-neural-unsupervised-machine | 21.18 |
flowseq-non-autoregressive-conditional | 32.35 |
flowseq-non-autoregressive-conditional | 31.08 |