Unsupervised Machine Translation on WMT2016
Metrics
BLEU
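The scores below are corpus-level BLEU values. For orientation, a minimal sketch of how such a score is typically computed with the sacreBLEU Python package (the example sentences are illustrative placeholders, not benchmark data):

```python
# Minimal sketch: corpus-level BLEU with sacreBLEU.
# Assumes `pip install sacrebleu`; the hypothesis/reference pairs
# here are hypothetical examples, not taken from WMT2016.
import sacrebleu

# System outputs: one translated sentence per entry.
hypotheses = [
    "the cat sat on the mat",
    "there is a book on the table",
]

# One stream of references, aligned index-by-index with the hypotheses.
references = [[
    "the cat sat on the mat",
    "a book lies on the table",
]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.1f}")  # reported on a 0-100 scale
```

Leaderboard numbers are only comparable when computed with the same tokenization and BLEU variant, which is why a standardized tool like sacreBLEU is commonly used.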
Results
Performance of various models on this benchmark, reported as BLEU scores (higher is better).
Comparison Table
| Model | BLEU |
|---|---|
| GPT-3 (Language Models are Few-Shot Learners) | 39.5 |
| XLM (Cross-lingual Language Model Pretraining) | 31.8 |
| MASS (Masked Sequence to Sequence Pre-training for Language Generation) | 33.1 |