Unsupervised Machine Translation on WMT2016
Metrics
BLEU
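
BLEU scores like those in the table below are usually computed at corpus level with a standard scorer. The following is a minimal sketch using the sacreBLEU library; the hypothesis and reference sentences are hypothetical placeholders, not data from this benchmark.

```python
# Minimal sketch of corpus-level BLEU scoring with sacreBLEU
# (pip install sacrebleu). Sentences are hypothetical examples.
import sacrebleu

# System outputs (detokenized) and the matching reference translations.
hypotheses = [
    "the cat sat on the mat",
    "unsupervised translation needs no parallel data",
]
references = [
    "the cat sat on the mat",
    "unsupervised translation requires no parallel data",
]

# sacreBLEU expects one hypothesis stream and a list of reference streams.
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU = {bleu.score:.1f}")
```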
Results
Performance results of various models on this benchmark
Model Name | BLEU | Paper Title | Repository |
---|---|---|---|
GPT-3 175B (Few-Shot) | 21 | Language Models are Few-Shot Learners | |
MLM pretraining for encoder and decoder | 33.3 | Cross-lingual Language Model Pretraining | |
MASS (6-layer Transformer) | 35.2 | MASS: Masked Sequence to Sequence Pre-training for Language Generation | |