Unsupervised Machine Translation on WMT2014
Metrics
BLEU
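As a rough illustration of the metric, corpus-level BLEU can be computed with a standard scorer such as sacrebleu. The snippet below is a minimal sketch with made-up example sentences; it is not the evaluation pipeline used by the papers listed in the results table, which score on the official WMT2014 test sets.

```python
# Minimal sketch of corpus-level BLEU scoring with sacrebleu.
# The sentences here are illustrative only.
import sacrebleu

# System outputs (translations produced by a model)
hypotheses = [
    "The cat sits on the mat .",
    "There is a book on the table .",
]

# One reference translation per sentence; sacrebleu expects a list of
# reference streams, each aligned with the hypotheses.
references = [[
    "The cat is sitting on the mat .",
    "A book is on the table .",
]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.1f}")
```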
Results
Performance results of various models on this benchmark
Model name | BLEU | Paper Title | Repository |
---|---|---|---|
GPT-3 175B (Few-Shot) | 39.2 | Language Models are Few-Shot Learners | |
MASS (6-layer Transformer) | 34.9 | MASS: Masked Sequence to Sequence Pre-training for Language Generation | |
SMT + NMT (tuning and joint refinement) | 33.5 | An Effective Approach to Unsupervised Machine Translation | |
SMT | 25.9 | Unsupervised Statistical Machine Translation | |
SMT as posterior regularization | 28.9 | Unsupervised Neural Machine Translation with SMT as Posterior Regularization | |
MLM pretraining for encoder and decoder | 33.3 | Cross-lingual Language Model Pretraining | |
PBSMT + NMT | 27.7 | Phrase-Based & Neural Unsupervised Machine Translation | |