Unsupervised Machine Translation on WMT2014
Evaluation Metric
BLEU
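BLEU scores n-gram overlap between system output and reference translations, combining modified n-gram precisions with a brevity penalty. As a minimal sketch of how corpus-level BLEU like the scores below is typically computed (assuming the sacrebleu package, the standard tool for WMT evaluation; the example sentences are hypothetical placeholders, not WMT2014 data):

```python
# Minimal sketch: corpus-level BLEU with sacrebleu (pip install sacrebleu).
# The hypothesis/reference strings are hypothetical placeholders.
import sacrebleu

# System translations, one string per segment.
hypotheses = [
    "The cat sat on the mat.",
    "There is a book on the table.",
]

# One reference stream: a list of reference strings aligned with the
# hypotheses. Additional reference streams can be passed as extra lists.
references = [
    [
        "The cat sat on the mat.",
        "A book is on the table.",
    ]
]

# corpus_bleu aggregates n-gram statistics over the whole corpus before
# computing the score, matching how WMT leaderboard results are reported.
bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")
```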
Evaluation Results
Performance of each model on this benchmark:
| Model | BLEU | Paper Title | Repository |
|---|---|---|---|
| GPT-3 175B (Few-Shot) | 32.6 | Language Models are Few-Shot Learners | |
| MASS (6-layer Transformer) | 37.5 | MASS: Masked Sequence to Sequence Pre-training for Language Generation | |
| PBSMT + NMT | 27.6 | Phrase-Based & Neural Unsupervised Machine Translation | |
| BERT-fused NMT | 38.27 | Incorporating BERT into Neural Machine Translation | |
| MLM pretraining for encoder and decoder | 33.4 | Cross-lingual Language Model Pretraining | |
| SMT + NMT (tuning and joint refinement) | 36.2 | An Effective Approach to Unsupervised Machine Translation | |
| SMT as posterior regularization | 29.5 | Unsupervised Neural Machine Translation with SMT as Posterior Regularization | |