Unsupervised Machine Translation on WMT2016 2
Metrics
BLEU
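The scores below are BLEU, which combines modified n-gram precision (typically up to 4-grams) with a brevity penalty that discounts candidates shorter than the reference. Leaderboard numbers are corpus-level scores scaled by 100. As a rough illustration of the metric, here is a minimal sentence-level BLEU sketch (uniform n-gram weights, single reference); it is a simplified stand-in, not the exact scorer used for these results:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count all n-grams of length n in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU with uniform weights and brevity penalty.
    Illustrative sketch only; real evaluations use corpus-level BLEU."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = ngrams(candidate, n)
        ref = ngrams(reference, n)
        # Clipped n-gram matches: each candidate n-gram counts at most
        # as often as it appears in the reference.
        overlap = sum(min(count, ref[gram]) for gram, count in cand.items())
        total = max(sum(cand.values()), 1)
        if overlap == 0:
            return 0.0  # a zero precision drives the geometric mean to zero
        precisions.append(overlap / total)
    # Brevity penalty: penalize candidates shorter than the reference.
    if len(candidate) > len(reference):
        bp = 1.0
    else:
        bp = math.exp(1 - len(reference) / max(len(candidate), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

A perfect match scores 1.0, and a candidate sharing no n-grams with the reference scores 0.0; multiplying by 100 gives the scale used in the table below.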
Results
Performance results of various models on this benchmark
| Model | BLEU | Paper Title |
|---|---|---|
| MASS (6-layer Transformer) | 35.2 | MASS: Masked Sequence to Sequence Pre-training for Language Generation |
| MLM pretraining for encoder and decoder | 33.3 | Cross-lingual Language Model Pretraining |
| GPT-3 175B (Few-Shot) | 21 | Language Models are Few-Shot Learners |