Machine Translation on WMT 2014 French-English
Metrics
BLEU score
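BLEU measures n-gram overlap between a system translation and a reference, scaled by a brevity penalty. As an illustrative sketch only (official WMT evaluations use corpus-level BLEU with standardized tokenization, e.g. via sacreBLEU), a minimal sentence-level version with uniform weights looks like this:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU: modified n-gram precision + brevity penalty.

    Simplified sketch for illustration; not a drop-in replacement for
    the standardized corpus-level BLEU used in WMT reporting.
    """
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clipped overlap: each candidate n-gram counts at most as often
        # as it appears in the reference
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        # Floor at a tiny value so log() is defined when nothing matches
        precisions.append(max(overlap, 1e-9) / total)
    # Brevity penalty punishes candidates shorter than the reference
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

A perfect match scores 1.0; published scores such as those below are corpus-level BLEU multiplied by 100.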
Results
BLEU scores of models evaluated on the WMT 2014 French-English translation benchmark.
Comparison Table
| Model Name | BLEU score |
|---|---|
| unsupervised-statistical-machine-translation | 25.87 |
| finetuned-language-models-are-zero-shot | 37.9 |
| finetuned-language-models-are-zero-shot | 35.9 |