Machine Translation on WMT2014 English-French
Evaluation metric
BLEU score
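BLEU scores like those in the table below are the geometric mean of modified n-gram precisions (typically n = 1..4) multiplied by a brevity penalty that punishes overly short translations. Official leaderboard numbers come from tools such as multi-bleu or sacrebleu, with specific tokenization choices; as a rough illustration only, here is a minimal single-sentence, single-reference sketch in plain Python (the function name, whitespace tokenization, and single-reference restriction are my own simplifications):

```python
from collections import Counter
import math

def bleu(candidate: str, reference: str, max_n: int = 4) -> float:
    """Simplified sentence-level BLEU: geometric mean of modified
    n-gram precisions (n = 1..max_n) times a brevity penalty.
    Tokenization is naive whitespace splitting."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        # Clipped n-gram counts: candidate n-grams capped by reference counts.
        cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        overlap = sum((cand_ngrams & ref_ngrams).values())
        total = max(sum(cand_ngrams.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0  # real implementations often smooth instead of returning 0
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty: 1 if candidate is at least reference length.
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * geo_mean
```

A perfect match scores 1.0; reported scores are this value scaled by 100 (so 43.1 in the table means BLEU 0.431 under this convention). Production evaluation should use sacrebleu for comparable, reproducible numbers.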
Evaluation results
Performance of each model on this benchmark
Comparison table
Model | BLEU score |
---|---|
learning-phrase-representations-using-rnn | 34.54 |
pay-less-attention-with-lightweight-and | 43.1 |
can-active-memory-replace-attention | 26.4 |
attention-is-all-you-need | 41.0 |
the-best-of-both-worlds-combining-recent | 41.0 |
deep-recurrent-models-with-fast-forward | 35.9 |
attention-is-all-you-need | 38.1 |
outrageously-large-neural-networks-the | 40.56 |
sequence-to-sequence-learning-with-neural | 34.8 |
neural-machine-translation-by-jointly | 36.2 |
very-deep-transformers-for-neural-machine | 46.4 |
resmlp-feedforward-networks-for-image | 40.6 |
understanding-back-translation-at-scale | 45.6 |
random-feature-attention-1 | 39.2 |
phrase-based-neural-unsupervised-machine | 28.11 |
autodropout-learning-dropout-patterns-to | 40 |
convolutional-sequence-to-sequence-learning | 41.3 |
Model 18 | 37 |
self-attention-with-relative-position | 41.5 |
unsupervised-neural-machine-translation | 14.36 |
finetuning-pretrained-transformers-into-rnns | 42.1 |
deep-recurrent-models-with-fast-forward | 39.2 |
unsupervised-statistical-machine-translation | 26.22 |
the-evolved-transformer | 41.3 |
phrase-based-neural-unsupervised-machine | 27.6 |
finetuned-language-models-are-zero-shot | 33.8 |
exploring-the-limits-of-transfer-learning | 43.4 |
pay-less-attention-with-lightweight-and | 43.2 |
a-convolutional-encoder-model-for-neural | 35.7 |
finetuned-language-models-are-zero-shot | 33.9 |
scaling-neural-machine-translation | 43.2 |
1905.06596 | 43.3 |
fast-and-simple-mixture-of-softmaxes-with-bpe | 42.1 |
recurrent-neural-network-regularization | 29.03 |
convolutional-sequence-to-sequence-learning | 40.46 |
learning-to-encode-position-for-transformer | 42.7 |
sequence-to-sequence-learning-with-neural | 36.5 |
weighted-transformer-network-for-machine | 41.4 |
omninet-omnidirectional-representations-from | 42.6 |
time-aware-large-kernel-convolutions | 43.2 |
deliberation-networks-sequence-generation | 41.5 |
phrase-based-neural-unsupervised-machine | 25.14 |
understanding-the-difficulty-of-training | 43.8 |
googles-neural-machine-translation-system | 39.9 |
lite-transformer-with-long-short-range | 39.6 |
depth-growing-for-neural-machine-translation | 43.27 |
resmlp-feedforward-networks-for-image | 40.3 |
addressing-the-rare-word-problem-in-neural | 37.5 |
incorporating-bert-into-neural-machine-1 | 43.78 |
r-drop-regularized-dropout-for-neural | 43.95 |
the-evolved-transformer | 40.6 |
pre-training-multilingual-neural-machine | 44.3 |
very-deep-transformers-for-neural-machine | 43.8 |
memory-efficient-adaptive-optimization-for | 40.5 |
muse-parallel-multi-scale-attention-for | 43.5 |
hat-hardware-aware-transformers-for-efficient | 41.8 |
synthesizer-rethinking-self-attention-in | 41.85 |