Machine Translation on WMT2014 English-French

Metrics

BLEU score
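
All results below are corpus-level BLEU scores on the WMT2014 English-French test set. As a quick illustration of how this metric is computed in practice, here is a minimal sketch using the sacrebleu library; the hypothesis and reference sentences are made-up placeholders, not WMT data, and the exact evaluation setup varies between the papers listed below.

```python
# Minimal sketch: corpus-level BLEU with the sacrebleu library.
# The sentences are illustrative placeholders, not WMT2014 data.
import sacrebleu

hypotheses = ["the cat sat on the mat"]           # system translations
references = [["the cat is sitting on the mat"]]  # one inner list per reference set

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")  # score is on a 0-100 scale
```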

Results

Performance results of various models on this benchmark

Comparison table

Model name | BLEU score
learning-phrase-representations-using-rnn | 34.54
pay-less-attention-with-lightweight-and | 43.1
can-active-memory-replace-attention | 26.4
attention-is-all-you-need | 41.0
the-best-of-both-worlds-combining-recent | 41.0
deep-recurrent-models-with-fast-forward | 35.9
attention-is-all-you-need | 38.1
outrageously-large-neural-networks-the | 40.56
sequence-to-sequence-learning-with-neural | 34.8
neural-machine-translation-by-jointly | 36.2
very-deep-transformers-for-neural-machine | 46.4
resmlp-feedforward-networks-for-image | 40.6
understanding-back-translation-at-scale | 45.6
random-feature-attention-1 | 39.2
phrase-based-neural-unsupervised-machine | 28.11
autodropout-learning-dropout-patterns-to | 40
convolutional-sequence-to-sequence-learning | 41.3
Model 18 | 37
self-attention-with-relative-position | 41.5
unsupervised-neural-machine-translation | 14.36
finetuning-pretrained-transformers-into-rnns | 42.1
deep-recurrent-models-with-fast-forward | 39.2
unsupervised-statistical-machine-translation | 26.22
the-evolved-transformer | 41.3
phrase-based-neural-unsupervised-machine | 27.6
finetuned-language-models-are-zero-shot | 33.8
exploring-the-limits-of-transfer-learning | 43.4
pay-less-attention-with-lightweight-and | 43.2
a-convolutional-encoder-model-for-neural | 35.7
finetuned-language-models-are-zero-shot | 33.9
scaling-neural-machine-translation | 43.2
190506596 | 43.3
fast-and-simple-mixture-of-softmaxes-with-bpe | 42.1
recurrent-neural-network-regularization | 29.03
convolutional-sequence-to-sequence-learning | 40.46
learning-to-encode-position-for-transformer | 42.7
sequence-to-sequence-learning-with-neural | 36.5
weighted-transformer-network-for-machine | 41.4
omninet-omnidirectional-representations-from | 42.6
time-aware-large-kernel-convolutions | 43.2
deliberation-networks-sequence-generation | 41.5
phrase-based-neural-unsupervised-machine | 25.14
understanding-the-difficulty-of-training | 43.8
googles-neural-machine-translation-system | 39.9
lite-transformer-with-long-short-range | 39.6
depth-growing-for-neural-machine-translation | 43.27
resmlp-feedforward-networks-for-image | 40.3
addressing-the-rare-word-problem-in-neural | 37.5
incorporating-bert-into-neural-machine-1 | 43.78
r-drop-regularized-dropout-for-neural | 43.95
the-evolved-transformer | 40.6
pre-training-multilingual-neural-machine | 44.3
very-deep-transformers-for-neural-machine | 43.8
memory-efficient-adaptive-optimization-for | 40.5
muse-parallel-multi-scale-attention-for | 43.5
hat-hardware-aware-transformers-for-efficient | 41.8
synthesizer-rethinking-self-attention-in | 41.85