Machine Translation on WMT2014 English-French

Evaluation Metric

BLEU score
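
The leaderboard reports corpus-level BLEU on the WMT2014 English-French test set. As a minimal sketch of how such a score is computed, the snippet below uses the sacrebleu library (a standard WMT scoring tool); the example sentences and variable names are illustrative only, not taken from this benchmark.

```python
# Minimal sketch: corpus-level BLEU with sacrebleu.
# The hypotheses/references below are hypothetical examples.
import sacrebleu

# System outputs: one translated sentence per entry.
hypotheses = [
    "the cat sat on the mat",
    "there is a cat on the mat",
]

# References: one list per reference set, aligned index-by-index
# with the hypotheses.
references = [[
    "the cat sat on the mat",
    "the cat is on the mat",
]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
# sacrebleu reports BLEU on the same 0-100 scale used in the table below.
print(f"BLEU = {bleu.score:.2f}")
```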

Evaluation Results

Performance of the various models on this benchmark

| Model Name | BLEU score | Paper Title |
|---|---|---|
| CSLM + RNN + WP | 34.54 | Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation |
| LightConv | 43.1 | Pay Less Attention with Lightweight and Dynamic Convolutions |
| GRU+Attention | 26.4 | Can Active Memory Replace Attention? |
| Transformer Big | 41.0 | Attention Is All You Need |
| RNMT+ | 41.0 | The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation |
| Deep-Att | 35.9 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation |
| Transformer Base | 38.1 | Attention Is All You Need |
| MoE | 40.56 | Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer |
| LSTM | 34.8 | Sequence to Sequence Learning with Neural Networks |
| RNN-search50* | 36.2 | Neural Machine Translation by Jointly Learning to Align and Translate |
| Transformer+BT (ADMIN init) | 46.4 | Very Deep Transformers for Neural Machine Translation |
| ResMLP-12 | 40.6 | ResMLP: Feedforward networks for image classification with data-efficient training |
| Noisy back-translation | 45.6 | Understanding Back-Translation at Scale |
| Rfa-Gate-arccos | 39.2 | Random Feature Attention |
| Unsupervised PBSMT | 28.11 | Phrase-Based & Neural Unsupervised Machine Translation |
| TransformerBase + AutoDropout | 40 | AutoDropout: Learning Dropout Patterns to Regularize Deep Networks |
| ConvS2S (ensemble) | 41.3 | Convolutional Sequence to Sequence Learning |
| PBMT | 37 | - |
| Transformer (big) + Relative Position Representations | 41.5 | Self-Attention with Relative Position Representations |
| Unsupervised attentional encoder-decoder + BPE | 14.36 | Unsupervised Neural Machine Translation |