Machine Translation on IWSLT2014 German-English
Metrics
BLEU score
Results
Performance results of various models on this benchmark
Comparison table

Model name | BLEU score |
---|---|
attention-is-all-you-need | 34.44 |
random-feature-attention-1 | 34.4 |
time-aware-large-kernel-convolutions | 35.5 |
bi-simcut-a-simple-strategy-for-boosting-1 | 38.37 |
mask-attention-networks-rethinking-and | 36.3 |
wide-minima-density-hypothesis-and-the | 37.78 |
non-autoregressive-translation-by-learning | 31.15 |
classical-structured-prediction-losses-for | 32.84 |
bert-mbert-or-bibert-a-study-on | 38.61 |
pay-less-attention-with-lightweight-and | 34.8 |
autodropout-learning-dropout-patterns-to | 35.8 |
pay-less-attention-with-lightweight-and | 35.2 |
r-drop-regularized-dropout-for-neural | 37.25 |
guidelines-for-the-regularization-of-gammas | 35.1385 |
tag-less-back-translation | 28.83 |
190506596 | 35.7 |
r-drop-regularized-dropout-for-neural | 37.90 |
data-diversification-an-elegant-strategy-for | 37.2 |
unidrop-a-simple-yet-effective-technique-to | 36.88 |
an-actor-critic-algorithm-for-sequence | 28.53 |
a-simple-but-tough-to-beat-data-augmentation | 37.6 |
bi-simcut-a-simple-strategy-for-boosting-1 | 37.81 |
deterministic-reversible-data-augmentation | 37.95 |
towards-neural-phrase-based-machine | 30.08 |
latent-alignment-and-variational-attention | 33.1 |
cipherdaug-ciphertext-based-data-augmentation | 37.53 |
sequence-generation-with-mixed | 36.41 |
muse-parallel-multi-scale-attention-for | 36.3 |
relaxed-attention-for-transformer-models | 37.96 |
multi-branch-attentive-transformer | 36.22 |
rethinking-perturbations-in-encoder-decoders | 36.22 |
integrating-pre-trained-language-model-into | 40.43 |
autoregressive-knowledge-distillation-through | 35.4 |
delight-very-deep-and-light-weight | 35.3 |