HyperAI
HyperAI超神経
Machine Translation On Wmt2016 English 1
Evaluation Metric
BLEU score
Evaluation Results
Performance of each model on this benchmark:
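All entries on this leaderboard are ranked by BLEU, which scores a hypothesis translation by its clipped n-gram precision against a reference, scaled by a brevity penalty. The sketch below is a minimal, self-contained reimplementation for illustration only; published WMT scores come from the official scoring scripts (e.g. sacreBLEU), whose tokenization and smoothing this simplified version does not reproduce.

```python
# Minimal corpus-level BLEU sketch: uniform weights over 1..4-gram
# precisions, plus a brevity penalty. Illustrative only -- not the
# official WMT/sacreBLEU scorer used for the numbers in this table.
import math
from collections import Counter


def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))


def bleu(references, hypotheses, max_n=4):
    """references, hypotheses: parallel lists of token lists (one reference
    per hypothesis). Returns BLEU on the usual 0-100 scale."""
    clipped = [0] * max_n   # clipped n-gram matches, per order
    totals = [0] * max_n    # total hypothesis n-grams, per order
    hyp_len = ref_len = 0
    for ref, hyp in zip(references, hypotheses):
        hyp_len += len(hyp)
        ref_len += len(ref)
        for n in range(1, max_n + 1):
            hyp_ngr = ngrams(hyp, n)
            ref_ngr = ngrams(ref, n)
            # Clip each hypothesis n-gram count by its count in the reference.
            clipped[n - 1] += sum(min(c, ref_ngr[g]) for g, c in hyp_ngr.items())
            totals[n - 1] += sum(hyp_ngr.values())
    if min(clipped) == 0:
        return 0.0  # some n-gram order has zero matches
    log_precision = sum(math.log(c / t) for c, t in zip(clipped, totals)) / max_n
    brevity = min(1.0, math.exp(1 - ref_len / hyp_len))
    return 100 * brevity * math.exp(log_precision)
```

A perfect match scores 100.0; a hypothesis sharing no 4-grams with the reference scores 0.0 under this (unsmoothed) variant, which is one reason production scorers apply smoothing.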
| Model | BLEU score | Paper Title |
| --- | --- | --- |
| DeLighT | 34.7 | DeLighT: Deep and Light-weight Transformer |
| CMLM+LAT+4 iterations | 32.87 | Incorporating a Local Translation Mechanism into Non-autoregressive Translation |
| FlowSeq-large (NPD n = 30) | 32.35 | FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow |
| FlowSeq-large (NPD n = 15) | 31.97 | FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow |
| FlowSeq-large (IWD n = 15) | 31.08 | FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow |
| CMLM+LAT+1 iterations | 30.74 | Incorporating a Local Translation Mechanism into Non-autoregressive Translation |
| ConvS2S BPE40k | 29.9 | Convolutional Sequence to Sequence Learning |
| FlowSeq-large | 29.86 | FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow |
| NAT +FT + NPD | 29.79 | Non-Autoregressive Neural Machine Translation |
| Denoising autoencoders (non-autoregressive) | 29.66 | Deterministic Non-Autoregressive Neural Sequence Modeling by Iterative Refinement |
| FlowSeq-base | 29.26 | FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow |
| GRU BPE90k | 28.9 | - |
| BiGRU | 28.1 | Edinburgh Neural Machine Translation Systems for WMT 16 |
| Deep Convolutional Encoder; single-layer decoder | 27.8 | A Convolutional Encoder Model for Neural Machine Translation |
| BiLSTM | 27.5 | A Convolutional Encoder Model for Neural Machine Translation |
| PBSMT + NMT | 25.13 | Phrase-Based & Neural Unsupervised Machine Translation |
| Unsupervised PBSMT | 21.33 | Phrase-Based & Neural Unsupervised Machine Translation |
| Unsupervised NMT + Transformer | 21.18 | Phrase-Based & Neural Unsupervised Machine Translation |
| FLAN 137B (few-shot, k=9) | 20.5 | Finetuned Language Models Are Zero-Shot Learners |
| FLAN 137B (zero-shot) | 18.9 | Finetuned Language Models Are Zero-Shot Learners |