HyperAI
Machine Translation On Wmt2016 English 1
Evaluation Metric
BLEU score
Evaluation Results
Performance of each model on this benchmark
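The leaderboard ranks models by BLEU. As a rough illustration only (not the exact scorer used for these reported numbers, which typically comes from a tool such as sacrebleu with its own tokenization), a minimal single-reference BLEU can be sketched in pure Python:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Return a Counter of the n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypothesis, reference, max_n=4):
    """Simplified single-reference BLEU (0-100).

    Geometric mean of modified n-gram precisions for n = 1..max_n,
    multiplied by the brevity penalty. Tokens are whitespace-split.
    """
    hyp, ref = hypothesis.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams = ngrams(hyp, n)
        ref_ngrams = ngrams(ref, n)
        # Clipped counts: each hypothesis n-gram is credited at most
        # as many times as it occurs in the reference.
        overlap = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
        total = max(sum(hyp_ngrams.values()), 1)
        if overlap == 0:
            return 0.0  # any zero precision zeroes the geometric mean
        log_precisions.append(math.log(overlap / total))
    # Brevity penalty: penalize hypotheses shorter than the reference.
    bp = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / len(hyp))
    return 100 * bp * math.exp(sum(log_precisions) / max_n)

score = bleu("the cat sat on the mat", "the cat sat on the mat")
# identical sentences score 100
```

Real evaluation scripts add smoothing for short segments and corpus-level aggregation, so scores from this sketch are not directly comparable to the table below.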
| Model Name | BLEU score | Paper Title | Repository |
|---|---|---|---|
| DeLighT | 34.7 | DeLighT: Deep and Light-weight Transformer | - |
| CMLM+LAT+4 iterations | 32.87 | Incorporating a Local Translation Mechanism into Non-autoregressive Translation | - |
| FlowSeq-large (NPD n = 30) | 32.35 | FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow | - |
| FlowSeq-large (NPD n = 15) | 31.97 | FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow | - |
| FlowSeq-large (IWD n = 15) | 31.08 | FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow | - |
| CMLM+LAT+1 iterations | 30.74 | Incorporating a Local Translation Mechanism into Non-autoregressive Translation | - |
| ConvS2S BPE40k | 29.9 | Convolutional Sequence to Sequence Learning | - |
| FlowSeq-large | 29.86 | FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow | - |
| NAT + FT + NPD | 29.79 | Non-Autoregressive Neural Machine Translation | - |
| Denoising autoencoders (non-autoregressive) | 29.66 | Deterministic Non-Autoregressive Neural Sequence Modeling by Iterative Refinement | - |
| FlowSeq-base | 29.26 | FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow | - |
| GRU BPE90k | 28.9 | - | - |
| BiGRU | 28.1 | Edinburgh Neural Machine Translation Systems for WMT 16 | - |
| Deep Convolutional Encoder; single-layer decoder | 27.8 | A Convolutional Encoder Model for Neural Machine Translation | - |
| BiLSTM | 27.5 | A Convolutional Encoder Model for Neural Machine Translation | - |
| PBSMT + NMT | 25.13 | Phrase-Based & Neural Unsupervised Machine Translation | - |
| Unsupervised PBSMT | 21.33 | Phrase-Based & Neural Unsupervised Machine Translation | - |
| Unsupervised NMT + Transformer | 21.18 | Phrase-Based & Neural Unsupervised Machine Translation | - |
| FLAN 137B (few-shot, k=9) | 20.5 | Finetuned Language Models Are Zero-Shot Learners | - |
| FLAN 137B (zero-shot) | 18.9 | Finetuned Language Models Are Zero-Shot Learners | - |