Machine Translation on WMT2016 English-German
Evaluation metric: BLEU score
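BLEU measures n-gram overlap between a system translation and one or more references; the numbers in the table below are corpus-level BLEU on a 0-100 scale. As a minimal illustration, assuming the sacrebleu package (the de facto standard tool for WMT evaluation; the toy sentences are invented, not WMT2016 data), a score can be computed like this:

```python
# Minimal corpus-level BLEU sketch using sacrebleu (pip install sacrebleu).
# The sentences below are toy examples, not WMT2016 data.
import sacrebleu

hypotheses = [
    "The cat sat on the mat .",
    "There is a book on the table .",
]
references = [  # one reference stream, line-aligned with the hypotheses
    ["The cat sat on the mat .", "A book is on the table ."],
]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")  # 0-100 scale, as in the leaderboard
```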
Evaluation results
Performance of each model on this benchmark:
| Model Name | BLEU score | Paper Title | Repository |
| --- | --- | --- | --- |
| MADL | 40.68 | Multi-Agent Dual Learning | - |
| Attentional encoder-decoder + BPE | 34.2 | Edinburgh Neural Machine Translation Systems for WMT 16 | - |
| Linguistic Input Features | 28.4 | Linguistic Input Features Improve Neural Machine Translation | - |
| DeLighT | 28.0 | DeLighT: Deep and Light-weight Transformer | - |
| FLAN 137B (zero-shot) | 27.0 | Finetuned Language Models Are Zero-Shot Learners | - |
| Transformer | 26.7 | On the adequacy of untuned warmup for adaptive optimization | - |
| FLAN 137B (few-shot, k=11) | 26.1 | Finetuned Language Models Are Zero-Shot Learners | - |
| BiRNN + GCN (Syn + Sem) | 24.9 | Exploiting Semantics in Neural Machine Translation with Graph Convolutional Networks | - |
| SMT + iterative backtranslation (unsupervised) | 18.23 | Unsupervised Statistical Machine Translation | - |
| Unsupervised NMT + weight-sharing | 10.86 | Unsupervised Neural Machine Translation with Weight Sharing | - |
| Unsupervised S2S with attention | 9.64 | Unsupervised Machine Translation Using Monolingual Corpora Only | - |
| Exploiting Mono at Scale (single) | - | Exploiting Monolingual Data at Scale for Neural Machine Translation | - |
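To reproduce a number in the BLEU column for a real system, the usual workflow is to score detokenized system output against the newstest2016 English-German references with sacrebleu. A minimal sketch follows; the file names are hypothetical placeholders, and the references must be obtained separately (for example via the sacrebleu CLI: `sacrebleu -t wmt16 -l en-de --echo ref`):

```python
# Score a system's detokenized output against the newstest2016 references.
# "system_output.de" and "newstest2016.ref.de" are hypothetical file names.
import sacrebleu

with open("system_output.de", encoding="utf-8") as f:
    hyps = [line.rstrip("\n") for line in f]
with open("newstest2016.ref.de", encoding="utf-8") as f:
    refs = [line.rstrip("\n") for line in f]

assert len(hyps) == len(refs), "output and reference must be line-aligned"

bleu = sacrebleu.corpus_bleu(hyps, [refs])
print(f"BLEU = {bleu.score:.2f}")  # comparable to the BLEU column above
```

Note that older entries in the table may have been computed with different tokenization or casing conventions, so small deviations from sacrebleu's defaults (13a tokenizer, cased) are to be expected.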