Multimodal Machine Translation On Multi30K
Metric: BLEU (DE-EN)
Results
Performance of various models on this benchmark.
| Model Name | BLEU (DE-EN) | Paper Title |
|---|---|---|
| PS-KD | 32.3 | Self-Knowledge Distillation with Progressive Refinement of Targets |
| ERNIE-UniX2 | - | ERNIE-UniX2: A Unified Cross-lingual Cross-modal Framework for Understanding and Generation |
| del | - | Distilling Translations with Visual Awareness |
| IKD-MMT | - | Distill the Image to Nowhere: Inversion Knowledge Distillation for Multimodal Machine Translation |
| Multimodal Transformer | - | Multimodal Transformer for Multimodal Machine Translation |
| Caglayan | - | Multimodal Machine Translation through Visuals and Speech |
| ImagiT | - | Generative Imagination Elevates Machine Translation |
| NMTSRC+IMG | - | Doubly-Attentive Decoder for Multi-modal Neural Machine Translation |
| IMGD | - | Incorporating Global Visual Features into Attention-Based Neural Machine Translation |
| del+obj | - | Distilling Translations with Visual Awareness |
| VAG-NMT | - | A Visual Attention Grounding Neural Model for Multimodal Machine Translation |
| DCCN | - | Dynamic Context-guided Capsule Network for Multimodal Machine Translation |
| Transformer | 29.0 | Attention Is All You Need |
| Gumbel-Attention MMT | - | Gumbel-Attention for Multi-modal Machine Translation |
| VMMTF | - | Latent Variable Model for Multi-modal Translation |
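The metric reported above is BLEU: a geometric mean of modified n-gram precisions multiplied by a brevity penalty. The sketch below is a minimal, unsmoothed sentence-level implementation for illustration only; the listed papers typically report corpus-level BLEU via tools such as multi-bleu.perl or sacrebleu, whose tokenization and smoothing choices affect the exact scores.

```python
import math
from collections import Counter


def ngrams(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))


def bleu(hypothesis, reference, max_n=4):
    """Simplified sentence-level BLEU (0-100): geometric mean of modified
    n-gram precisions (n = 1..max_n) times a brevity penalty. No smoothing,
    so any zero n-gram overlap yields 0."""
    hyp, ref = hypothesis.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams = ngrams(hyp, n)
        ref_ngrams = ngrams(ref, n)
        overlap = sum((hyp_ngrams & ref_ngrams).values())  # clipped matches
        total = max(sum(hyp_ngrams.values()), 1)
        if overlap == 0:
            return 0.0
        precisions.append(overlap / total)
    # Brevity penalty: penalize hypotheses shorter than the reference.
    bp = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return 100 * bp * math.exp(sum(math.log(p) for p in precisions) / max_n)


# An exact match scores 100; a disjoint hypothesis scores 0.
print(bleu("a cat sits on the mat", "a cat sits on the mat"))
print(bleu("totally unrelated output text here", "a cat sits on the mat"))
```

Scores in the table (e.g. PS-KD's 32.3) are corpus-level averages over the Multi30K test set, not single-sentence values.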
Multimodal Machine Translation On Multi30K | SOTA | HyperAI