Speech-to-Text Translation on MuST-C En-De
Metrics
Case-sensitive sacreBLEU
Results
Performance results of various models on this benchmark
Model Name | Case-sensitive sacreBLEU | Paper Title | Repository |
---|---|---|---|
Transformer with Adapters | 24.63 | Lightweight Adapter Tuning for Multilingual Speech Translation | - |
Transformer + Meta Learning (ASR/MT) + Data Augmentation | 27.51 | End-to-End Offline Speech Translation System for IWSLT 2020 using Modality Agnostic Meta-Learning | - |
Speechformer | 23.6 | Speechformer: Reducing Information Loss in Direct Speech Translation | - |
Task Modulation + Multitask Learning (ASR/MT) + Data Augmentation | 28.88 | Task Aware Multi-Task Learning for Speech to Text Tasks | - |
Dual-decoder Transformer | 23.63 | Dual-decoder Transformer for Joint Automatic Speech Recognition and Multilingual Speech Translation | - |
Transformer + ASR Pretrain | 22.7 | fairseq S2T: Fast Speech-to-Text Modeling with fairseq | - |
Transformer + ASR Pretrain | 22.8 | NeurST: Neural Speech Translation Toolkit | - |
Wav2Vec 2.0 + mBART + Adapters | 28.22 | End-to-End Speech Translation with Pre-trained Models and Adapters: UPC at IWSLT 2021 | - |