Named Entity Recognition on SLUE
Metrics
F1 (%)
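The F1 metric above is the harmonic mean of precision and recall over predicted vs. gold entity spans. A minimal sketch (the span representation as `(type, start, end)` tuples is an illustrative assumption, not the benchmark's exact scoring code):

```python
def f1_score(gold, pred):
    # Illustrative entity-level F1: spans are hashable (type, start, end) tuples.
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)              # exact-match true positives
    if tp == 0:
        return 0.0
    precision = tp / len(pred)         # fraction of predictions that are correct
    recall = tp / len(gold)            # fraction of gold entities recovered
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: 1 of 2 predictions correct, 1 of 3 gold entities found.
gold = [("PERSON", 0, 2), ("ORG", 5, 7), ("LOC", 9, 10)]
pred = [("PERSON", 0, 2), ("ORG", 5, 6)]
print(round(f1_score(gold, pred) * 100, 1))  # F1 as a percentage, as in the table
```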
Results
Performance results of various models on this benchmark
Comparison table
| Model name | F1 (%) |
|---|---|
| wav2seq-pre-training-speech-to-text-encoder | 65.4 |
| slue-new-benchmark-tasks-for-spoken-language | 69.6 |
| slue-new-benchmark-tasks-for-spoken-language | 49.5 |
| slue-new-benchmark-tasks-for-spoken-language | 61.9 |
| slue-new-benchmark-tasks-for-spoken-language | 50.2 |
| slue-new-benchmark-tasks-for-spoken-language | 64.8 |
| slue-new-benchmark-tasks-for-spoken-language | 57.8 |
| slue-new-benchmark-tasks-for-spoken-language | 61.8 |
| slue-new-benchmark-tasks-for-spoken-language | 49.8 |
| slue-new-benchmark-tasks-for-spoken-language | 68.0 |
| slue-new-benchmark-tasks-for-spoken-language | 50.9 |
| slue-new-benchmark-tasks-for-spoken-language | 47.9 |
| slue-new-benchmark-tasks-for-spoken-language | 63.4 |