
Named Entity Recognition on SLUE

Metrics

F1 (%)

Results

Performance results of various models on this benchmark

Comparison table
Model name                                      | F1 (%)
wav2seq-pre-training-speech-to-text-encoder     | 65.4
slue-new-benchmark-tasks-for-spoken-language    | 69.6
slue-new-benchmark-tasks-for-spoken-language    | 49.5
slue-new-benchmark-tasks-for-spoken-language    | 61.9
slue-new-benchmark-tasks-for-spoken-language    | 50.2
slue-new-benchmark-tasks-for-spoken-language    | 64.8
slue-new-benchmark-tasks-for-spoken-language    | 57.8
slue-new-benchmark-tasks-for-spoken-language    | 61.8
slue-new-benchmark-tasks-for-spoken-language    | 49.8
slue-new-benchmark-tasks-for-spoken-language    | 68.0
slue-new-benchmark-tasks-for-spoken-language    | 50.9
slue-new-benchmark-tasks-for-spoken-language    | 47.9
slue-new-benchmark-tasks-for-spoken-language    | 63.4