Spoken Language Understanding on Fluent Speech Commands
Metrics
Accuracy (%)
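
On Fluent Speech Commands, accuracy is typically reported as full-intent accuracy: a prediction counts as correct only if every slot of the utterance's intent (action, object, location) matches the reference. The sketch below shows this metric in minimal form, assuming intents are represented as slot dictionaries; the slot names and example data are illustrative, not taken from any of the listed papers.

```python
# Minimal sketch of full-intent accuracy as commonly reported on
# Fluent Speech Commands. Slot names and data are illustrative assumptions.
from typing import Dict, List

Intent = Dict[str, str]  # e.g. {"action": ..., "object": ..., "location": ...}

def intent_accuracy(predictions: List[Intent], references: List[Intent]) -> float:
    """Percentage of utterances whose predicted slots all match the reference."""
    assert len(predictions) == len(references) and references
    correct = sum(p == r for p, r in zip(predictions, references))
    return 100.0 * correct / len(references)

# Hypothetical usage: the second prediction misses one slot, so it scores 0.
preds = [{"action": "activate", "object": "lights", "location": "kitchen"},
         {"action": "deactivate", "object": "music", "location": "none"}]
refs  = [{"action": "activate", "object": "lights", "location": "kitchen"},
         {"action": "deactivate", "object": "heat", "location": "none"}]
print(f"{intent_accuracy(preds, refs):.1f}%")  # -> 50.0%
```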
Results
Accuracy achieved by each model on this benchmark, as reported in the corresponding papers.
Comparison Table
| Model Name | Accuracy (%) |
| --- | --- |
| finstreder-simple-and-fast-spoken-language | 99.7 |
| finstreder-simple-and-fast-spoken-language | 99.8 |
| finstreder-simple-and-fast-spoken-language | 99.5 |
| integration-of-pre-trained-networks-with | 99.7 |
| do-we-still-need-automatic-speech-recognition | 99.6 |
| sequential-end-to-end-intent-and-slot-label | 99.3 |
| two-stage-textual-knowledge-distillation-to | 99.7 |
| universlu-universal-spoken-language | 99.8 |
| finstreder-simple-and-fast-spoken-language | 98.7 |
| finstreder-simple-and-fast-spoken-language | 99.2 |
| speech-language-pre-training-for-end-to-end | 99.7 |
| improving-end-to-end-speech-to-intent | 99.2 |
| speech-model-pre-training-for-end-to-end | 98.8 |
| fans-fusing-asr-and-nlu-for-on-device-slu | 99.0 |
| speechprompt-v2-prompt-tuning-for-speech | 98.2 |
| end-to-end-spoken-language-understanding-for | 99.4 |
| exploring-transfer-learning-for-end-to-end | 99.5 |