Crowdsourced Text Aggregation on CrowdSpeech
Evaluation Metric
Word Error Rate (WER)
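WER is the word-level edit distance between a hypothesis transcription and the reference, divided by the reference length. Below is a minimal sketch of this computation, assuming simple whitespace tokenization; the benchmark's actual evaluation script may apply additional text normalization (casing, punctuation) before scoring.

```python
# Minimal WER sketch: word-level Levenshtein distance / reference length.
# Assumes whitespace tokenization; not the benchmark's official scoring code.

def wer(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming table for word-level edit distance.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,        # deletion
                d[i][j - 1] + 1,        # insertion
                d[i - 1][j - 1] + cost, # substitution or match
            )
    return d[len(ref)][len(hyp)] / len(ref)

# Example: one substitution ("sit" for "sat") and one deletion ("the")
# over a 6-word reference -> WER = 2/6 ~= 0.33
print(wer("the cat sat on the mat", "the cat sit on mat"))
```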
Evaluation Results
Performance of each model on this benchmark:
| Model Name | Word Error Rate (WER) | Paper Title | Repository |
|---|---|---|---|
| ROVER | 7.29 | CrowdSpeech and VoxDIY: Benchmark Datasets for Crowdsourced Audio Transcription | |
| RASA | 8.6 | CrowdSpeech and VoxDIY: Benchmark Datasets for Crowdsourced Audio Transcription | |
| HRRASA | 8.59 | CrowdSpeech and VoxDIY: Benchmark Datasets for Crowdsourced Audio Transcription | |