Crowdsourced Text Aggregation on CrowdSpeech
Metrics
Word Error Rate (WER)
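WER is the word-level edit distance (substitutions, deletions, insertions) between the aggregated transcription and the reference, normalized by the reference length; the values below are reported as percentages. As a reference point only, here is a minimal Python sketch of the standard computation; the function name and example strings are illustrative and not taken from the benchmark code.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by the reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = minimum edits to turn ref[:i] into hyp[:j].
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,       # deletion
                d[i][j - 1] + 1,       # insertion
                d[i - 1][j - 1] + sub, # substitution or match
            )
    return d[len(ref)][len(hyp)] / max(len(ref), 1)


# Example: one substitution ("sat" -> "sit") and one deletion ("the")
# against a 6-word reference gives WER = 2/6 ≈ 0.33 (33%).
print(word_error_rate("the cat sat on the mat", "the cat sit on mat"))
```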
Results
Performance of different models on this benchmark:
Model Name | Word Error Rate (WER, %) | Paper Title | Repository
---|---|---|---
ROVER | 7.29 | CrowdSpeech and VoxDIY: Benchmark Datasets for Crowdsourced Audio Transcription | |
RASA | 8.6 | CrowdSpeech and VoxDIY: Benchmark Datasets for Crowdsourced Audio Transcription | |
HRRASA | 8.59 | CrowdSpeech and VoxDIY: Benchmark Datasets for Crowdsourced Audio Transcription | |