Sentiment Analysis on SLUE
Evaluation Results
Performance of each model on this benchmark
Model Name | Paper Title | Repository |
---|---|---|
W2V2-L-LL60K (pipeline approach) | SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech | |
W2V2-L-LL60K (pipeline approach, uses LM) | SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech | |
W2V2-B-LS960 (pipeline approach, uses LM) | SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech | |
HuBERT-B-LS960 (e2e approach) | SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech | |
W2V2-B-LS960 (pipeline approach) | SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech | |
W2V2-B-LS960 (e2e approach) | SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech | |
W2V2-L-LL60K (e2e approach) | SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech | |
W2V2-B-VP100K (e2e approach) | SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech | |