Speech Separation On Wham
Evaluation Metric
SI-SDRi
Evaluation Results
Performance results of each model on this benchmark
| Model | SI-SDRi (dB) | Paper Title | Repository |
| --- | --- | --- | --- |
| TDANet Large | 15.2 | An efficient encoder-decoder architecture with top-down attention for speech separation | - |
| WHYV | 12.964 | An Alternative Approach in Voice Extraction | - |
| MossFormer (L) + DM | 17.3 | MossFormer: Pushing the Performance Limit of Monaural Speech Separation using Gated Single-Head Transformer with Convolution-Augmented Joint Self-Attentions | - |
| TDANet | 14.8 | An efficient encoder-decoder architecture with top-down attention for speech separation | - |
| MossFormer2 | 18.1 | MossFormer2: Combining Transformer and RNN-Free Recurrent Network for Enhanced Time-Domain Monaural Speech Separation | - |
| SepReformer-L + DM | 18.4 | Separate and Reconstruct: Asymmetric Encoder-Decoder for Speech Separation | - |
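For reference, SI-SDRi (scale-invariant signal-to-distortion ratio improvement) measures, in dB, how much closer a model's separated output is to the clean source than the raw mixture was. A minimal NumPy sketch of the standard definition follows; the function names and test signals are illustrative, not taken from any of the listed papers' code.

```python
import numpy as np

def si_sdr(estimate, reference):
    # Scale-invariant SDR: project the estimate onto the reference,
    # then compare the energy of that target component to the residual.
    alpha = np.dot(estimate, reference) / np.dot(reference, reference)
    target = alpha * reference
    residual = estimate - target
    return 10.0 * np.log10(np.sum(target**2) / np.sum(residual**2))

def si_sdri(estimate, reference, mixture):
    # Improvement: SI-SDR of the separated estimate minus the SI-SDR
    # obtained by treating the unprocessed mixture as the estimate.
    return si_sdr(estimate, reference) - si_sdr(mixture, reference)
```

Because of the projection step, the metric is invariant to rescaling the estimate, which is why leaderboards prefer it over plain SDR.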