Multimodal Emotion Recognition On Iemocap 4

Evaluation Metrics

Accuracy
F1
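
The leaderboard lists only the metric names, so as a point of reference the sketch below shows one common way to compute Accuracy and F1 for a 4-class IEMOCAP (angry / happy / sad / neutral) evaluation with scikit-learn. The label set, the example predictions, and the weighted F1 averaging are assumptions for illustration; the page does not state which accuracy (weighted vs. unweighted) or which F1 averaging variant the entries use.

```python
# Minimal sketch: computing the two leaderboard metrics for 4-class IEMOCAP.
# Assumption: "Accuracy" is plain utterance-level accuracy and "F1" is
# support-weighted F1; the leaderboard does not specify the exact variants.
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical ground-truth labels and predictions from a multimodal model.
labels = ["angry", "happy", "sad", "neutral", "neutral", "sad"]
preds  = ["angry", "neutral", "sad", "neutral", "happy", "sad"]

accuracy = accuracy_score(labels, preds)          # fraction of correctly classified utterances
f1 = f1_score(labels, preds, average="weighted")  # per-class F1, averaged by class support

print(f"Accuracy: {accuracy:.4f}")
print(f"F1: {f1:.4f}")
```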

Evaluation Results

Performance of each model on this benchmark

| Model Name | Accuracy | F1 | Paper Title | Repository |
|---|---|---|---|---|
| Self-attention weight correction (A+T) | 76.8 | 76.85 | Speech Emotion Recognition Based on Self-Attention Weight Correction for Acoustic and Text Features | - |
| Audio + Text (Stage III) | - | 70.5 | HCAM -- Hierarchical Cross Attention Model for Multi-modal Emotion Recognition | - |
| PATHOSnet v2 | 80.4 | 78 | Combining deep and unsupervised features for multilingual speech emotion recognition | |
| MultiMAE-DER | - | - | MultiMAE-DER: Multimodal Masked Autoencoder for Dynamic Emotion Recognition | |
| MMER | 81.7 | - | MMER: Multimodal Multi-task Learning for Speech Emotion Recognition | |
| COGMEN | - | - | COGMEN: COntextualized GNN based Multimodal Emotion recognitioN | |
| GraphSmile | 86.53 | - | Tracing Intricate Cues in Dialogue: Joint Graph Structure and Sentiment Dynamics for Multimodal Emotion Recognition | |
| bc-LSTM | - | - | 0/1 Deep Neural Networks via Block Coordinate Descent | - |
| DANN | 82.7 | - | Context-Dependent Domain Adversarial Neural Network for Multimodal Emotion Recognition | - |
| CHFusion | 76.5 | 76.8 | Multimodal Sentiment Analysis using Hierarchical Fusion with Context Modeling | |