Multimodal Emotion Recognition on IEMOCAP 4
Metrics: Accuracy, F1
Results
Performance results of the different models on this benchmark.
| Model Name | Accuracy | F1 | Paper Title | Repository |
|---|---|---|---|---|
| Self-attention weight correction (A+T) | 76.8 | 76.85 | Speech Emotion Recognition Based on Self-Attention Weight Correction for Acoustic and Text Features | - |
| Audio + Text (Stage III) | - | 70.5 | HCAM -- Hierarchical Cross Attention Model for Multi-modal Emotion Recognition | - |
| PATHOSnet v2 | 80.4 | 78 | Combining deep and unsupervised features for multilingual speech emotion recognition | - |
| MultiMAE-DER | - | - | MultiMAE-DER: Multimodal Masked Autoencoder for Dynamic Emotion Recognition | - |
| MMER | 81.7 | - | MMER: Multimodal Multi-task Learning for Speech Emotion Recognition | - |
| COGMEN | - | - | COGMEN: COntextualized GNN based Multimodal Emotion recognitioN | - |
| GraphSmile | 86.53 | - | Tracing Intricate Cues in Dialogue: Joint Graph Structure and Sentiment Dynamics for Multimodal Emotion Recognition | - |
| bc-LSTM | - | - | 0/1 Deep Neural Networks via Block Coordinate Descent | - |
| DANN | 82.7 | - | Context-Dependent Domain Adversarial Neural Network for Multimodal Emotion Recognition | - |
| CHFusion | 76.5 | 76.8 | Multimodal Sentiment Analysis using Hierarchical Fusion with Context Modeling | - |
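For reference, the two reported metrics (Accuracy and F1) over a 4-class emotion label set could be computed roughly as in the minimal sketch below. The example labels and the weighted F1 averaging are assumptions for illustration only; individual papers on this benchmark may report weighted/unweighted accuracy or a different F1 averaging.

```python
# Minimal sketch: computing the two leaderboard metrics with scikit-learn.
# The 4 emotion classes and the "weighted" F1 averaging are assumptions;
# papers on this benchmark may use different evaluation variants.
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical ground-truth and predicted labels over 4 emotion classes
# (e.g. 0 = angry, 1 = happy, 2 = sad, 3 = neutral).
y_true = [0, 1, 2, 3, 1, 2, 0, 3]
y_pred = [0, 1, 2, 2, 1, 2, 0, 1]

accuracy = accuracy_score(y_true, y_pred) * 100          # percentage, as in the table
f1 = f1_score(y_true, y_pred, average="weighted") * 100  # weighted F1, percentage

print(f"Accuracy: {accuracy:.2f}, F1: {f1:.2f}")
```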