Multimodal Emotion Recognition on IEMOCAP-4
Metrics: Accuracy (%), F1 (%)
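For reference, both metrics are standard classification scores computed over the four IEMOCAP emotion classes. The sketch below shows one plausible way to compute them with scikit-learn; the label set (angry / happy / neutral / sad) and the choice of weighted F1 are assumptions, since individual papers on this benchmark differ on weighted vs. unweighted accuracy and macro vs. weighted F1.

```python
# Minimal sketch of how the leaderboard metrics are typically computed
# for 4-class IEMOCAP. The label set and the use of *weighted* F1 are
# assumptions -- papers vary in which variant they report.
from sklearn.metrics import accuracy_score, f1_score

LABELS = ["angry", "happy", "neutral", "sad"]  # assumed 4-class setup

def leaderboard_metrics(y_true, y_pred):
    """Return (Accuracy %, F1 %) on the same scale as the table below."""
    acc = accuracy_score(y_true, y_pred) * 100.0
    f1 = f1_score(y_true, y_pred, labels=LABELS, average="weighted") * 100.0
    return round(acc, 2), round(f1, 2)

# Toy example with dummy utterance-level predictions:
y_true = ["angry", "happy", "sad", "neutral", "sad"]
y_pred = ["angry", "neutral", "sad", "neutral", "happy"]
print(leaderboard_metrics(y_true, y_pred))  # -> (60.0, 60.0)
```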
Results
Performance results of various models on this benchmark
| Model Name | Accuracy (%) | F1 (%) | Paper Title | Repository |
| --- | --- | --- | --- | --- |
| Self-attention weight correction (A+T) | 76.8 | 76.85 | Speech Emotion Recognition Based on Self-Attention Weight Correction for Acoustic and Text Features | - |
| Audio + Text (Stage III) | - | 70.5 | HCAM -- Hierarchical Cross Attention Model for Multi-modal Emotion Recognition | - |
| PATHOSnet v2 | 80.4 | 78 | Combining deep and unsupervised features for multilingual speech emotion recognition | |
| MultiMAE-DER | - | - | MultiMAE-DER: Multimodal Masked Autoencoder for Dynamic Emotion Recognition | |
| MMER | 81.7 | - | MMER: Multimodal Multi-task Learning for Speech Emotion Recognition | |
| COGMEN | - | - | COGMEN: COntextualized GNN based Multimodal Emotion recognitioN | |
| GraphSmile | 86.53 | - | Tracing Intricate Cues in Dialogue: Joint Graph Structure and Sentiment Dynamics for Multimodal Emotion Recognition | |
| bc-LSTM | - | - | 0/1 Deep Neural Networks via Block Coordinate Descent | - |
| DANN | 82.7 | - | Context-Dependent Domain Adversarial Neural Network for Multimodal Emotion Recognition | - |
| CHFusion | 76.5 | 76.8 | Multimodal Sentiment Analysis using Hierarchical Fusion with Context Modeling | - |