Multimodal Emotion Recognition on IEMOCAP-4

Metrics: Accuracy, F1

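The leaderboard reports Accuracy and F1 without stating the averaging convention. Below is a minimal sketch of how both metrics can be computed with scikit-learn; the weighted F1 averaging and the four-class label mapping (angry, happy, sad, neutral, a common IEMOCAP 4-class setup) are assumptions, not something this page specifies.

```python
# Minimal sketch: computing the two leaderboard metrics with scikit-learn.
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical 4-class predictions; the label mapping
# (0=angry, 1=happy, 2=sad, 3=neutral) is an assumed convention.
y_true = [0, 1, 2, 3, 1, 2, 0, 3]
y_pred = [0, 1, 2, 2, 1, 2, 0, 3]

acc = accuracy_score(y_true, y_pred)  # fraction of correct predictions
# average="weighted" is an assumption; the page does not say whether
# F1 is macro- or support-weighted across the four classes.
f1 = f1_score(y_true, y_pred, average="weighted")

print(f"Accuracy: {100 * acc:.2f}")  # leaderboard values appear to be percentages
print(f"F1:       {100 * f1:.2f}")
```
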
Results
Performance results of various models on this benchmark:

| Model | Accuracy | F1 | Paper | Repository |
|-------|----------|----|-------|------------|
| Self-attention weight correction (A+T) | 76.8 | 76.85 | Speech Emotion Recognition Based on Self-Attention Weight Correction for Acoustic and Text Features | - |
| Audio + Text (Stage III) | - | 70.5 | HCAM -- Hierarchical Cross Attention Model for Multi-modal Emotion Recognition | - |
| PATHOSnet v2 | 80.4 | 78 | Combining deep and unsupervised features for multilingual speech emotion recognition | |
| MultiMAE-DER | - | - | MultiMAE-DER: Multimodal Masked Autoencoder for Dynamic Emotion Recognition | |
| MMER | 81.7 | - | MMER: Multimodal Multi-task Learning for Speech Emotion Recognition | |
| COGMEN | - | - | COGMEN: COntextualized GNN based Multimodal Emotion recognitioN | |
| GraphSmile | 86.53 | - | Tracing Intricate Cues in Dialogue: Joint Graph Structure and Sentiment Dynamics for Multimodal Emotion Recognition | |
| bc-LSTM | - | - | 0/1 Deep Neural Networks via Block Coordinate Descent | - |
| DANN | 82.7 | - | Context-Dependent Domain Adversarial Neural Network for Multimodal Emotion Recognition | - |
| CHFusion | 76.5 | 76.8 | Multimodal Sentiment Analysis using Hierarchical Fusion with Context Modeling | |