HyperAI
Multimodal Sentiment Analysis
Multimodal Sentiment Analysis on CMU-MOSEI 1
Metrics
Accuracy
Results
Performance results of various models on this benchmark
| Model Name | Accuracy | Paper Title | Repository |
|---|---|---|---|
| MARLIN (ViT-B) | 73.7 | MARLIN: Masked Autoencoder for facial video Representation LearnINg | |
| Proposed: B2 + B4 w/ multimodal fusion | 81.14 | Gated Mechanism for Attention Based Multimodal Sentiment Analysis | - |
| Multilogue-Net | 82.10 | Multilogue-Net: A Context Aware RNN for Multi-modal Emotion Detection and Sentiment Analysis in Conversation | |
| MARLIN (ViT-S) | 72.69 | MARLIN: Masked Autoencoder for facial video Representation LearnINg | |
| MMML | 88.22 | Multimodal Multi-loss Fusion Network for Sentiment Analysis | |
| SeMUL-PCD | 88.62 | Multi-label Emotion Analysis in Conversation via Multimodal Knowledge Distillation | - |
| Transformer-based joint-encoding | 82.48 | A Transformer-based joint-encoding for Emotion Recognition and Sentiment Analysis | |
| ALMT | - | Learning Language-guided Adaptive Hyper-modality Representation for Multimodal Sentiment Analysis | |
| Graph-MFN | 76.9 | Multimodal Language Analysis in the Wild: CMU-MOSEI Dataset and Interpretable Dynamic Fusion Graph | - |
| UniMSE | 87.50 | UniMSE: Towards Unified Multimodal Sentiment Analysis and Emotion Recognition | |
| SPECTRA | 87.34 | Speech-Text Dialog Pre-training for Spoken Dialog Understanding with Explicit Cross-Modal Alignment | |
| MMLatch | 82.4 | MMLatch: Bottom-up Top-down Fusion for Multimodal Sentiment Analysis | |
| MARLIN (ViT-L) | 74.83 | MARLIN: Masked Autoencoder for facial video Representation LearnINg | |
| Modulated-fusion transformer | 82.45 | Modulated Fusion using Transformer for Linguistic-Acoustic Emotion Recognition | |
| CAE-LR | 78 | Unsupervised Multimodal Language Representations using Convolutional Autoencoders | - |