Context-Dependent Sentiment Analysis in User-Generated Videos
Louis-Philippe Morency, Amir Zadeh, Soujanya Poria, Navonil Majumder, Erik Cambria, Devamanyu Hazarika

Abstract
Multimodal sentiment analysis is a developing area of research that involves identifying sentiments in videos. Current research treats utterances as independent entities, ignoring the interdependencies and relations among the utterances of a video. In this paper, we propose an LSTM-based model that enables utterances to capture contextual information from their surroundings in the same video, thus aiding the classification process. Our method shows a 5-10% performance improvement over the state of the art and high generalizability.
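
The sketch below illustrates one way a context-dependent utterance model of this kind could be realized: a bidirectional LSTM runs over the sequence of utterance feature vectors within a single video, so each utterance's representation is informed by its neighbors before per-utterance classification. This is a minimal, assumption-laden sketch rather than the authors' implementation; the class name, dimensions, and feature extraction are illustrative.

```python
# Minimal sketch (not the authors' code) of a contextual LSTM over utterance
# features within one video. Assumes each utterance is already encoded as a
# fixed-size (e.g., multimodal) feature vector; all names/dims are illustrative.
import torch
import torch.nn as nn


class ContextualUtteranceLSTM(nn.Module):
    def __init__(self, feature_dim=100, hidden_dim=64, num_classes=2):
        super().__init__()
        # A bidirectional LSTM lets each utterance draw context from the
        # utterances that precede and follow it in the same video.
        self.lstm = nn.LSTM(feature_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, utterance_features):
        # utterance_features: (batch, num_utterances, feature_dim)
        context, _ = self.lstm(utterance_features)  # (batch, T, 2*hidden_dim)
        return self.classifier(context)             # per-utterance logits


# Example: one video with 5 utterances, each a 100-d feature vector.
model = ContextualUtteranceLSTM()
video = torch.randn(1, 5, 100)
logits = model(video)   # shape: (1, 5, 2) -- one sentiment prediction per utterance
```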