
Accumulating Word Representations in Multi-level Context Integration for ERC Task

Nguyen Minh Le, Matheny Blake, Phuong Minh Nguyen, Jieying Xue
Abstract

Emotion Recognition in Conversations (ERC), which aims to predict the emotion label of each utterance given the conversation as context, has attracted increasing interest recently because of its broad applicability. To identify the emotion of a focal utterance, it is crucial to model its meaning fused with contextual information. Many recent studies have focused on capturing different types of context as supporting information and integrating it in various ways: local and global contexts, or speaker-level context through intra-speaker and inter-speaker integration. However, the importance of word representations after context integration has not been fully investigated, even though word-level information is also essential for reflecting the speaker's emotions in a conversation. In this work, we therefore investigate the impact of accumulating word vector representations in sentence modeling fused with multi-level context integration. To this end, we propose an effective sentence-modeling method for the ERC task and achieve competitive, state-of-the-art results on four widely recognized benchmark datasets: IEMOCAP, MELD, EmoryNLP, and DailyDialog. Our source code can be accessed via the following link: github.com/yingjie7/per_erc/tree/AccumWR.
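The abstract does not spell out the architecture, but the core idea of accumulating word-level representations on top of a context-integrated utterance vector can be illustrated with a minimal PyTorch-style sketch. All module names, dimensions, and the attention-pooling choice below are illustrative assumptions, not the authors' implementation.

```python
# Minimal, illustrative sketch (NOT the authors' implementation):
# accumulate word representations via attention pooling, then fuse the
# result with a multi-level context-integrated utterance vector.
import torch
import torch.nn as nn


class AccumWordUtteranceEncoder(nn.Module):
    def __init__(self, hidden_dim: int = 768, num_labels: int = 7):
        super().__init__()
        # Scores each word representation for attention pooling.
        self.word_scorer = nn.Linear(hidden_dim, 1)
        # Fuses the accumulated word vector with the context-integrated
        # utterance vector before emotion classification.
        self.fuse = nn.Linear(2 * hidden_dim, hidden_dim)
        self.classifier = nn.Linear(hidden_dim, num_labels)

    def forward(self, word_states: torch.Tensor, context_state: torch.Tensor):
        # word_states:   (batch, seq_len, hidden) word representations after
        #                context integration (e.g., from a Transformer encoder).
        # context_state: (batch, hidden) utterance vector fused with
        #                local/global and speaker-level context.
        scores = self.word_scorer(word_states)            # (batch, seq_len, 1)
        weights = torch.softmax(scores, dim=1)            # attention weights
        accum_words = (weights * word_states).sum(dim=1)  # (batch, hidden)
        fused = torch.tanh(
            self.fuse(torch.cat([accum_words, context_state], dim=-1))
        )
        return self.classifier(fused)                     # emotion logits


# Usage with random tensors standing in for encoder outputs.
encoder = AccumWordUtteranceEncoder()
logits = encoder(torch.randn(2, 20, 768), torch.randn(2, 768))
print(logits.shape)  # torch.Size([2, 7])
```

Attention pooling is only one plausible way to accumulate word vectors; mean or max pooling over the word states would be an equally simple baseline under the same interface.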
