Background-aware Moment Detection for Video Moment Retrieval

Video moment retrieval (VMR) identifies a specific moment in an untrimmed video for a given natural language query. The task is prone to the weak-alignment problem inherent in video datasets: because of ambiguity, a query may not fully cover the relevant details of the corresponding moment, or the moment may contain misaligned and irrelevant frames, which can limit further performance gains. To tackle this problem, we propose a background-aware moment detection transformer (BM-DETR). Our model adopts a contrastive approach that carefully utilizes the negative queries matched to other moments in the same video. Specifically, the model learns to predict the target moment from the joint probability of each frame given the positive query and the complement of the negative queries. This leads to effective use of the surrounding background, improving moment sensitivity and enhancing overall alignment in videos. Extensive experiments on four benchmarks demonstrate the effectiveness of our approach. Our code is available at: \url{https://github.com/minjoong507/BM-DETR}
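To make the core idea concrete, the sketch below illustrates one plausible way to combine frame-level probabilities for the positive query with the complements of the probabilities for negative queries. It is a minimal sketch under assumed conventions (sigmoid frame-level scores, an independent product over negative queries); the function name and interface are hypothetical and not taken from the released code.

```python
import torch

def joint_frame_probability(pos_logits, neg_logits_list):
    """Illustrative sketch (not the authors' exact formulation).

    pos_logits:      [T] frame-level logits for the positive query
    neg_logits_list: list of [T] frame-level logits, one per negative query
    returns:         [T] joint probability that each frame belongs to the target moment
    """
    # p(frame | positive query)
    joint = torch.sigmoid(pos_logits)
    for neg_logits in neg_logits_list:
        # multiply by the complement of p(frame | negative query)
        joint = joint * (1.0 - torch.sigmoid(neg_logits))
    return joint

# Usage: frames that score high only for the positive query keep a high joint score,
# while frames that also match a negative query (background) are suppressed.
T = 8
pos = torch.randn(T)
negs = [torch.randn(T), torch.randn(T)]
print(joint_frame_probability(pos, negs))
```

Under this reading, the surrounding background contributes a learning signal through the negative queries, which is consistent with the abstract's claim of improved moment sensitivity.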