Multi-granularity Correspondence Learning from Long-term Noisy Videos

Existing video-language studies mainly focus on learning short video clips, leaving long-term temporal dependencies rarely explored due to the prohibitively high computational cost of modeling long videos. One feasible solution is to learn the correspondence between video clips and captions, which, however, inevitably encounters the multi-granularity noisy correspondence (MNC) problem. Specifically, MNC refers to clip-caption misalignment (coarse-grained) and frame-word misalignment (fine-grained), both of which hinder temporal learning and video understanding. In this paper, we propose NOise Robust Temporal Optimal traNsport (Norton), which addresses MNC in a unified optimal transport (OT) framework. In brief, Norton employs video-paragraph and clip-caption contrastive losses to capture long-term dependencies based on OT. To address coarse-grained misalignment in the video-paragraph contrast, Norton filters out irrelevant clips and captions through an alignable prompt bucket and realigns asynchronous clip-caption pairs based on the transport distance. To address fine-grained misalignment, Norton incorporates a soft-maximum operator to identify crucial words and key frames. Additionally, Norton handles potential faulty negative samples in the clip-caption contrast by rectifying the alignment target with the OT assignment to ensure precise temporal modeling. Extensive experiments on video retrieval, video question answering, and action segmentation verify the effectiveness of our method. The code is available at https://lin-yijie.github.io/projects/Norton.
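The following is a minimal, illustrative sketch (not the paper's implementation) of the two ingredients the abstract names: a log-sum-exp "soft-maximum" that aggregates fine-grained frame-word similarities into a clip-caption score, and an entropic optimal-transport (Sinkhorn) step that produces a soft realignment between clips and captions. All function names, hyperparameters, and the toy data below are assumptions for exposition only.

```python
import numpy as np

def l2norm(x):
    """Row-wise L2 normalization so dot products behave like cosine similarities."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def soft_max_sim(frame_feats, word_feats, alpha=0.1):
    """Aggregate frame-word similarities into one clip-caption score via a
    temperature-scaled log-sum-exp, a smooth stand-in for the hard maximum."""
    sim = l2norm(frame_feats) @ l2norm(word_feats).T                   # (frames, words)
    frame_to_word = alpha * np.log(np.exp(sim / alpha).mean(axis=1))   # soft-max over words
    word_to_frame = alpha * np.log(np.exp(sim / alpha).mean(axis=0))   # soft-max over frames
    return 0.5 * (frame_to_word.mean() + word_to_frame.mean())

def sinkhorn(cost, eps=0.1, n_iters=100):
    """Entropic optimal transport with uniform marginals; returns a soft plan
    that realigns clips (rows) with captions (columns)."""
    n, m = cost.shape
    K = np.exp(-cost / eps)
    r, c = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):
        u = r / (K @ v)
        v = c / (K.T @ u)
    return u[:, None] * K * v[None, :]

# Toy example: 4 clips (3 frames each) and 4 captions (5 words each), dim 8.
rng = np.random.default_rng(0)
clips = [rng.normal(size=(3, 8)) for _ in range(4)]
captions = [rng.normal(size=(5, 8)) for _ in range(4)]
sim = np.array([[soft_max_sim(v, t) for t in captions] for v in clips])
plan = sinkhorn(1.0 - sim)        # lower cost = higher similarity
print(plan.round(3))              # soft clip-caption correspondence matrix
```

In this reading, the alignable prompt bucket described above could be emulated by appending an extra row and column to the cost matrix that absorbs clips or captions with no good counterpart, though the exact construction is specific to the paper and not reproduced here.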