Find First, Track Next: Decoupling Identification and Propagation in Referring Video Object Segmentation

Referring video object segmentation aims to segment and track a target object in a video using a natural language prompt. Existing methods typically fuse visual and textual features in a highly entangled manner, processing multi-modal information together to generate per-frame masks. However, this approach often struggles with ambiguous target identification, particularly in scenes with multiple similar objects, and fails to ensure consistent mask propagation across frames. To address these limitations, we introduce FindTrack, a novel decoupled framework that separates target identification from mask propagation. FindTrack first adaptively selects a key frame by balancing segmentation confidence and vision-text alignment, establishing a robust reference for the target object. This reference is then used by a dedicated propagation module to track and segment the object across the entire video. By decoupling these processes, FindTrack effectively reduces ambiguity in target association and improves segmentation consistency. We demonstrate that FindTrack outperforms existing methods on public benchmarks.
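The key-frame selection step can be illustrated with a minimal sketch. The per-frame scores, the function name `select_key_frame`, and the trade-off weight `alpha` are illustrative assumptions; the abstract only states that segmentation confidence and vision-text alignment are balanced, without specifying the combination rule.

```python
# Hypothetical sketch of FindTrack-style adaptive key-frame selection.
# The linear combination and the weight alpha are assumptions for
# illustration, not the paper's stated formulation.

def select_key_frame(seg_confidence, vt_alignment, alpha=0.5):
    """Return the index of the frame with the highest combined score.

    seg_confidence: per-frame mask confidence scores in [0, 1]
    vt_alignment:   per-frame vision-text alignment scores in [0, 1]
    alpha:          trade-off between the two terms (assumed)
    """
    scores = [alpha * c + (1 - alpha) * a
              for c, a in zip(seg_confidence, vt_alignment)]
    return max(range(len(scores)), key=scores.__getitem__)

# Example: under equal weighting, the third frame scores highest.
idx = select_key_frame([0.7, 0.6, 0.9], [0.5, 0.8, 0.85])  # -> 2
```

The selected frame's mask would then serve as the reference that the propagation module tracks through the remaining frames.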