MemorySAM: Memorize Modalities and Semantics with Segment Anything Model 2 for Multi-modal Semantic Segmentation

Research has focused on Multi-Modal Semantic Segmentation (MMSS), where pixel-wise predictions are derived from multiple visual modalities captured by diverse sensors. Recently, the large vision model Segment Anything Model 2 (SAM2) has shown strong zero-shot segmentation performance on both images and videos. When extending SAM2 to MMSS, two issues arise: 1. How can SAM2 be adapted to multi-modal data? 2. How can SAM2 better understand semantics? Inspired by cross-frame correlation in videos, we propose to treat multi-modal data as a sequence of frames representing the same scene. Our key idea is to "memorize" both the modality-agnostic information and the semantics related to the target scene. To achieve this, we apply SAM2's memory mechanisms across multi-modal data to capture modality-agnostic features. Meanwhile, to memorize the semantic knowledge, we propose a training-only Semantic Prototype Memory Module (SPMM) that stores category-level prototypes across training, facilitating SAM2's transition from instance to semantic segmentation. A prototypical adaptation loss is imposed between global and local prototypes iteratively to align and refine SAM2's semantic understanding. Extensive experimental results demonstrate that our proposed MemorySAM outperforms SoTA methods by large margins on both synthetic and real-world benchmarks (65.38% on DELIVER, 52.88% on MCubeS). Source code will be made publicly available.
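
To make the prototype-memory idea more concrete, the minimal PyTorch sketch below shows one plausible way to maintain category-level prototypes across training and align them with batch-level prototypes. All identifiers (SemanticPrototypeMemory, local_prototypes, prototypical_adaptation_loss), the EMA update rule, and the cosine-distance loss are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch of a training-only prototype memory with an alignment loss.
# The class/function names, the EMA momentum update, and the cosine-distance
# objective are assumptions for exposition; they are not the authors' code.
import torch
import torch.nn.functional as F


class SemanticPrototypeMemory:
    """Stores one global prototype per category, updated across training iterations."""

    def __init__(self, num_classes: int, feat_dim: int, momentum: float = 0.99):
        self.momentum = momentum
        self.prototypes = torch.zeros(num_classes, feat_dim)
        self.initialized = torch.zeros(num_classes, dtype=torch.bool)

    @torch.no_grad()
    def update(self, local_protos: torch.Tensor, valid: torch.Tensor) -> None:
        """EMA-update global prototypes with the current batch's local prototypes."""
        local_protos = local_protos.detach().to(self.prototypes.device)
        m = self.momentum
        for c in torch.nonzero(valid).flatten().tolist():
            if self.initialized[c]:
                self.prototypes[c] = m * self.prototypes[c] + (1 - m) * local_protos[c]
            else:
                self.prototypes[c] = local_protos[c]
                self.initialized[c] = True


def local_prototypes(feats: torch.Tensor, labels: torch.Tensor, num_classes: int):
    """Masked-average dense features per category present in the batch.

    feats:  (B, D, H, W) pixel features; labels: (B, H, W) ground-truth classes.
    Returns (num_classes, D) local prototypes and a (num_classes,) validity mask.
    """
    B, D, H, W = feats.shape
    flat_feats = feats.permute(0, 2, 3, 1).reshape(-1, D)
    flat_labels = labels.reshape(-1)
    protos = torch.zeros(num_classes, D, device=feats.device)
    valid = torch.zeros(num_classes, dtype=torch.bool, device=feats.device)
    for c in range(num_classes):
        mask = flat_labels == c
        if mask.any():
            protos[c] = flat_feats[mask].mean(dim=0)
            valid[c] = True
    return protos, valid


def prototypical_adaptation_loss(local_protos, global_protos, valid):
    """Cosine-distance alignment between local prototypes and stored global ones."""
    if not valid.any():
        return local_protos.new_zeros(())
    g = global_protos.to(local_protos.device)[valid].detach()
    l = local_protos[valid]
    return (1.0 - F.cosine_similarity(l, g, dim=-1)).mean()


# Toy usage: 4 categories, 16-dim features, random dense features and labels.
memory = SemanticPrototypeMemory(num_classes=4, feat_dim=16)
feats = torch.randn(2, 16, 32, 32)
labels = torch.randint(0, 4, (2, 32, 32))
local_p, valid = local_prototypes(feats, labels, num_classes=4)
memory.update(local_p, valid)
loss = prototypical_adaptation_loss(local_p, memory.prototypes, valid)
```

In this reading, the memory is "training-only" in the sense that the global prototypes and the alignment loss are used to shape the features during optimization and are not needed at inference time.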