Sentence-level Prompts Benefit Composed Image Retrieval

Composed image retrieval (CIR) is the task of retrieving specific images using a query that involves both a reference image and a relative caption. Most existing CIR models adopt a late-fusion strategy to combine visual and language features. In addition, several approaches have been suggested to generate a pseudo-word token from the reference image, which is then integrated into the relative caption for CIR. However, these pseudo-word-based prompting methods have limitations when the target image involves complex changes to the reference image, e.g., object removal or attribute modification. In this work, we demonstrate that learning an appropriate sentence-level prompt for the relative caption (SPRC) is sufficient for achieving effective composed image retrieval. Instead of relying on pseudo-word-based prompts, we propose to leverage pretrained vision-language (V-L) models, e.g., BLIP-2, to generate sentence-level prompts. By concatenating the learned sentence-level prompt with the relative caption, one can readily use existing text-based image retrieval models to enhance CIR performance. Furthermore, we introduce both an image-text contrastive loss and a text prompt alignment loss to enforce the learning of suitable sentence-level prompts. Experiments show that our proposed method performs favorably against state-of-the-art CIR methods on the Fashion-IQ and CIRR datasets. The source code and pretrained model are publicly available at https://github.com/chunmeifeng/SPRC
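
To make the pipeline described above concrete, the following is a minimal PyTorch sketch of the core idea: a sentence-level prompt is generated from the reference image, prepended to the relative-caption embeddings, encoded with an ordinary text encoder, and trained with an image-text contrastive loss. This is not the authors' released code; all module names, dimensions, and encoder stand-ins are illustrative assumptions (the actual implementation, including the text prompt alignment loss omitted here, is at https://github.com/chunmeifeng/SPRC).

```python
# Illustrative sketch of sentence-level prompting for CIR; placeholder
# encoders stand in for frozen BLIP-2 components.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SentencePromptCIR(nn.Module):
    def __init__(self, dim=256, prompt_len=8, vocab=1000):
        super().__init__()
        self.img_enc = nn.Linear(512, dim)        # stand-in for a frozen V-L image encoder
        self.txt_emb = nn.Embedding(vocab, dim)   # stand-in for the text encoder's embeddings
        self.prompt_gen = nn.Linear(dim, prompt_len * dim)  # image feature -> sentence-level prompt
        self.txt_enc = nn.GRU(dim, dim, batch_first=True)   # stand-in for the text encoder
        self.prompt_len, self.dim = prompt_len, dim

    def encode_query(self, ref_img_feat, caption_ids):
        # 1) Generate a sentence-level prompt from the reference image.
        prompt = self.prompt_gen(self.img_enc(ref_img_feat))
        prompt = prompt.view(-1, self.prompt_len, self.dim)
        # 2) Prepend the prompt to the relative-caption token embeddings,
        #    so a standard text encoder processes the composed query.
        cap = self.txt_emb(caption_ids)
        _, h = self.txt_enc(torch.cat([prompt, cap], dim=1))
        return F.normalize(h[-1], dim=-1)

def contrastive_loss(query, target, tau=0.07):
    # Image-text contrastive (InfoNCE) loss: the i-th composed query
    # should match the i-th target-image feature within the batch.
    logits = query @ target.t() / tau
    labels = torch.arange(query.size(0))
    return F.cross_entropy(logits, labels)

model = SentencePromptCIR()
ref = torch.randn(4, 512)                        # reference-image features
cap = torch.randint(0, 1000, (4, 12))            # relative-caption token ids
tgt = F.normalize(torch.randn(4, 256), dim=-1)   # target-image features
loss = contrastive_loss(model.encode_query(ref, cap), tgt)
print(loss.item())
```

Because the composed query is just a longer token sequence, any existing text-based retrieval encoder can consume it unchanged, which is the practical benefit the abstract highlights over pseudo-word-token approaches.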