
Pic2Word: Mapping Pictures to Words for Zero-shot Composed Image Retrieval

Kuniaki Saito, Kihyuk Sohn, Xiang Zhang, Chun-Liang Li, Chen-Yu Lee, Kate Saenko, Tomas Pfister
Abstract

In Composed Image Retrieval (CIR), a user combines a query image with text to describe their intended target. Existing methods rely on supervised learning of CIR models using labeled triplets consisting of the query image, text specification, and the target image. Labeling such triplets is expensive and hinders the broad applicability of CIR. In this work, we propose to study an important task, Zero-Shot Composed Image Retrieval (ZS-CIR), whose goal is to build a CIR model without requiring labeled triplets for training. To this end, we propose a novel method, called Pic2Word, that requires only weakly labeled image-caption pairs and unlabeled image datasets to train. Unlike existing supervised CIR models, our model trained on weakly labeled or unlabeled datasets shows strong generalization across diverse ZS-CIR tasks, e.g., attribute editing, object composition, and domain conversion. Our approach outperforms several supervised CIR methods on the common CIR benchmarks, CIRR and Fashion-IQ. Code will be made publicly available at https://github.com/google-research/composed_image_retrieval.
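The core idea, mapping a query image to a pseudo-word token that is then composed with the text modifier to rank a gallery, can be sketched roughly as follows. This is a toy illustration with random data and made-up shapes, not the paper's implementation: the real Pic2Word uses CLIP's image and text encoders and a learned mapping network, whereas here `mapping_network` is a single hypothetical layer and `compose_query` is a simple stand-in for encoding the prompt "a photo of [S] that ...".

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # embedding dimension (assumed for the sketch)

def mapping_network(image_emb, W, b):
    # Maps an image embedding to a pseudo-word token embedding
    # (stands in for the network Pic2Word trains on image-caption pairs).
    return np.tanh(image_emb @ W + b)

def compose_query(token_emb, text_emb):
    # Stand-in for the text encoder consuming the composed prompt:
    # here we simply sum the token and text embeddings and normalize.
    q = token_emb + text_emb
    return q / np.linalg.norm(q)

# Toy gallery of 5 candidate image embeddings, unit-normalized.
gallery = rng.normal(size=(5, D))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

# Query image embedding and text-modifier embedding (random stand-ins).
img_emb = rng.normal(size=D)
txt_emb = rng.normal(size=D)

W = rng.normal(size=(D, D)) * 0.1  # untrained mapping weights
b = np.zeros(D)

query = compose_query(mapping_network(img_emb, W, b), txt_emb)
scores = gallery @ query          # cosine similarity (all unit vectors)
ranking = np.argsort(-scores)     # candidates, best match first
print("retrieval order:", ranking)
```

Retrieval then returns the gallery images in `ranking` order; because no triplet labels are used anywhere in this pipeline, the composition itself is what makes the approach zero-shot.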