
Collaborative Group: Composed Image Retrieval via Consensus Learning from Noisy Annotations

Zhang, Xu; Zheng, Zhedong; Zhu, Linchao; Yang, Yi
Abstract

Composed image retrieval extends content-based image retrieval systems by enabling users to search with a reference image and a caption that describes their intention. Despite great progress in developing image-text compositors that extract discriminative visual-linguistic features, we identify a hitherto overlooked issue, triplet ambiguity, which impedes robust feature extraction. Triplet ambiguity refers to a type of semantic ambiguity that arises between the reference image, the relative caption, and the target image. It is mainly due to the limited expressiveness of the annotated text, resulting in many noisy triplets where multiple visually dissimilar candidate images can be matched to an identical reference pair (i.e., a reference image + a relative caption). To address this challenge, we propose the Consensus Network (Css-Net), inspired by the psychological observation that groups outperform individuals. Css-Net comprises two core components: (1) a consensus module with four diverse compositors, each generating distinct image-text embeddings, fostering complementary feature extraction and mitigating dependence on any single, potentially biased compositor; (2) a Kullback-Leibler divergence loss that encourages the compositors to learn from one another and converge toward consensual outputs. During evaluation, the decisions of the four compositors are combined through a weighting scheme, enhancing overall agreement. On benchmark datasets, particularly FashionIQ, Css-Net demonstrates marked improvements. Notably, it achieves significant recall gains, with a 2.77% increase in R@10 and a 6.67% boost in R@50, underscoring its competitiveness in addressing the fundamental limitations of existing methods.
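
To make the consensus idea concrete, the PyTorch-style sketch below illustrates how a pairwise KL-divergence term could push several compositors' retrieval distributions toward agreement during training, and how their scores could be averaged with weights at evaluation time. The function names (`consensus_kl_loss`, `combined_scores`), the symmetric pairwise formulation, the temperature, the detaching of targets, and the uniform default weights are all illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def consensus_kl_loss(logits_list, temperature=1.0):
    """Pairwise KL consensus term over the retrieval scores of several
    compositors (a sketch; the paper's exact loss may differ).

    logits_list: list of [batch, num_candidates] similarity scores,
                 one tensor per compositor.
    """
    # Soft retrieval distributions for each compositor.
    probs = [F.softmax(l / temperature, dim=-1) for l in logits_list]
    log_probs = [F.log_softmax(l / temperature, dim=-1) for l in logits_list]

    loss, pairs = 0.0, 0
    for i in range(len(probs)):
        for j in range(len(probs)):
            if i == j:
                continue
            # Encourage compositor i to match compositor j's (detached) distribution.
            loss = loss + F.kl_div(log_probs[i], probs[j].detach(),
                                   reduction="batchmean")
            pairs += 1
    return loss / max(pairs, 1)


def combined_scores(logits_list, weights=None):
    """Weighted combination of compositor scores at evaluation time."""
    if weights is None:
        weights = [1.0 / len(logits_list)] * len(logits_list)
    return sum(w * l for w, l in zip(weights, logits_list))


# Usage sketch: four compositors scoring 100 candidates for a batch of 8 queries.
scores = [torch.randn(8, 100) for _ in range(4)]
print(consensus_kl_loss(scores).item())
print(combined_scores(scores).shape)  # torch.Size([8, 100])
```

In this sketch, each compositor is regularized toward the others' softened ranking distributions, which is one common way to realize mutual-learning-style consensus; the weighted score fusion mirrors the abstract's description of combining the four compositors' decisions at evaluation.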