CompoDiff: Versatile Composed Image Retrieval With Latent Diffusion

This paper proposes a novel diffusion-based model, CompoDiff, for solving zero-shot Composed Image Retrieval (ZS-CIR) with latent diffusion. It also introduces a new synthetic dataset, named SynthTriplets18M, of 18.8 million triplets of reference images, conditions, and corresponding target images for training CIR models. CompoDiff and SynthTriplets18M address the shortcomings of previous CIR approaches, such as poor generalizability due to small dataset scale and the limited types of supported conditions. CompoDiff not only achieves a new state of the art on four ZS-CIR benchmarks, including FashionIQ, CIRR, CIRCO, and GeneCIS, but also enables more versatile and controllable CIR by accepting various conditions, such as negative text and image mask conditions. CompoDiff further allows controlling the condition strength between text and image queries, as well as the trade-off between inference speed and performance, neither of which is possible with existing CIR methods. The code and dataset are available at https://github.com/navervision/CompoDiff