Quality-Aware Image-Text Alignment for Opinion-Unaware Image Quality Assessment

Lorenzo Agnolucci, Leonardo Galteri, Marco Bertini
Abstract

No-Reference Image Quality Assessment (NR-IQA) focuses on designing methods to measure image quality in alignment with human perception when a high-quality reference image is unavailable. Most state-of-the-art NR-IQA approaches are opinion-aware, i.e. they require human annotations for training. This dependency limits their scalability and broad applicability. To overcome this limitation, we propose QualiCLIP (Quality-aware CLIP), a CLIP-based self-supervised opinion-unaware approach that does not require human opinions. In particular, we introduce a quality-aware image-text alignment strategy to make CLIP generate quality-aware image representations. Starting from pristine images, we synthetically degrade them with increasing levels of intensity. Then, we train CLIP to rank these degraded images based on their similarity to quality-related antonym text prompts. At the same time, we force CLIP to generate consistent representations for images with similar content and the same level of degradation. Our experiments show that the proposed method improves over existing opinion-unaware approaches across multiple datasets with diverse distortion types. Moreover, despite not requiring human annotations, QualiCLIP achieves excellent performance against supervised opinion-aware methods in cross-dataset experiments, thus demonstrating remarkable generalization capabilities. The code and the model are publicly available at https://github.com/miccunifi/QualiCLIP.
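
The following is a minimal sketch, not the authors' released code, of the ranking objective described in the abstract: images degraded with increasing intensity should receive monotonically decreasing similarity to a "good quality" prompt relative to a "bad quality" antonym prompt. The prompt wording, the margin value, and the batching convention are illustrative assumptions; refer to the official repository for the actual implementation.

```python
# Hedged sketch of the quality-aware ranking idea (assumptions noted inline).
import torch
import torch.nn.functional as F
import clip  # OpenAI CLIP package

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Quality-related antonym prompts (illustrative wording, not the paper's exact prompts).
prompts = clip.tokenize(["Good photo.", "Bad photo."]).to(device)

def quality_scores(images: torch.Tensor) -> torch.Tensor:
    """Probability of the 'good' prompt for a batch of preprocessed images."""
    image_feat = F.normalize(model.encode_image(images), dim=-1)
    text_feat = F.normalize(model.encode_text(prompts), dim=-1)
    logits = 100.0 * image_feat @ text_feat.T   # (B, 2) similarities
    return logits.softmax(dim=-1)[:, 0]         # P("good")

def ranking_loss(batches_by_level: list[torch.Tensor], margin: float = 0.05) -> torch.Tensor:
    """Encourage scores to decrease as degradation intensity increases.

    batches_by_level[i] holds the same images degraded at intensity level i,
    with level 0 the least degraded (assumed ordering).
    """
    scores = [quality_scores(b) for b in batches_by_level]
    loss = torch.zeros((), device=device)
    for weaker, stronger in zip(scores[:-1], scores[1:]):
        # Margin ranking: the less degraded batch should score higher.
        loss = loss + F.relu(margin - (weaker - stronger)).mean()
    return loss
```

A consistency term, not shown here, would additionally pull together the representations of images that share similar content and the same degradation level, as stated in the abstract.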